AECOM is looking for a Senior Data Engineer to design, deliver, and optimize data platforms and solutions across a wide range of projects. You will lead major components of the data solution lifecycle, mentor junior engineers, and ensure robust, scalable, secure, and value-driven data architecture.
What You'll Do
- Lead solutions from concept through the full development lifecycle, with a focus on principles such as optimization and scalability.
- Oversee end-to-end data processes including ingestion, transformation, modelling, and integration across multiple external-facing projects.
- Facilitate technical workshops and requirements gathering sessions with stakeholders across the organization and external clients.
- Collaborate with cross-functional data teams to translate client strategic and business requirements into technical specifications.
- Work closely with Data Analysts and Data Scientists to support analytical projects, including feature engineering and big data analysis activities.
- Collaborate with project managers, architects, and technical teams to ensure seamless integration of data solutions within wider digital ecosystems.
- Promote and lead the adoption of data engineering best practices, including code quality, testing, CI/CD, and documentation standards.
- Lead the implementation of data governance controls, including metadata management, access controls, data lineage, PII protection, and compliance with organizational and regulatory requirements.
- Develop monitoring and alerting strategies for data solutions, maintaining high availability, performance, and reliability.
- Troubleshoot complex issues across infrastructure, data solutions, and custom analytical products.
- Lead prototyping and proof-of-concept efforts to evaluate emerging technologies and their value within AECOM’s data ecosystem.
- Support the operationalization and deployment of predictive models and analytics solutions.
- Continuously explore new cloud capabilities, data platforms, and modern data stack tools to drive innovation within the team.
- Provide technical guidance, code reviews, and coaching to junior and mid-level data engineers.
- Champion knowledge-sharing, standardization, and collaborative team practices.
What We're Looking For
- Bachelor’s degree in Computer Science, Engineering, Mathematics, or a related field (or equivalent professional experience).
- Professional experience designing and delivering cloud-based data engineering solutions at scale.
- Advanced proficiency in at least one programming language commonly used in data engineering (Python preferred; Scala, Java, or C# also beneficial).
- Strong SQL skills and deep understanding of relational databases, non-relational stores, and data warehouse principles.
- Solid experience with data modelling methodologies (dimensional modelling, star/snowflake schemas, data vault, etc.).
- Strong grounding in analytical workflows and support for data-science activities (feature engineering, data preparation, exploratory analysis).
- Experience designing and operating ETL/ELT pipelines and modern workflow orchestration tools (e.g., Apache Airflow, Azure Data Factory, Azure Functions).
- Practical experience with CI/CD, version control (Git), testing frameworks, and DevOps practices.
- Understanding of APIs, REST principles, and data integration patterns.
- Experience implementing data quality, validation, and observability frameworks.
- The selected candidate must be able to obtain security clearance.
Nice to Have
- Master’s degree in Computer Science, Engineering, Mathematics, or related discipline.
- Professional certifications in cloud platforms (AWS, Azure, or GCP).
- Experience supporting or operationalizing machine-learning models (e.g., model deployments, monitoring, ML pipelines).
- Exposure to advanced analytics frameworks (e.g., scikit-learn, MLflow, Databricks Runtime).
- Proficiency in containerization and IaC (Docker, Kubernetes, Terraform, Bicep).
- Strong expertise in at least one major cloud platform (Azure preferred).
- Hands-on experience with cloud-native data services such as Databricks, Synapse Analytics, BigQuery, Redshift, or Snowflake.
- Experience with distributed processing frameworks such as Apache Spark, Kafka, or Flink.
- Familiarity with data visualization and BI requirements to support downstream consumers.
Technical Stack
- Languages: Python, Scala, Java, C#, SQL
- Orchestration: Apache Airflow, Azure Data Factory, Azure Functions
- Platforms & Cloud: AWS, Azure, GCP
- ML & Analytics: scikit-learn, MLflow, Databricks Runtime
- Infrastructure: Docker, Kubernetes, Terraform, Bicep
- Data Services: Databricks, Synapse Analytics, BigQuery, Redshift, Snowflake
- Processing: Apache Spark, Kafka, Flink
- Tools: Git
Team & Environment
You will work closely with Data Analysts, Data Scientists, and cross-functional digital teams.
Benefits & Compensation
- Hybrid work options
- Flexibility to work from an AECOM office, a remote location, or a client site
Work Mode
This position offers a hybrid work model.
As an Equal Opportunity Employer, we believe in each person’s potential, and we’ll help you reach yours. All your information will be kept confidential according to EEO guidelines.