Moody's is looking for a Data Engineer to join the Corps and Gov Data and AI Platform Data Engineering team. In this role, you will focus on building and maintaining robust data pipelines on the Databricks Lakehouse to support compliant downstream analytics and customer-facing risk solutions.
What You'll Do
- Develop and maintain data pipelines on the Databricks Lakehouse, contributing to governed data products that support compliant downstream analytics.
- Build and maintain pipelines that deliver usable data to Corps and Gov business applications, embedded analytics, and ad-hoc stakeholder requests.
- Contribute to the ingestion, transformation, and delivery patterns that enable downstream use cases and support customer-facing risk solutions.
- Build and support Delta Live Tables (DLT) pipelines under guidance from senior engineers to improve data freshness, quality, and resilience.
- Write and test transformations using dbt, SQL, and Spark/Python; participate in code reviews and adhere to established quality standards.
- Help orchestrate and monitor workflows in AWS MWAA (Airflow) and Databricks Workflows, including alerting setup, retry logic, and backfill procedures.
- Support infrastructure and reusable frameworks that allow stakeholders to create analytics and ML workloads on top of platform data.
- Collaborate with cross-functional stakeholders—including KYC/AML experts, product managers, and application developers—to understand data needs and deliver solutions.
- Apply data security best practices (access controls, encryption, secrets management) and observability patterns (monitoring, data quality checks, alerting) in day-to-day work.
- Participate in incident triage and pipeline troubleshooting, learning to diagnose root causes and implement durable fixes.
- Proactively learn and adopt team standards for code quality, testing, documentation, and operational readiness.
What We're Looking For
- 1–3 years of experience in data engineering, backend engineering, or a related technical role.
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- Foundational experience working with the Databricks Lakehouse (Delta Lake) or a comparable platform (Snowflake, BigQuery).
- Familiarity with orchestration tools such as AWS MWAA (Managed Workflows for Apache Airflow) or Databricks Workflows.
- Solid SQL skills and working knowledge of Python.
- Understanding of DataOps principles: CI/CD for data pipelines, automated testing, and environment promotion practices.
- Awareness of data security fundamentals—encryption at rest and in transit, role-based access control, secrets management—with a willingness to deepen that knowledge.
- Understanding of data observability and monitoring concepts (pipeline health, data quality checks, alerting, and incident triage).
- Basic understanding of artificial intelligence concepts, with curiosity and enthusiasm for learning how AI tools can improve processes and drive efficiency.
- Interest in exploring AI systems and a willingness to develop awareness of responsible AI practices, including risk management and ethical use.
Nice to Have
- Exposure to Delta Live Tables (DLT).
- Exposure to Spark and dbt.
- Exposure to Machine Learning concepts (e.g., feature generation, training vs. serving); comfort collaborating with data scientists and ML engineers.
- Master’s degree in a related discipline.
Technical Stack
- Databricks Lakehouse (Delta Lake)
- Snowflake
- BigQuery
- Delta Live Tables (DLT)
- AWS MWAA (Apache Airflow)
- Databricks Workflows
- SQL
- Python
- Spark
- dbt
Team & Environment
You will be part of the Corps and Gov (CnG) Data and AI Platform Data Engineering team, which operates within the Corps and Gov Operating Unit.
Moody’s is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, sexual orientation, gender expression, gender identity or any other characteristic protected by law.