Sezzle is looking for a Senior Data Engineer to design, build, and optimize the large-scale data pipelines that power our analytics, product insights, and operational workflows. In this role, you will architect and evolve our data ecosystem while partnering closely with teams across engineering, analytics, product, finance, and risk.
What You’ll Do
- Design, build, and optimize large-scale, high-performance data pipelines to support analytics, product insights, and operational workflows.
- Architect and evolve Sezzle’s data ecosystem, driving improvements in reliability, scalability, and efficiency.
- Lead development of ETL/ELT workflows using Redshift, dbt, AWS DMS, and related modern data tooling.
- Partner with cross-functional teams to gather requirements and deliver robust, high-quality datasets.
- Evaluate and integrate new technologies, guiding the evolution of Sezzle’s data stack and infrastructure.
- Optimize Redshift and warehouse performance, including query tuning, modeling improvements, and cost management.
What We’re Looking For
- 9+ years of experience in data engineering, with a strong track record of delivering production-grade systems.
- Deep expertise with Amazon Redshift or a similar data warehouse, including performance tuning, table design, and workload management.
- Strong hands-on experience with ETL/ELT frameworks, especially dbt and AWS DMS.
- Advanced proficiency in SQL, plus at least one programming language such as Python, Scala, or Java.
- Experience building and maintaining AWS-based data platforms using services such as S3, Lambda, Glue, and EMR.
- Track record of designing scalable, fault-tolerant data pipelines with modern orchestration tools (Airflow, Dagster, Prefect, etc.) that process 100 GB to 1 TB of new data per day.
- Strong understanding of data modeling, distributed systems, and warehouse/lake design patterns.
- Ability to work in a fast-paced, collaborative environment with excellent communication and documentation skills.
Nice to Have
- Prior experience in high-growth, data-intensive fintech or similar regulated environments.
- Familiarity with streaming technologies like Kafka, Kinesis, Flink, or Spark Streaming.
- Knowledge of lakehouse architectures and modern stacks such as Snowflake, Databricks, Iceberg, or Delta Lake.
- Exposure to machine learning pipelines, feature stores, or MLOps concepts.
- Experience leading data platform migrations, warehouse re-architectures, or large-scale performance overhauls.
- Enthusiasm for automation, CI/CD for data, and infrastructure as code (Terraform, CloudFormation).
Technical Stack
- Redshift, AWS DMS, dbt, SQL, Python, Scala, Java
- Amazon S3, AWS Lambda, AWS Glue, Amazon EMR
- Airflow, Dagster, Prefect
- Kafka, Kinesis, Flink, Spark Streaming
- Snowflake, Databricks, Iceberg, Delta Lake
- Terraform, CloudFormation
Team & Environment
You will work day to day with engineering, analytics, product, finance, and risk teams to deliver high-quality data solutions.
Benefits & Compensation
- Compensation: $5,000 to $9,500 USD per month (gross)
Work Mode
This is an in-country position based in Chile.
Sezzle is an equal opportunity employer.