What You'll Do
Take full responsibility for the design, operation, and advancement of our core data infrastructure, including relational databases, data warehouses, and change data capture systems. You'll ensure high availability, performance, and scalability across MySQL, Postgres, and Redshift, setting clear KPIs and SLAs to maintain system integrity.
Drive performance improvements through query optimization, indexing strategies, and schema refinement. Develop internal tooling to automate upgrades, streamline management, and support seamless migrations as data volume and complexity grow.
Collaborate closely with engineering teams to ensure database designs align with product requirements and long-term scalability. Guide the adoption of modern data practices, evaluating and integrating new technologies that support high-throughput writes, event processing, and complex ETL workflows.
Support data engineers and analysts by improving warehouse efficiency—tuning queries, refining data models, and managing compute costs. Help shape the future of our data platform by exploring next-generation solutions for distributed storage, streaming, and analytics.
Requirements
- 12+ years of hands-on experience in database administration, site reliability engineering, or data engineering, with proven success scaling production systems under heavy load
- Deep technical knowledge of MySQL, Postgres, and Amazon Redshift or comparable data warehouse technologies, including performance tuning and workload management
- Proficiency in SQL and data modeling, with experience designing efficient table structures and access patterns
- Extensive background with ETL/ELT tooling such as dbt and AWS DMS, and strong scripting skills in Python or similar languages
- Understanding of distributed systems, replication protocols, and data consistency models
- Excellent communication skills, with the ability to document systems clearly and work effectively across technical teams
Preferred Qualifications
- Experience in fintech or other regulated, high-data-volume environments
- Familiarity with modern lakehouse platforms and open table formats such as Snowflake, Databricks, Apache Iceberg, or Delta Lake
- Experience building scalable data pipelines using orchestration tools like Airflow, Dagster, or Prefect, handling 100 GB to 1 TB of daily data ingestion
- Knowledge of streaming systems including Kafka, Kinesis, Flink, or Spark Streaming
- Interest in leveraging AI tools to enhance engineering productivity and system observability
Technical Environment
Our stack includes MySQL, Postgres, Redshift, Amazon Aurora (RDS), AWS DMS, dbt, Elasticsearch, Golang, Python, TypeScript, Kubernetes, Git, GitLab, Airflow, Dagster, Prefect, Kafka, Kinesis, Flink, Spark Streaming, Snowflake, Databricks, Iceberg, and Delta Lake.
Compensation
Monthly gross salary ranges from $6,000 to $12,500 USD, adjusted for experience level and geographic location.
