Requirements
- More than seven years of experience building production data pipelines or backend systems
- Expertise in both Python (including PySpark and data engineering) and Java, with the ability to work with Spring/JVM codebases
- Experience with cloud lakehouse or data warehouse technologies such as Iceberg, Delta Lake, Redshift, BigQuery, or Snowflake
- Hands-on experience with AWS data services, including Glue, Athena, S3, and Lambda
- Experience with workflow orchestration tools like Airflow, Kestra, or Step Functions
- Ability to work across system boundaries, understanding upstream event schemas, data models, and downstream consumer needs
Nice to Have
- Practical experience with Apache Iceberg, including MERGE operations, schema evolution, and partition evolution
- Experience with Trino or Presto for federated or interactive SQL analytics at scale
- Experience with dbt for data transformation, modeling, and testing
- Familiarity with data quality and observability tools such as Great Expectations or Monte Carlo
- Background in event-driven architectures, including Kinesis, Kafka, or SQS
- Interest in AI-assisted development and LLM-based engineering workflows
- Familiarity with the challenges of multi-tenant B2B SaaS data in the hospitality domain
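
As context for the Apache Iceberg item above, a minimal sketch of the upsert semantics behind a `MERGE` operation, written in plain Python rather than Spark SQL; the "table" contents and key names here are invented for illustration only:

```python
# MERGE (upsert) semantics in miniature: rows that match on a key are
# updated, rows with no match are inserted. Illustrative only -- a real
# Iceberg MERGE runs through an engine such as Spark or Trino.

def merge(target: dict, updates: dict) -> dict:
    """Update matched keys, insert unmatched ones; leave the rest alone."""
    merged = dict(target)
    merged.update(updates)  # matched keys are overwritten, new keys inserted
    return merged

# Hypothetical target table keyed by booking_id, plus a batch of changes.
bookings = {101: {"status": "pending"}, 102: {"status": "confirmed"}}
changes = {101: {"status": "confirmed"}, 103: {"status": "pending"}}

result = merge(bookings, changes)
# 101 is updated, 102 is untouched, 103 is inserted
```

The same shape in Iceberg would be a `MERGE INTO ... WHEN MATCHED THEN UPDATE ... WHEN NOT MATCHED THEN INSERT` statement against the table.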
Compensation
Not specified
Work Arrangement
Not specified
Team
Not specified
Other
We are an equal opportunity employer. We value diversity and are dedicated to fostering an inclusive environment for all employees. All qualified candidates will be considered for employment regardless of race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other characteristic protected by law.