Shape the foundation of a data-driven platform
What You'll Do
- Design and manage scalable data pipelines that handle large-scale data flows across distributed systems
- Work alongside data scientists and engineers to define effective data models and ensure alignment with business needs
- Develop and refine ETL processes to improve data accuracy, accessibility, and performance
- Optimize data storage strategies to balance speed, reliability, and cost across cloud environments
- Build internal tools and reusable frameworks that streamline data workflows for engineering teams
- Collaborate with cross-functional partners to ensure data integrity and consistency
- Explore emerging technologies in data engineering and distributed computing to keep systems modern and efficient
What We're Looking For
- Bachelor's degree in Computer Science or a related technical field
- At least five years of hands-on experience in software or data engineering roles
- Proficiency in programming languages such as Python or Java
- Proven experience with big data frameworks like Apache Spark or Hadoop
- Familiarity with cloud platforms including AWS, GCP, or Azure
- Working knowledge of SQL, NoSQL databases, and data warehouse architectures
- Strong analytical abilities with a focus on system reliability and performance tuning
- Clear communication skills and a collaborative mindset
Technology Environment
You'll work with a modern data stack including Python, Java, Apache Spark, Hadoop, AWS, GCP, Azure, SQL, NoSQL databases, and cloud-based data warehouses.
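To give a flavor of the ETL work described above, here is a minimal, standard-library-only sketch of an extract-transform-load step: parse raw CSV, clean the rows, and load them into a SQL table. All names (the `signups` table, the sample fields) are illustrative, not part of our actual stack.

```python
import csv
import io
import sqlite3

# Illustrative raw input; in practice this would come from an upstream source.
RAW_CSV = """user_id,country,signup_date
1,us,2024-01-05
2,DE,2024-01-06
3,,2024-01-07
"""

def extract(text):
    """Extract: parse CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: normalize country codes and drop rows missing one."""
    return [
        {**row, "country": row["country"].upper()}
        for row in rows
        if row["country"]
    ]

def load(rows, conn):
    """Load: insert cleaned rows into a warehouse-style table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS signups "
        "(user_id INTEGER, country TEXT, signup_date TEXT)"
    )
    conn.executemany(
        "INSERT INTO signups VALUES (:user_id, :country, :signup_date)", rows
    )

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
count = conn.execute("SELECT COUNT(*) FROM signups").fetchone()[0]
print(count)  # 2 rows survive the missing-country filter
```

In production, the same extract/transform/load shape scales up via Spark jobs and cloud data warehouses rather than in-memory SQLite.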
