Join Adobe’s mission to transform the future of creativity and data-driven experiences. We are seeking a senior computer scientist who brings a unique blend of software engineering, data science, and DevOps expertise to build scalable, intelligent systems that power Adobe’s next-generation platforms.
What You'll Do
- Design and develop robust, scalable applications and services across client-side (C++, iOS/Swift, Android/Java) and server-side (Java) environments.
- Own projects through the full software development lifecycle: requirements gathering, architecture, coding, testing, deployment, and maintenance.
- Apply strong computer science principles: algorithms, data structures, design patterns, and system architecture.
- Tune code to meet high-performance and high-load requirements.
- Respond to urgent production issues, quickly diagnosing problems and deploying code fixes and updates.
- Build and optimize PySpark jobs for large-scale data processing.
- Work with modern data ecosystems: Apache Hadoop, AWS EMR, Azure Databricks, or Azure Data Explorer (ADX).
- Implement machine learning workflows and integrate predictive analytics into production systems.
- Collaborate with the team to visualize, design, and experiment with data-driven agentic workflows.
- Manage configurations and deployments for elastic-compute Spark environments.
- Participate in system upgrades (OS, libraries, Spark versions) and ensure compatibility with data pipelines.
- Analyze cost dashboards and optimize resource utilization for Spark jobs.
- Provide cost projections for new Spark jobs in development.
- Partner with teams (engineering, product, UX) to deliver impactful solutions.
- Communicate proactively: share successes, raise challenges early, and ask for help when needed.
- Contribute to a culture of innovation, inclusivity, and continuous improvement.
What We're Looking For
- Bachelor’s or higher in Computer Science, Engineering, or related field.
- 5+ years in software development with strong coding skills in Java, C++, or mobile platforms.
- Hands-on experience with PySpark and distributed data processing frameworks.
- Familiarity with cloud platforms (AWS, Azure) and data services.
- Solid understanding of computer science fundamentals: algorithms, complexity, parallelism, and system design.
- Knowledge of DevOps practices for data environments.
- Strong problem-solving and debugging skills.
- Strong written and verbal communication and interpersonal skills.
Nice to Have
- Experience with MLOps, CI/CD pipelines, and containerization (Docker/Kubernetes).
- Exposure to cost optimization and performance tuning for Spark workloads.
Technical Stack
- Java, C++, iOS/Swift, Android/Java
- PySpark, Apache Hadoop, AWS EMR, Azure Databricks, Azure Data Explorer (ADX)
- AWS, Azure, Docker, Kubernetes
Benefits & Compensation
- U.S. pay range: $159,200 - $301,600 annually.
- California: $208,300 - $301,600.
- Washington: $190,200 - $275,400.
- Equity: certain roles may be eligible for long-term incentives in the form of a new-hire equity award.
Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other characteristic protected by applicable law.



