Hyderabad, India · Hybrid · Full-time

Synchrony is hiring an AVP, Applied Model Ops Developer (L11)

About the Role

This position is central to the design and implementation of data infrastructure that enables continuous monitoring of machine learning models in production. As the AVP, Applied Model Ops Developer, you will develop robust pipelines and tooling that ensure models remain accurate, reliable, and compliant with governance standards. The role sits at the intersection of data science and engineering, translating analytical models into scalable, production-grade systems.

Key Responsibilities

  • Collaborate with model developers, validators, and risk teams to understand data requirements for model development, monitoring, and compliance.
  • Work with business units including credit, fraud, marketing, and operations to identify and prioritize data needs for model monitoring use cases.
  • Design and implement scalable data architectures that support both real-time and batch processing for monitoring workflows.
  • Develop and maintain automated pipelines for data ingestion, transformation, feature engineering, and model training.
  • Convert raw data into structured features suitable for machine learning, such as deriving time-based patterns from timestamps.
  • Operationalize data science prototypes by building high-performance, maintainable software systems capable of handling large-scale data streams.
  • Build CI/CD pipelines that integrate code deployment with data validation, model training, and artifact tracking.
  • Ensure data pipelines are integrated with MLOps platforms and observability tools to enable end-to-end model performance tracking.
  • Support regulatory compliance by maintaining data lineage, audit trails, and documentation in alignment with model risk governance standards.
  • Partner with cloud, data lake, and governance teams to prioritize technical deliverables and ensure smooth execution.
  • Optimize storage and compute resources for high-frequency monitoring scenarios involving large model ensembles.
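As an illustration of the feature-derivation work described above (deriving time-based patterns from timestamps), the following is a minimal sketch in plain Python. The function name and feature set are hypothetical examples, not Synchrony's actual implementation; at scale this logic would typically run in PySpark.

```python
from datetime import datetime

def derive_time_features(ts_str: str) -> dict:
    """Derive simple time-based features from an ISO-8601 timestamp string."""
    ts = datetime.fromisoformat(ts_str)
    return {
        "hour_of_day": ts.hour,           # 0-23
        "day_of_week": ts.weekday(),      # 0 = Monday ... 6 = Sunday
        "is_weekend": ts.weekday() >= 5,  # Saturday or Sunday
        "month": ts.month,
    }

# Example: a Saturday-evening transaction timestamp
feats = derive_time_features("2024-03-16T22:10:00")
```

Features like these turn a raw timestamp into model-ready inputs that can also be tracked for drift in monitoring pipelines.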

Required Qualifications

  • Bachelor’s degree in a quantitative or technical field such as Computer Science, Statistics, Engineering, or Data Science, with at least 6 years of relevant experience; or 8 years of experience in lieu of a degree.
  • Minimum of 6 years in model operations, data engineering, or analytics infrastructure roles.
  • Proficient in data engineering technologies including Apache Spark, Airflow, Kafka, dbt, and PySpark.
  • Strong programming skills in Python, SQL, and SAS for developing monitoring workflows and validation logic.
  • Experience with cloud platforms such as AWS, Azure, or GCP, and data warehouse solutions like Snowflake, Redshift, or BigQuery.
  • Familiarity with MLOps tools including MLflow, Evidently AI, WhyLabs, and Prometheus for model tracking and monitoring.
  • Understanding of model risk management frameworks and the role of data in compliant model oversight.
  • Proven ability to deliver production-quality code in agile environments alongside DevOps and platform teams.
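One concrete example of the validation logic referenced above is a null-rate check on an incoming scoring batch. This is a hedged sketch under assumed conventions (records as dicts, a configurable null-rate limit), not a description of Synchrony's actual tooling.

```python
def validate_batch(rows, required_fields, max_null_rate=0.05):
    """Return the fields whose null rate exceeds the allowed limit.

    rows: list of dicts, one per record. An empty result means the batch passes.
    """
    failures = {}
    n = len(rows)
    for field in required_fields:
        nulls = sum(1 for r in rows if r.get(field) is None)
        rate = nulls / n if n else 1.0  # treat an empty batch as fully null
        if rate > max_null_rate:
            failures[field] = round(rate, 3)
    return failures

# Hypothetical batch: each field is null in 1 of 3 records (rate ~0.333)
batch = [
    {"fico": 720, "util": 0.3},
    {"fico": None, "util": 0.8},
    {"fico": 680, "util": None},
]
issues = validate_batch(batch, ["fico", "util"], max_null_rate=0.25)
```

A check like this would typically gate a pipeline stage: a non-empty result blocks scoring and raises an alert rather than silently feeding degraded data to the model.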

Preferred Qualifications

  • Advanced degree or professional certification in a relevant technical or analytical discipline.
  • Demonstrated ability to solve complex data problems and automate routine monitoring tasks.
  • Attention to detail with a strong commitment to data quality, integrity, and regulatory compliance.
  • Experience designing alerting systems and diagnostic logs to detect model drift or anomalies.
  • Strong communication skills, with the ability to convey technical concepts to non-technical stakeholders.
  • Exposure to model explainability frameworks and techniques that enhance transparency in AI systems.
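The drift-alerting experience mentioned above can be sketched with the Population Stability Index (PSI), a standard measure for comparing a baseline score distribution against current production scores. This is an illustrative stdlib-only implementation using the widely cited rule of thumb that PSI above 0.2 signals significant drift; thresholds and binning in a real system are assumptions to be tuned.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a current sample."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def proportions(xs):
        counts = [0] * bins
        for x in xs:
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        # clamp to a small epsilon so empty buckets don't produce log(0)
        return [max(c / len(xs), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

def drift_alert(expected, actual, threshold=0.2):
    """Rule of thumb: PSI > 0.2 is commonly treated as significant drift."""
    return psi(expected, actual) > threshold

baseline = [i / 100 for i in range(100)]        # scores from the training window
shifted = [0.8 + i / 500 for i in range(100)]   # current scores, shifted upward
```

In a monitoring pipeline, a check like `drift_alert` would run on each scoring batch and emit a metric or page the on-call owner when the threshold is breached.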

Technical Environment

The role leverages a modern data stack including Apache Spark, Airflow, Kafka, dbt, PySpark, SAS, Python, and SQL. Infrastructure spans AWS, Azure, and GCP, with data warehousing in Snowflake, Redshift, and BigQuery. MLOps tooling includes MLflow, Evidently AI, WhyLabs, and Prometheus for model observability and lifecycle management.

Work Model

This is a hybrid role based in India, with flexibility to work from home or from regional hubs in Hyderabad, Bengaluru, Pune, Kolkata, or Delhi/NCR. Employees are expected to be available between 6:00 AM and 11:30 AM US Eastern Time for cross-regional collaboration, with the remainder of the schedule flexible. Local working hours shift twice yearly to stay aligned with US Eastern Time across daylight saving changes.

Company Culture

The organization is recognized as a top workplace in India for innovation, diversity, and employee satisfaction. It is consistently ranked among the best companies to work for, with a strong focus on inclusive leadership and career development. Employees benefit from a supportive environment, access to modern tools, and initiatives aimed at advancing diverse talent into leadership roles.

Required Skills
Apache Spark, Airflow, Kafka, dbt, PySpark, SAS, Python, SQL, AWS, Azure, data engineering, cloud infrastructure, model operations, validation pipelines, credit risk
About Synchrony
Synchrony is a financial services company focused on building a world-class Marketing Organization for its retail and payment partners.
Job Details

Department: Decision Management, Model Operations & Analytics
Category: Data
Posted: 2 days ago