EXL is hiring an AI Engineer to design, develop, and deploy advanced AI-driven solutions. In this role, you will leverage large language models and modern Generative AI technologies to build scalable systems, deploy applications in production, and ensure the secure handling of sensitive data such as PII/PHI.
What You'll Do
- Design, develop, and deploy Generative AI and machine learning solutions using large language models and modern AI frameworks.
- Implement LLM fine-tuning, prompt engineering, retrieval-augmented generation (RAG), and model optimization techniques.
- Deploy and manage AI/ML pipelines and GenAI applications in production environments using cloud platforms such as Azure or GCP.
- Build and maintain agentic AI systems leveraging modern orchestration and agent frameworks.
- Work with large datasets to build data pipelines and ensure effective model training and evaluation.
- Develop AI solutions using Python and modern ML libraries.
- Manage deployments and infrastructure in Linux-based environments.
- Ensure secure processing and governance of sensitive data, including PII and PHI, adhering to enterprise security and compliance standards.
- Optimize AI models and infrastructure for performance, scalability, reliability, and cost efficiency.
- Collaborate with cross-functional teams to integrate AI models into enterprise applications and production systems.
- Document model architectures, experiments, deployment pipelines, and operational processes.
- Stay current with emerging developments in Generative AI, LLMs, and AI engineering practices.
What We're Looking For
- Bachelor’s degree in Engineering, Computer Science, AI/ML, Data Science, or a related technical field.
- 10+ years of overall professional experience in software engineering, machine learning, or AI engineering.
- At least 3 years of hands-on experience with Generative AI technologies and LLMs.
- Strong programming experience in Python.
- Proven experience fine-tuning, deploying, and optimizing large language models.
- Experience deploying Generative AI solutions into production environments.
- Hands-on experience with Linux-based systems.
- Experience with cloud platforms such as Microsoft Azure or Google Cloud Platform (GCP).
- Experience building ML pipelines and deploying AI models using modern MLOps practices.
- Experience handling and securing sensitive data such as PII and PHI in compliance with security and regulatory requirements.
Technical Stack
- Python
- Large Language Models (LLMs)
- Generative AI frameworks
- Azure
- Google Cloud Platform (GCP)
- Linux
Team & Environment
You will collaborate closely with data scientists, software engineers, and business stakeholders to deliver AI solutions.