Red Hat is looking for a Principal Software Engineer to drive the future of Agentic AI & Orchestration. In this role, you will lead the design and implementation of features and solutions that combine open source, hybrid cloud, and AI technologies, collaborating closely with engineers, product management, and partners.
What You'll Do
- Lead the implementation of scalable, distributed computing solutions for Agentic AI and ensure seamless integration with the Red Hat product portfolio.
- Define and implement Multi-Agent System (MAS) architectures, including orchestration layers, state machines, tool registries, and resilient routing policies.
- Implement the Model Context Protocol (MCP) for standardized tool/data access and Agent-to-Agent (A2A) or ACP protocols for cross-platform agent communication.
- Contribute to and influence upstream AI/ML communities to steer the evolution of open standards for agentic workflows.
- Partner with AI/ML vendors and internal teams to refine AI strategies addressing specific use cases.
- Develop technical blueprints and multi-product demos showcasing the Red Hat AI stack.
- Proactively explore emerging AI technologies to identify opportunities for incorporating new capabilities into software development workflows.
- Drive AI integration within the software development lifecycle (SDLC) and share successful experiment use cases with stakeholders.
What We're Looking For
- 7+ years of relevant software engineering experience.
- Bachelor’s degree in Computer Science or a related technical field, or equivalent practical experience.
- Proven experience building agents and tooling frameworks; deep expertise in LangGraph, PydanticAI, or similar state-management libraries.
- Experience implementing sophisticated Retrieval-Augmented Generation (RAG) pipelines, long-term memory systems, semantic caches, and vector databases.
- Expert-level proficiency in Python or Go, with a focus on building resilient, asynchronous distributed systems.
- Solid experience with containers and orchestration via OpenShift or Kubernetes.
- Familiarity with model parallelization, quantization, and memory optimization (e.g., vLLM, DeepSpeed, OpenVINO).
- Experience with GitOps, automation pipelines, and managing the AI/ML lifecycle in production environments.
- Direct experience with Agent Evaluation (Eval) frameworks and implementing Guardrails & Governance.
Nice to Have
- Cloud Computing experience with AWS, GCP, Azure, or IBM Cloud.
- A history of open-source contributions or active participation in the AI/ML community (GitHub, Research, or Upstream).
Technical Stack
- Languages: Python, Go
- Platforms: OpenShift, Kubernetes
- AI/ML Tools: vLLM, DeepSpeed, OpenVINO, LangGraph, PydanticAI
Team & Environment
You will collaborate with a diverse, highly motivated group of engineers and product managers, as well as other engineering teams, Red Hat partners, and lighthouse customers.
Benefits & Compensation
- Target compensation range: $151,510.00 - $249,950.00.
- Comprehensive medical, dental, and vision coverage.
- Flexible Spending Accounts (healthcare and dependent care).
- Health Savings Account (with the high-deductible medical plan).
- Retirement 401(k) with employer match.
- Paid time off and holidays.
- Paid parental leave plans for all new parents.
- Leave benefits including disability, paid family medical leave, and paid military leave.
- Additional benefits including employee stock purchase plan, family planning reimbursement, tuition reimbursement, transportation expense account, employee assistance program, and more.
Work Mode
This is a hybrid position available to candidates based in the US (Remote-US).
Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law.