The Principal AI Engineer will serve as the technical authority and hands-on architect for high-code agent development, orchestration frameworks, and enterprise AI integration during the Gemini rollout at F5. This role is central to shaping how engineering teams across F5 adopt and implement secure, production-grade AI solutions using Gemini, Vertex AI, and internal infrastructure.
What You'll Do
- Design and implement enterprise-grade agent orchestration frameworks supporting tool use, memory, RAG, agentic workflows, and automation.
- Establish patterns for multi-agent collaboration, event-driven execution, and workflow chaining across enterprise systems.
- Define standards for agent lifecycle management, state persistence, and context engineering.
- Lead technical integration of Gemini models via Vertex AI, ensuring secure, scalable API consumption and proper model routing.
- Develop internal SDKs, abstractions, and reusable components to standardize Gemini usage across F5 teams.
- Optimize prompt engineering, token efficiency, grounding strategies, and structured output patterns.
- Build reference implementations and reusable frameworks for high-code agents in Java, Python, Go, or TypeScript.
- Establish secure integration patterns for agents interacting with Salesforce, Snowflake, SharePoint, ServiceNow, and internal APIs.
- Drive best practices for MCP (Model Context Protocol) server development and secure API mediation.
- Implement logging, tracing, telemetry, and evaluation pipelines for agent performance and reliability.
- Establish guardrails including input/output validation, hallucination mitigation, prompt injection defenses, and policy enforcement.
- Partner with Security to ensure secure data handling, RBAC enforcement, and compliance alignment.
- Support engineering teams adopting Gemini Code Assist, CLI workflows, and internal AI development platforms.
- Create technical documentation, internal libraries, and code samples for no-code, low-code, and high-code agent builders.
- Provide architectural review and guidance for AI-enabled applications across F5.
- Optimize inference latency, parallelization, and cost management strategies across agent workflows.
- Implement caching strategies, streaming responses, and batching techniques to improve throughput and reliability.
- Evaluate and benchmark agent/model performance across different workloads.
What We're Looking For
- 10+ years of experience in software engineering, with significant experience in distributed systems and backend architecture.
- Deep hands-on coding expertise in Python and at least one of: Go, Java, or TypeScript.
- Production experience with LLM-based systems, including prompt engineering, tool calling, RAG, embeddings, and agent frameworks.
- Experience with Vertex AI, Gemini APIs, OpenAI APIs, or similar enterprise AI platforms.
- Strong understanding of API design, microservices, Kubernetes, and cloud-native architectures.
- Experience building or integrating orchestration frameworks (e.g., LangChain, LlamaIndex, custom orchestration layers).
- Familiarity with vector databases, embedding pipelines, and retrieval strategies.
- Strong understanding of authentication, authorization, and enterprise security patterns.
- Proven ability to build reusable platforms, not point solutions.
Nice to Have
- Experience building multi-agent systems or autonomous workflow engines.
- Experience with model evaluation pipelines and AI quality metrics.
- Familiarity with structured output enforcement (JSON schemas, function calling).
- Experience working with enterprise data systems such as Snowflake, Salesforce, ServiceNow, SharePoint.
- Knowledge of cost modeling and inference optimization techniques.
- Experience contributing to internal developer platforms or SDK ecosystems.
- Background in AI safety, red-teaming, or model robustness evaluation.
Technical Stack
Gemini, Vertex AI, OpenAI APIs, LangChain, LlamaIndex, Python, Go, Java, TypeScript, Kubernetes, Microservices, APIs, Vector databases, RAG, LLM architectures, Agent frameworks, Salesforce, Snowflake, SharePoint, ServiceNow, Internal APIs, MCP (Model Context Protocol)
Team & Environment
You will work with engineering teams across F5 as they adopt AI platforms.
Benefits & Compensation
- Incentive compensation, bonus, and restricted stock units
- Health benefits (details available at https://www.f5.com/company/careers/benefits)
- Reasonable accommodations available upon request
Compensation: $186,400.00 - $279,600.00 base salary, plus restricted stock units, incentive compensation, and bonus. Broad salary ranges account for geographic location and market conditions.
It is the policy of F5 to provide equal employment opportunities to all employees and employment applicants without regard to unlawful considerations of race, religion, color, national origin, sex, sexual orientation, gender identity or expression, age, sensory, physical, or mental disability, marital status, veteran or military status, genetic information, or any other classification protected by applicable local, state, or federal laws. This policy applies to all aspects of employment, including, but not limited to, hiring, job assignment, compensation, promotion, benefits, training, discipline, and termination.