Design and implement machine learning solutions that support critical defense and security missions. This role centers on full-cycle development, from prototyping models to deploying scalable AI systems in production environments.
What You'll Do
- Create and refine machine learning models using PyTorch, TensorFlow, scikit-learn, Hugging Face, and LangChain.
- Develop and manage end-to-end MLOps workflows with tools like MLflow, Kubeflow, and DVC to streamline training and inference.
- Build and optimize vector databases using Milvus, Pinecone, Chroma, or FAISS, and implement retrieval-augmented generation (RAG) architectures, including hybrid graph-based retrieval.
- Train and fine-tune large language models using parameter-efficient methods such as LoRA, QLoRA, and PEFT.
- Develop agent-based AI applications using LangGraph, AutoGen, CrewAI, or DSPy to support autonomous reasoning tasks.
- Write robust, production-grade Python code for data pipelines, feature extraction, embedding generation, and model serving.
- Integrate AI components into existing systems via APIs, event-driven logic, or embedded copilot interfaces.
- Work closely with data engineers, software developers, and mission stakeholders to align AI capabilities with operational requirements.
- Participate in code reviews, maintain shared repositories, and document experiments to ensure reproducibility and auditability.
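By way of illustration, the retrieval step behind the RAG work above can be sketched in a few lines of plain Python. The embeddings and document IDs here are toy values; a production system would use a real embedding model and a vector store such as those listed above.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy corpus: in practice these embeddings come from a model
# and are indexed in a vector database, not a dict.
corpus = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.1, 0.8, 0.1],
    "doc_c": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=2):
    """Return the IDs of the k documents most similar to the query."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, corpus[d]),
                    reverse=True)
    return ranked[:k]

# Retrieved context is then spliced into the model prompt (grounding).
top = retrieve([1.0, 0.0, 0.1])
prompt = f"Answer using context: {', '.join(top)}"
```

The same shape holds whatever the backing store is; only the similarity search moves from a Python loop into the database.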
What We Require
- U.S. citizenship and willingness to obtain and maintain a security clearance.
- 6–10+ years of hands-on experience deploying AI/ML systems in production.
- At least 3 years working within Department of Defense or equivalent environments involving AI assurance, security, and deployment.
- Strong proficiency in Python and experience building AI-driven applications.
- Proven track record with ML frameworks such as PyTorch, TensorFlow, or Hugging Face.
- Experience building MLOps pipelines using MLflow, Kubeflow, DVC, or similar tools.
- Familiarity with vector databases and retrieval architectures such as RAG, graph-based search, or hybrid systems.
- Professional experience fine-tuning and evaluating LLMs or task-specific models using parameter-efficient techniques.
- A track record of integrating AI features into real-world software or mission-critical systems.
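To give a flavor of the parameter-efficient techniques named above: LoRA freezes the base weight matrix W and trains only a low-rank update B·A, so far fewer parameters change during fine-tuning. The stdlib-only sketch below uses toy dimensions and no real model; it only illustrates the parameter arithmetic.

```python
import random

random.seed(0)

d, k, r = 8, 8, 2  # toy layer dimensions and LoRA rank

# Frozen base weight W (d x k); trainable adapters B (d x r) and A (r x k).
# B starts at zero so the effective weight equals W before any training.
W = [[random.gauss(0, 1) for _ in range(k)] for _ in range(d)]
B = [[0.0] * r for _ in range(d)]
A = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(r)]

def matmul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def effective_weight(W, B, A, alpha=4):
    """W' = W + (alpha / r) * B @ A, the weight actually used at inference."""
    scale = alpha / r
    BA = matmul(B, A)
    return [[W[i][j] + scale * BA[i][j] for j in range(k)] for i in range(d)]

full_params = d * k          # parameters updated by full fine-tuning: 64
lora_params = d * r + r * k  # parameters trained under LoRA: 32
```

In real models d and k are in the thousands while r stays small (often 4 to 64), so the trained-parameter count drops by orders of magnitude rather than the factor of two seen in this toy.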
Nice to Have
- Experience with agentic frameworks like LangGraph, AutoGen, CrewAI, or DSPy.
- Knowledge of prompt engineering, retrieval quality evaluation, and grounding strategies.
- Background in GPU-accelerated or edge-device inference setups.
- Degree in Computer Science, Engineering, Data Science, or related field.
- Active Secret clearance; candidates without one must meet the clearance-eligibility requirement above.