Responsibilities
- Build and deploy ML/AI services. Design, develop, and ship ML models and AI systems that Product Engineering teams rely on. You write the model code, the API layer, the monitoring, and the tests. Not notebooks; production services.
- Design with LLMs and APIs. Use LLM APIs (OpenAI, Anthropic, etc.) as building blocks in production systems. You know when to call an LLM, when to fine-tune, when to use a classical model, and when to write a rule. You think about cost, latency, and quality together.
- Ship production software. Write clean, well-structured code with solid OOP, proper abstractions, error handling, and tests. Your code gets reviewed by SWEs and passes. CI/CD is how you work, not something you bolt on at the end.
- Partner with product and engineering. Translate business problems into ML solutions. Define API contracts with product engineers. Explain your approach clearly to non-ML partners and leave the room with alignment, not confusion.
- Evaluate and iterate fast. Build evaluation frameworks, run experiments, and make data-driven decisions about model and system performance. Ship and iterate; don’t wait for perfect.
- Ship AI-powered workflows. Put AI to work on your own processes: automate pipelines, build agentic workflows, and contribute reusable skills and context to Checkr’s agentic platform. The expectation is that our teams operate AI-first.
Requirements
- A Bachelor’s or Master’s degree in Computer Science, Mathematics, or a related technical field, or equivalent experience
- 4+ years building software professionally, with at least 2 of those building ML systems that run in production
- Strong Python fluency; you write clean, testable, well-structured code with solid OOP instincts. Not scripts; software
- Hands-on experience using LLM APIs in production systems: prompt engineering, structured outputs, function calling, cost management, and evaluation
- You’ve built and maintained APIs, worked with CI/CD pipelines, and shipped code that other engineers depend on
- Comfortable with distributed systems concepts: queues, async processing, caching, horizontal scaling
- Experience with NLP tasks in production: classification, extraction, entity resolution, summarization
- Comfort with and enthusiasm for AI-assisted workflows; experience using LLMs, code-generation tools, or agentic systems in production or operational contexts is a strong signal
- You can evaluate tradeoffs: fine-tune vs. prompt, hosted vs. self-deployed, classical ML vs. LLM, rule vs. model
- Strong communication skills; you explain technical decisions clearly to engineers and non-engineers alike, without hiding behind jargon
- You use AI tools (Copilot, Claude, etc.) to move faster, but you understand every line they produce. You can spot AI slop and you don’t ship it
- An A-player mindset with a strong bias for action: you raise the bar, move with urgency, stay resilient through ambiguity, and take ownership to deliver meaningful outcomes
Nice to Have
- Experience with MLOps platforms (MLflow, SageMaker, Vertex, or similar)
- Background in document processing, OCR, or information extraction
- Experience with PySpark or large-scale data processing
- Ruby experience (Checkr’s platform runs on Rails)
- Familiarity with compliance-sensitive domains (fintech, legal tech, HR tech)
- Working knowledge of dbt, Snowflake, or modern ELT/data transformation tools
Benefits
- A fast-paced, low-bureaucracy environment where shipping matters more than process
- Direct exposure to executive and C-level decision-making
- A seat on the Data & ML team shaping Checkr’s AI strategy
- Learning and development allowance
- Competitive cash and equity compensation, with opportunities for advancement
- 100% medical, dental, and vision coverage
- Up to $25K reimbursement for fertility, adoption, and parental planning services
- Flexible PTO policy
- Monthly wellness stipend
Additional Information
- You will partner daily with Product Engineering, Product, and cross-functional teams.