Clara's Innovation Team is seeking an AI Engineer to rapidly build and deploy AI-powered features that solve real customer and internal problems. This role emphasizes speed, pragmatism, and immediate impact, with end-to-end ownership of projects from scoping to deployment.
What You'll Do
- Design, build, and deploy AI-powered features and applications from 0 to 1 in production
- Integrate LLMs and AI models into existing products and workflows, handling the full stack from API integration to user-facing features
- Build intelligent automation tools that improve internal operations, customer experience, or business processes
- Create MVPs and prototypes quickly to validate ideas, then iterate based on real usage and feedback
- Own the entire lifecycle: scoping, technical design, implementation, deployment, and monitoring
- Build robust APIs and backend services that power AI features with proper authentication, rate limiting, and error handling
- Design and implement data pipelines that support AI applications: document processing, embedding generation, vector search
- Deploy and maintain containerized applications on AWS infrastructure (ECS, Lambda, S3, RDS)
- Implement monitoring, logging, and observability for AI features in production
- Ensure AI applications meet security, privacy, and compliance requirements for financial services
- Work closely with product teams, data scientists, and business stakeholders to identify high-impact AI opportunities
- Translate business problems into technical solutions, making pragmatic decisions about build vs. buy vs. API
- Share knowledge and evangelize successful patterns across engineering teams
- Balance speed with sustainability—ship fast without creating technical debt that blocks future iteration
- Contribute to the broader engineering organization by bringing innovation team learnings back to core teams
- Stay current with rapidly evolving AI tools, frameworks, and best practices
- Experiment with new AI capabilities and evaluate their potential for Clara's use cases
- Share findings, demos, and insights with the broader team to inspire innovation
What We're Looking For
- Strong proficiency in Python
- Working knowledge of Node.js or Java
- Hands-on experience integrating LLMs into production applications (not just prompt engineering)—you've built real features with OpenAI, Anthropic, or similar APIs
- Database expertise: PostgreSQL, vector databases (Pinecone, Weaviate, Chroma, pgvector), and data modeling
- Cloud infrastructure: AWS services (ECS, S3, Lambda, RDS, API Gateway, SQS, etc.)
- API development: RESTful services, authentication, rate limiting, error handling
- Built and deployed at least 2 AI/ML features or products to production that real users interact with
- Experience with containerization (Docker) and orchestration (ECS, EKS, or similar)
- Comfortable with git workflows and CI/CD practices (GitHub Actions, GitLab CI, automated deployments)
- Experience working across the full stack: backend APIs, data processing, and basic frontend integration
- Problem-solver first, technology evangelist second—you choose the right tool for the job, not the newest one
- Comfortable with ambiguity and rapid iteration—you thrive when requirements are fuzzy and priorities shift
- Self-directed with ability to scope and execute projects independently—you can take a problem and run with it
- Bias toward action—you ship working solutions and iterate based on feedback rather than pursuing perfection
Nice to Have
- Experience with LangChain, LlamaIndex, or similar LLM frameworks for building RAG applications
- Familiarity with embedding models and semantic search implementations
- Experience with streaming APIs and real-time AI applications (WebSockets, Server-Sent Events)
- Frontend experience (React, Next.js) to build full-stack AI features
- Knowledge of ML model deployment (model serving, inference optimization, A/B testing)
- Experience in fintech or other highly regulated industries, with an understanding of compliance and security requirements
- Background in data engineering or analytics, with a solid grasp of data pipelines and infrastructure
- Experience with prompt engineering best practices and LLM evaluation frameworks
- Contributions to open-source AI/ML projects or technical writing/blogging
Technical Stack
- Languages: Python, Node.js, Java
- LLM providers & frameworks: OpenAI, Anthropic, LangChain, LlamaIndex
- Databases: PostgreSQL, Pinecone, Weaviate, Chroma, pgvector
- Cloud (AWS): ECS, EKS, S3, Lambda, RDS, API Gateway, SQS
- APIs & real-time: RESTful APIs, WebSockets, Server-Sent Events
- Tooling & CI/CD: Docker, Git, GitHub Actions, GitLab CI
- Frontend: React, Next.js
Team & Environment
- Distributed across the Americas
- Innovation Team operating in ultra-fast mode, working across product, engineering, and AI
- High-ownership environment: we move fast, learn fast, and raise the bar — together
- Smart, ambitious teammates — low ego, high impact
- #Clarity. We say things clearly, directly, and proactively.
- #Simplicity. We reduce noise to focus on what really matters.
- #Ownership. We take responsibility and never wait to be told.
- #Pride. We build products and experiences we’re proud of.
- #Always Be Changing (ABC). We grow through feedback, risk-taking, and action.
- #Inclusivity. Every voice counts. Everyone contributes to our mission.
Benefits & Compensation
- Competitive salary
- Stock options (ESOP) from day one
- Annual learning budget
- Internal accelerated development paths
- Flexible vacation
- Hybrid work model focused on results
- Multicultural team with daily exposure to Portuguese, Spanish, and English (corporate language)
Work Mode
Claridians in hybrid mode split their time between working from the office, talking to or visiting customers, and working from home. We don't enforce a minimum number of in-office days for most roles, but you're expected to spend time at the office organically, and to be there most days during your ramp-up or when required by your leader. Location: São Paulo.
