Locations: Miami; Agra; Amritsar; Ankara; Arequipa; Arlington; Asunción; Athens; Atlanta; Aurangabad; Austin; Bandar Lampung; Bandung; Bangalore; Barranquilla; Batam; Bekasi; Belém; Belgrade; Belo Horizonte; Berlin; Bhopal; Bogor; Bogotá; Boise; Boston; Brasília; Bratislava; Brussels; Bucharest; Budapest; Buenos Aires; Calgary; Cali; Campinas; Cartagena; Casablanca; Chattogram; Chennai; Chesapeake; Chicago; Ciudad del Este; Cochabamba; Coimbatore; Columbus; Córdoba; Cuenca; Curitiba; Dallas; Delhi; Denver; Depok; Dhaka; Dhanbad; Dublin; Edmonton; El Alto; Fairfax; Faridabad; Fez; Fort Worth; Fortaleza; Gazipur; Ghaziabad; Goiânia; Guarulhos; Guayaquil; Gwalior; Helsinki; Howrah; Houston; Hyderabad; Indianapolis; Indore; Istanbul; İzmir; Jabalpur; Jacksonville; Jaipur; Jakarta; Jodhpur; Kanpur; Khulna; Kolkata; Kota; La Paz; La Plata; Las Vegas; Lima; Lisbon; London; Lucknow; Ludhiana; Madrid; Madurai; Makassar; Manaus; Mar del Plata; Marrakesh; Medan; Medellín; Meerut; Mississauga; Montevideo; Montreal; Narayanganj; Nashville; Norfolk; Oklahoma City; Oruro; Ottawa; Palembang; Patna; Pekanbaru; Philadelphia; Phnom Penh; Podgorica; Porto; Porto Alegre; Prague; Prayagraj; Prishtinë; Quito; Rabat; Raipur; Ranchi; Recife; Reno; Richmond; Riga; Rio de Janeiro; Rome; Rosario; Salvador; San Antonio; Santa Cruz de la Sierra; Santiago de Chile; Santo Domingo; São Paulo; Sarajevo; Savannah; Seattle; Semarang; Siem Reap; Skopje; Sofia; Srinagar; Surabaya; Tallinn; Tanger; Tangerang; Thane; Tirana; Toronto; Valletta; Vancouver; Varanasi; Vienna; Vijayawada; Vilnius; Virginia Beach; Visakhapatnam; Warsaw; Washington, D.C.; Winnipeg; Zagreb

Remote (Global) · $50–$200/hour

G2i is hiring a Senior Software Engineer - AI Interaction Evaluator (Codex / Claude Code, up to $200/hr)

Responsibilities

  • Evaluate AI-generated coding interactions end-to-end
  • Judge whether outputs are useful, correct at a high level, and aligned with how a strong engineer would think
  • Assess the quality of explanations and reasoning, not just code
  • Distinguish between different levels of response quality (e.g., what makes a response a 2 versus a 4)
  • Provide clear, opinionated feedback on what worked, what didn’t, and what felt “off” or misleading
  • Help define what great looks like when interacting with tools like Cursor

Requirements

  • Staff / Principal-level engineer (or equivalent experience)
  • Strong background in one of the following: TypeScript / JavaScript, Python
  • Hands-on experience using: OpenAI Codex, Claude Code, Cursor
  • Deep familiarity with modern AI-assisted dev workflows
  • Able to evaluate code without needing to fully execute or deeply review every line
  • Comfortable giving direct, opinionated feedback
  • A high bar for what “good engineering” looks like

Nice to Have

  • Experience with tools like Cursor or similar AI-first IDEs
  • Prior exposure to prompt design or evaluation workflows
  • Experience mentoring senior engineers or defining engineering standards

Work Arrangement

Remote (Worldwide)

What This Role Actually Is

  • You will assess how AI coding agents behave in real-world scenarios, focusing on:
      • Whether the response makes sense
      • Whether the preamble and reasoning are useful
      • Whether the output reflects strong engineering judgment
      • Whether the interaction feels right to an experienced developer
  • This role is about engineering taste — not syntax correctness.

What We Mean by “Taste”

  • We’re specifically looking for engineers who can answer questions like:
      • Does this feel like something a strong engineer would actually say?
      • Is this explanation helpful, or just technically correct?
      • Is the model guiding the user well, or just dumping output?
      • Would this interaction build or erode trust?
  • You should be comfortable making subjective but rigorous judgments.

Engagement Details

  • US and Canada: up to $200/hr
  • EU and Latin America: up to $150/hr
  • Other locations: up to $100/hr
  • Hours: ~10–20 hours/week
  • Duration: Through early May (with possible extension)
  • Start: ASAP
  • Process: take-home evaluation exercise, then one behavioral interview

Required Skills

Modern AI-assisted dev workflows; tools like Cursor or similar AI-first IDEs
About the company
G2i

G2i is a video-based platform for hiring contract or full-time engineers, designed to help companies hire world-class talent quickly and efficiently. Since 2016, G2i has focused on reducing hiring noise by using video-based technical screening and assessments to increase hiring signal.

The company emphasizes quality, speed, and flexibility, offering a 7-day free trial and matching engineers in days rather than months. G2i serves startups to enterprises and supports hiring across the US, Canada, Latin America, and Europe.

Born in the open-source ecosystem, G2i actively gives back through initiatives like React Miami, the Developer Health Fund, and Dev Health OS. The platform specializes in frontend, backend, full-stack, mobile, infrastructure, data science, product management, and product design roles.

Job Details

Department: G2i Eng Team
Category: Other
Posted: 4 days ago