Responsibilities
- Lead end-to-end project execution, from initial planning and guideline development to data delivery and post-project review.
- Supervise large-scale multilingual data workflows, including collection of audio, text, and image data, as well as large language model evaluation and training-data tasks such as RLHF preference labeling, SFT, response ranking, and safety assessments.
- Forecast workforce capacity needs and coordinate internal and external contributors across time zones and language regions.
- Track and report on throughput metrics, such as volume of data processed per hour or day.
- Track and report on data quality metrics, including accuracy rates, Inter-Annotator Agreement (IAA), and performance on gold-standard datasets.
- Track and report on productivity metrics, including cost per task and efficiency of workforce output.
- Establish quality assurance processes, conduct root cause analysis for performance issues, and implement targeted retraining for annotation teams.
- Develop and maintain real-time dashboards to monitor project status and identify operational bottlenecks.
- Manage partnerships with linguistic experts and crowd-sourced contributors, ensuring compliance with service-level agreements and sensitivity to regional linguistic nuances.
- Collaborate with technical operations teams working on applied AI systems.
- Convert complex technical specifications into clear, easy-to-follow instructions for non-technical team members.
- Enable continuous improvement by integrating data insights into updates for annotation guidelines and model training approaches.
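As context for the quality metrics mentioned above: Inter-Annotator Agreement (IAA) between two annotators is commonly quantified with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal illustrative sketch (function name and example labels are our own, not part of the role description):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two annotators."""
    assert len(labels_a) == len(labels_b) and labels_a, "need paired labels"
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    po = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's label distribution.
    ca, cb = Counter(labels_a), Counter(labels_b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical safety-review labels from two annotators:
a = ["safe", "safe", "unsafe", "unsafe"]
b = ["safe", "unsafe", "unsafe", "unsafe"]
print(cohens_kappa(a, b))  # 0.5: moderate agreement beyond chance
```

A kappa near 1.0 indicates strong agreement; values near 0 mean the annotators agree no more than chance, which typically triggers the guideline clarification and retraining loops described above.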
Work Arrangement
Remote (Worldwide)
Other
- Work on varied projects from any location and at any preferred time.
- Receive timely and equitable compensation.
- Grow your professional connections within a collaborative environment.
- Experience a simplified application process designed around your skills.