Remote (Global), Contract. Eligible locations: Argentina, Brazil, China, Egypt, France, Germany, India, Indonesia, Japan, Kenya, Korea, Mexico, Russia, Thailand, Turkey, Ukraine, Vietnam.

LILT is hiring a Software Engineering & DevOps AI Rater/Evaluator

About the Role

The AI Rater/Evaluator will play a crucial role in assessing the quality of AI-generated text, with a specific focus on software engineering and DevOps content. The ideal candidate will have a strong background in software engineering, experience with DevOps practices, and a keen eye for detail to ensure the accuracy and relevance of the AI's outputs.

Responsibilities

  • Evaluate AI-generated text for accuracy and relevance in software engineering and DevOps contexts.
  • Provide detailed, constructive feedback to improve the AI's language models through iterative evaluation.
  • Collaborate with cross-functional teams, software engineers, DevOps professionals, and data scientists to identify and address areas for improvement in AI outputs.
  • Ensure AI-generated content meets high standards of quality and reliability and adheres to industry standards and best practices.
  • Contribute to the development of evaluation criteria, guidelines, test cases, and supporting tools and resources.
  • Analyze, document, and report on patterns, trends, and model performance in AI-generated software engineering and DevOps content.
  • Monitor the effectiveness of AI-generated content in real-world applications and provide regular updates on evaluation progress and outcomes.
  • Provide training and support to other team members on AI evaluation processes.
  • Participate in training and brainstorming sessions to stay current with AI technologies, evaluation methods, and best practices.
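To make the "evaluation criteria" responsibility concrete, here is a minimal, purely illustrative sketch of the kind of scoring rubric an evaluator might apply to an AI-generated DevOps answer. The dimension names, scale, and simple averaging are assumptions for illustration, not LILT's actual guidelines or tooling.

```python
from dataclasses import dataclass


@dataclass
class Rating:
    """One evaluator's scores for a single AI-generated response.

    Each dimension is scored 1 (poor) to 5 (excellent); the dimensions
    and scale here are hypothetical examples.
    """
    accuracy: int      # is the technical content correct?
    relevance: int     # does it actually answer the prompt?
    completeness: int  # are key steps and caveats covered?
    notes: str = ""    # free-text feedback for model improvement


def overall(r: Rating) -> float:
    """Unweighted average of the three dimensions.

    Real guidelines would likely define weights, hard failure gates
    (e.g. any factual error caps the score), and calibration rules.
    """
    return round((r.accuracy + r.relevance + r.completeness) / 3, 2)


rating = Rating(accuracy=4, relevance=5, completeness=3,
                notes="Missing rollback step in the deployment plan")
print(overall(rating))  # 4.0
```

In practice, structured ratings like this make feedback aggregatable across evaluators, which is what lets patterns and trends in model quality be reported over time.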

Nice to Have

  • Advanced degree in computer science, software engineering, or a related field.
  • Certification in AI, machine learning, or NLP.
  • Experience with AI model development and deployment.
  • Proficiency in programming languages such as Python, Java, or C++.
  • Experience with cloud platforms such as AWS, Azure, or Google Cloud.
  • Familiarity with DevOps tools such as Docker, Kubernetes, or Jenkins.
  • Experience with data visualization and reporting tools.
  • Knowledge of software engineering best practices and methodologies.
  • Experience with agile development methodologies.
  • Familiarity with AI ethics and responsible AI practices.

Compensation

Competitive

Work Arrangement

Remote

Team

Collaborative and dynamic team focused on AI and software engineering.

What You'll Need

  • Proven experience in software engineering and DevOps, including the software development lifecycle and common tools and technologies.
  • Strong analytical and problem-solving skills, with excellent attention to detail and accuracy.
  • Familiarity with AI and machine learning concepts; experience with natural language processing (NLP) is a plus.
  • Experience with AI evaluation and testing methodologies, model training, and evaluating AI-generated content.
  • Experience with data analysis and reporting tools.
  • Ability to provide constructive, actionable feedback.
  • Strong communication and collaboration skills, with experience using collaborative tools and platforms.
  • Proficiency in written and verbal English.
  • Strong organizational and time-management skills, with the ability to work independently and remotely in a fast-paced environment.
  • Ability to adapt to new technologies and methodologies.

What We Offer

  • Competitive compensation and benefits package.
  • Remote work arrangement with flexible hours.
  • Collaborative and dynamic team environment.
  • Opportunities for professional growth and development.
  • Access to the latest AI technologies and tools.
  • Supportive and inclusive work culture.
  • Regular training and development opportunities.

How to Apply

Interested candidates are encouraged to submit their resume and cover letter. Please include any relevant experience or qualifications that make you a strong fit for this role. We look forward to reviewing your application and potentially discussing how you can contribute to our team.


About company
LILT
LILT builds multilingual AI and human-verified services that make the world's information available to everyone, regardless of language. The company serves Enterprises, Governments, and AI Developers worldwide.
Job Details
Department: LiltLancer Community AI Data Services
Category: Infrastructure