Remote (Global) Employment

LILT is hiring an AI Red Team Engineer - Traditional Chinese

About the Role

LILT is seeking freelance AI Red Team Engineers fluent in Traditional Chinese to stress-test AI systems. In this role, you will apply adversarial thinking to uncover vulnerabilities, systematically document outcomes, and collaborate directly with engineers and safety researchers to enhance system robustness.

What You'll Do

  • Craft prompts and scenarios to test model guardrails for LLMs, multimodal models, inference services, RAG/embeddings, and product integrations.
  • Explore creative methods to bypass AI system restrictions.
  • Systematically document outcomes from adversarial testing.
  • Think like an adversary to uncover weaknesses in AI systems.
  • Collaborate with engineers and safety researchers to share findings and improve system defenses.

What We're Looking For

  • A Bachelor's or Master’s degree in Computer Science, Software Engineering, Cybersecurity, Digital Forensics, or a related field.
  • Advanced (C1) or above proficiency in English.
  • Demonstrated adversarial thinking.
  • Knowledge of common model vulnerabilities like prompt injection, prompt-history leakage, and data exfiltration via RAG.
  • Experience in AI/ML security, evaluation, and red teaming, particularly with LLMs, AI agents, and RAG pipelines.
  • Readiness to learn new methods, switch between tasks quickly, and sometimes work with challenging, complex guidelines.
  • Proficiency in scripting and automation using Python, Bash, or PowerShell.
  • Familiarity with AI red-teaming frameworks such as garak or PyRIT.

Nice to Have

  • Physical-world adversarial testing experience.
  • Experience with containerization and CI/CD security tools, especially Docker.
  • Proficiency in offensive exploitation and exploit development.
  • Skills in reverse engineering using tools like Ghidra or equivalents.
  • Expertise in network and application security, including web application security.
  • Knowledge of OS security concepts such as Linux privilege escalation and Windows internals.
  • Familiarity with secure coding practices for full-stack development.

Technical Stack

  • Python, Bash, PowerShell
  • Docker, Ghidra

Team & Environment

You will collaborate directly with engineers and safety researchers as part of a freelance project team.

Benefits & Compensation

  • Get paid for your expertise, with rates up to $55/hour based on skills, experience, and project needs.
  • Take part in a part-time, remote, freelance project that fits around your primary professional or academic commitments.
  • Work on advanced AI projects and gain valuable portfolio-enhancing experience.
  • Influence how future AI models understand and communicate in your field.

Work Mode

This is a fully remote, freelance position.

LILT is an equal opportunity employer. We extend equal opportunity to all individuals without regard to an individual’s race, religion, color, national origin, ancestry, sex, sexual orientation, gender identity, age, physical or mental disability, medical condition, genetic characteristics, veteran or marital status, pregnancy, or any other classification protected by applicable local, state or federal laws.

Required Skills
PythonBashPowerShellDockerGhidraRed TeamingVulnerability AssessmentPenetration TestingReverse EngineeringThreat ModelingSecurity ToolingScriptingMalware AnalysisAdversarial AI
Freelancing without stability?

Get steady projects, keep your freedom

Iglu connects you with international clients and handles contracts, payments, and admin. You get consistent work and flexibility — no more chasing invoices or worrying about gaps.

Consistent client projects
Contract & payment management
Flexible work schedule
Revenue-sharing compensation
See open positions
Work from anywhere
About company
LILT

Provides multilingual AI and human-verified services to Enterprises, Governments, and AI Developers around the world. Leading transformation in how the world communicates through AI.

Visit website
Job Details
Category security
Posted 4 months ago