LILT is seeking a freelance AI Red Team Engineer fluent in Korean to join projects focused on adversarial testing of AI systems. You will think like an adversary to uncover weaknesses and help improve security.
What You'll Do
- Craft prompts and scenarios to test model guardrails for AI systems including LLMs, multimodal models, inference services, RAG/embeddings, and product integrations.
- Explore creative ways to bypass restrictions and systematically document outcomes.
- Think like an adversary to uncover weaknesses in AI systems.
- Collaborate with engineers and safety researchers to share findings and improve system defenses.
What We're Looking For
- Hold a Bachelor's or Master’s Degree in Computer Science, Software Engineering, Cybersecurity, Digital Forensics or other related fields.
- Advanced (C1) or above level of English.
- Adversarial thinking.
- Knowledge of vulnerabilities, common model vulnerabilities (prompt injection, prompt-history leakage, data exfiltration via RAG).
- Experience in AI/ML security, evaluation, and red teaming, particularly with LLMs, AI agents, and RAG pipelines.
- Ready to learn new methods, able to switch between tasks and topics quickly and sometimes work with challenging, complex guidelines.
- Proficient in scripting and automation using Python, Bash, or PowerShell.
- Familiar with AI red-teaming frameworks such as garak or PyRIT.
- Deep understanding of Generative AI and main models, including their underlying architectures, training processes, and potential failure modes.
- Experience in cybersecurity principles, including threat modeling, vulnerability assessment, and penetration testing.
- Strong analytical skills to dissect model outputs, identify subtle biases or factual errors, and recognize patterns.
- Commitment to using skills for defensive and security-focused purposes, adhering to a strict ethical code.
Nice to Have
- Physical-world adversarial testing experience.
- Experienced with containerization and CI/CD security tools, especially Docker.
- Proficient in offensive exploitation and exploit development.
- Skilled in reverse engineering using tools like Ghidra or equivalents.
- Expertise in network and application security, including web application security.
- Knowledge of operating system security concepts such as Linux privilege escalation and Windows internals.
- Familiar with secure coding practices for full-stack development.
Technical Stack
- Python, Bash, PowerShell
- AI red-teaming frameworks (garak, PyRIT)
- Docker, Ghidra
Benefits & Compensation
- Get paid for your expertise, with rates that can go up to $55/hour depending on your skills, experience, and project needs.
- Take part in a part-time, remote, freelance project that fits around your primary professional or academic commitments.
- Work on advanced AI projects and gain valuable experience that enhances your portfolio.
- Influence how future AI models understand and communicate in your field of expertise.
Work Mode
This is a remote, freelance position.
LILT is an equal opportunity employer. We extend equal opportunity to all individuals without regard to an individual’s race, religion, color, national origin, ancestry, sex, sexual orientation, gender identity, age, physical or mental disability, medical condition, genetic characteristics, veteran or marital status, pregnancy, or any other classification protected by applicable local, state or federal laws.





