ByteDance
Location: Singapore
Category: AI / Safety / Research
Duration: Minimum 3 Months
Start: 2026
Allowance: Competitive
About the Role
ByteDance’s Seed Global Data Team sits at the centre of the company’s efforts to build and improve advanced Large Language Models (LLMs). As a Model Safety Policy Project Intern, you will gain first-hand exposure to how AI systems are trained, evaluated, and made safer for users worldwide. This internship offers a fast-paced environment, meaningful project ownership, and an opportunity to explore the evolving field of AI safety.
What You Will Do
- Conduct research on the latest developments in AI safety across academia, industry, and policy spheres.
- Support the design and refinement of evaluation frameworks for multi-modal models, identifying safety risks and failure modes.
- Assist in analysing safety-related datasets to uncover insights that shape model improvements and product decisions.
- Contribute to short-term, high-impact projects focused on safe model training, evaluation, and policy alignment.
- Work with cross-functional teams to strengthen internal safety standards and responsible AI practices.
Note: The role may involve exposure to sensitive or harmful content. ByteDance provides resilience training and structured support resources.
Requirements
- Currently pursuing a Bachelor’s or Master’s degree in AI Policy, Computer Science, Engineering, Journalism, International Relations, Law, Regional Studies, or a related field.
- Strong analytical ability and comfort working with both qualitative and quantitative data.
- Creative problem-solver who is comfortable with ambiguity and readily adopts new tools to improve workflows.
Preferred
- Experience in AI Safety, Trust & Safety, risk management, or related domains.
- Curious, detail-oriented, and eager to learn from real-world case studies.
- Strong interest in emerging technologies and the human impact of AI systems.