Hybrid | 8-month contract | Multinational Tech Company
You will carry out research, data input, and support functions for the client's most prominent Generative AI products, using novel strategies to identify unknown content safety risks. This will involve:
- Identifying strategies and methodologies for surfacing content safety risks.
- Conducting desk-based research to identify techniques the team can apply in its testing exercises.
- Collaborating with testing teams: partnering with other teams to understand product functionality and potential vulnerabilities, and to develop actionable solutions.
- Data analysis: reviewing raw results to assess policy violations, egregious content, and emerging trends, including manual reviews.
- Conducting in-depth analysis of complex issues and edge cases, providing clear and actionable insights.
Skills:
- Bachelor's degree in a relevant field (e.g., Social Science, Computer Science, Information Security, Linguistics).
- 2-4 years of experience in Trust and Safety, Product Policy, Privacy and Security, Legal, Compliance, Risk Management, or similar.
- Proven ability to think strategically and identify emerging threats and vulnerabilities.
- Excellent analytical skills.
- Strong written and verbal communication skills to effectively convey complex technical information to both technical and non-technical audiences.
- Strong research skills, including data analysis, data entry, annotation, and research methodology.
This role may be exposed to graphic, controversial, and/or upsetting content.
