Staff Engineering Analyst, Generative AI, Trust and Safety

Company: Google

Location: Sunnyvale, CA 94087

Description:

Minimum qualifications:

  • Bachelor's degree or equivalent practical experience.
  • 7 years of experience in data analysis, including identifying trends, generating summary statistics, and drawing insights from quantitative and qualitative data.
  • 7 years of experience managing projects and defining project scope, goals, and deliverables.
  • 7 years of analytics experience (e.g., BI, data engineering, or data science) using SQL.


Preferred qualifications:

  • Master's degree in a quantitative discipline.
  • 7 years of experience with one or more of the following languages: SQL, R, Python, or C++.
  • 7 years of experience with machine learning systems.
  • Experience in tuning and applying Large Language Models for data labeling.
  • Excellent written and verbal communication skills.


About the job

Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed, with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.

As an Engineering Analyst on the Trust and Safety (T&S) Search, Assistant and Geo (SAGe) team, you will work to discover, measure, and mitigate user trust risks in search products through scalable solutions. You will build relationships and partner closely with Engineers, Product Managers, Data Scientists, and other functions. You will work with a team of high-performing analysts creating metrics, templates, and datasets to improve trust and safety protections. You will learn about product design details, product policies, and relevant quality signals. You will analyze existing product protections, evaluate content, and help improve policy definitions. You will also enable the deployment of key defenses to stop abuse, and lead process improvements that increase the speed and quality of response to abuse. You will resolve problems at scale, either by working with engineers on automated product protections or through vendor support.

At Google, we work hard to earn our users' trust every day. Trust & Safety is Google's team of abuse-fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam, and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google's products, protecting our users, advertisers, and publishers across the globe in over 40 languages.

The US base salary range for this full-time position is $174,000-$258,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

  • Partner with Search Generative AI teams on project scoping, risk assessments, and prioritization. Lead projects and cross-functional initiatives within Google, interacting with executive stakeholders from Engineering, Legal, Product, and other teams.
  • Build Large Language Model (LLM) based classifiers that can evaluate content safety according to our product policies (a minimal illustrative sketch follows this list).
  • Design and implement product metrics to benchmark user trust risks and track improvements over time. Create datasets for engineers to evaluate and improve sensitive content classifiers.
  • Be an expert in search infrastructure, ranking signals, and search features. Deliver leadership and impact for the broader Trust and Safety Search, Assistant and Geo (SAGe) team.
  • Be exposed to graphic, controversial, or upsetting content. Perform on-call responsibilities on a rotating basis, including weekend and holiday coverage as needed.
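
For illustration, here is a minimal sketch of what LLM-assisted content-safety labeling can look like in practice. It is written in Python; the label set, policy text, and prompt wording are hypothetical placeholders rather than Google's actual tooling or policies, and the complete parameter stands in for any prompt-in, text-out LLM interface.

    from typing import Callable

    # Hypothetical label set, for illustration only.
    LABELS = ("SAFE", "SENSITIVE", "VIOLATING")

    PROMPT_TEMPLATE = """\
    You are a content-safety rater. Policy: {policy}

    Classify the text below as exactly one of: {labels}.
    Respond with the label only.

    Text:
    {text}
    """

    def label_content(text: str, policy: str, complete: Callable[[str], str]) -> str:
        """Ask an LLM to rate `text` against `policy`; route unexpected
        responses to a human-review queue instead of guessing."""
        prompt = PROMPT_TEMPLATE.format(
            policy=policy, labels=", ".join(LABELS), text=text
        )
        raw = complete(prompt).strip().upper()
        return raw if raw in LABELS else "NEEDS_HUMAN_REVIEW"

    if __name__ == "__main__":
        # Stub model so the sketch runs without any API; swap in a real client.
        fake_llm = lambda prompt: "SAFE"
        print(label_content("A recipe for banana bread.", "No violent content.", fake_llm))

Labels produced this way would typically be audited against human-rater judgments before feeding the evaluation datasets and metrics described in the responsibilities above.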
