Machine Learning Engineer, Assessments

Speak
San Francisco, CA
Category: Engineering
Remote
Job Description
We're hiring an ML Engineer, Assessments to help build best-in-class assessment systems across multiple products. You will work in a tight loop with our Assessment Design Lead, Machine Learning, Product, and Engineering teams to turn assessment constructs and rubrics into reliable, scalable scoring and feedback systems.

Requirements

  • Ship and own assessment ML systems end-to-end
      ◦ Build, deploy, and maintain scoring models/pipelines (feature extraction → model training → inference → feedback generation)
      ◦ Own monitoring, regression tests, and ongoing iteration to maintain accuracy targets
  • Define and operationalize evaluation
      ◦ Implement validation/evaluation frameworks for assessments, including metrics, test sets, and offline/online analysis
      ◦ Translate assessment requirements into measurable acceptance criteria and guardrails
  • Partner deeply with the Assessment Design Lead
      ◦ Co-develop the strategy, together with the Content team, to grow assessments into a core platform at Speak
      ◦ Work in a tight weekly loop to deliver incremental improvements
  • Drive near-term delivery across products
      ◦ Stand up or improve summative assessments (spoken language ability) and bring them reliably to production
      ◦ Prototype and validate formative assessment approaches to measure improvement over weeks/months
  • Support data and labeling strategy
      ◦ Help define data needs for training/evaluation (including psychometric measurement needs)
      ◦ Build or improve pipelines that support label collection and analysis (especially for efficacy studies)

Benefits

  • Competitive salary
  • Equity
  • Generous Paid Time Off
  • 401k Matching
  • Retirement Plan
  • Visa Sponsorship
  • Four Day Work Week
  • Generous Parental Leave
  • Tuition Reimbursement
  • Relocation Assistance