
Research Staff, LLMs

Deepgram | Ann Arbor, Michigan, United States | San Francisco, California, United States | Remote, Onsite

Company Overview


Deepgram is the leading voice AI platform for developers building speech-to-text (STT), text-to-speech (TTS) and full speech-to-speech (STS) offerings. 200,000+ developers build with Deepgram’s voice-native foundational models – accessed through APIs or as self-managed software – due to our unmatched accuracy, latency and pricing. Customers include software companies building voice products, co-sell partners working with large enterprises, and enterprises solving internal voice AI use cases.

The company ended 2024 cash-flow positive with 400+ enterprise customers, 3.3x annual usage growth across the past 4 years, over 50,000 years of audio processed, and over 1 trillion words transcribed. There is no organization in the world that understands voice better than Deepgram.

The Opportunity


Voice is the most natural modality for human interaction with machines. However, current sequence modeling paradigms based on jointly scaling model and data cannot deliver voice AI capable of universal human interaction. The challenges are rooted in fundamental data problems posed by audio: real-world audio data is scarce and enormously diverse, spanning a vast space of voices, speaking styles, and acoustic conditions. Even if billions of hours of audio were accessible, its inherent high dimensionality creates computational and storage costs that make training and deployment prohibitively expensive at world scale.

We believe that entirely new paradigms for audio AI are needed to overcome these challenges and make voice interaction accessible to everyone.

The Role


Deepgram is looking for an experienced researcher who has worked extensively with Large Language Models (LLMs) and has a deep understanding of the transformer architecture to join our Research Staff. As a member of the Research Staff, this individual should have extensive experience working on the hard technical aspects of LLMs, such as data curation, distributed large-scale training, optimization of the transformer architecture, and reinforcement learning (RL) training.

The Challenge


We are seeking researchers who:

  • See "unsolved" problems as opportunities to pioneer entirely new approaches
  • Can identify the one critical experiment that will validate or kill an idea in days, not months
  • Have the vision to scale successful proofs-of-concept 100x
  • Are obsessed with using AI to automate and amplify their own impact

If you find yourself energized rather than daunted by these expectations—if you're already thinking about five ideas to try while reading this—you might be the researcher we need. This role demands obsession with the problems, creativity in approach, and relentless drive toward elegant, scalable solutions. The technical challenges are immense, but the potential impact is transformative.

What You'll Do


  • Brainstorming and collaborating with other members of the Research Staff to define new LLM research initiatives
  • Surveying the literature broadly, then evaluating, classifying, and distilling current methods
  • Designing and carrying out experimental programs for LLMs
  • Driving transformer (LLM) training jobs successfully on distributed compute infrastructure and deploying new models into production
  • Documenting and presenting results and complex technical concepts clearly for a target audience
  • Staying up to date with the latest advances in deep learning and LLMs, with a particular eye towards their implications and applications within our products

You'll Love This Role if You


  • Are passionate about AI and excited about working on state-of-the-art LLM research
  • Have an interest in producing and applying new science to help us develop and deploy large language models
  • Enjoy building from the ground up and love to create new systems
  • Have strong communication skills and are able to translate complex concepts clearly
  • Are highly analytical and enjoy delving into detailed analyses when necessary

It's Important to Us That You Have


  • 3+ years of experience in applied deep learning research, with a solid understanding of the applications and implications of different neural network types, architectures, and loss mechanisms
  • Proven experience working with large language models (LLMs), including experience with data curation, distributed large-scale training, optimization of the transformer architecture, and RL training
  • Strong experience coding in Python and working with PyTorch
  • Experience with various transformer architectures (auto-regressive, sequence-to-sequence, etc.)
  • Experience with distributed computing and large-scale data processing
  • Prior experience in conducting experimental programs and using results to optimize models

It Would Be Great if You Had


  • Deep understanding of transformers, causal LMs, and their underlying architecture
  • Understanding of distributed training and distributed inference schemes for LLMs
  • Familiarity with RLHF labeling and training pipelines
  • Up-to-date knowledge of recent LLM techniques and developments
  • Published papers in deep learning research, particularly related to LLMs and deep neural networks

Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC and NVIDIA, Deepgram has raised over $85 million in total funding. If you're looking to work on cutting-edge technology and make a significant impact in the AI industry, we'd love to hear from you!

Deepgram is an equal opportunity employer. We want all voices and perspectives represented in our workforce. We are a curious bunch focused on collaboration and doing the right thing. We put our customers first, grow together and move quickly.

We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate. We are happy to provide accommodations for applicants who need them.

Compensation Range: $150K - $220K

Life at Deepgram

Deepgram builds Speech Recognition for Enterprise. Powered by our own deep learning models, our scalable API reliably translates high-value audio into parsable data with industry-leading accuracy. Deepgram is an NVIDIA partner and a Y Combinator company.
Thrive Here & What We Value

  • Equal Opportunity Employer
  • Collaboration and Doing the Right Thing
  • Customer-Centric Approach
  • Growth-Oriented Company
  • Cutting-Edge Technology
