Tavus has raised $40 million in Series B funding to redefine the relationship between humans and computers. The funding round was led by CRV, with strong participation from Scale Venture Partners, Sequoia Capital, Y Combinator, HubSpot Ventures, and Flex Capital. The investment will accelerate Tavus’ mission to create AI humans capable of emotional intelligence and multimodal interaction through text, voice, and face-to-face communication.
At the core of this vision lie PALs (Personal Affective Links): AI-driven digital humans that can see, hear, understand, and respond like real people. These PALs represent a major leap forward in human computing, marking a shift from traditional chatbots to AI companions that genuinely understand context, emotion, and personality.
“We’ve spent decades forcing humans to learn to speak the language of machines,” said Hassaan Raza, CEO of Tavus. “With PALs, we’re finally teaching machines to think like humans: to see, hear, respond, and look like we do. To understand emotion, context, and all the messy, beautiful stuff that makes us who we are. It’s not about more intelligent AI, it’s about AI that actually meets you where you are.”
For years, the human-computer interface has remained stagnant. From the command-line interfaces of the 1980s to modern graphical user interfaces, innovation plateaued at the point where users had to adapt to machines. Today’s text-based chatbots echo those early limitations, forcing users to articulate every command. However, Tavus is finally changing that narrative. By introducing PALs, the company is making digital interaction as seamless as talking to a friend.
PALs redefine the concept of AI assistants. They possess lifelike visual presence, interpret expressions and emotions, and move fluidly between video, voice, and text. More impressively, these AI humans remember context, understand subtle cues, and even take initiative: managing calendars, sending emails, and following up autonomously. Over time, they learn from every interaction, adapting to individual habits and personalities.
Behind each PAL lies a set of proprietary foundational models developed by Tavus’ in-house research team. These include:
- Phoenix-4, a state-of-the-art rendering model that generates lifelike facial expressions and emotion in real time.
- Sparrow-1, an advanced audio model designed to grasp tone, timing, and emotional nuance for fluid, natural conversations.
- Raven-1, a contextual perception model that understands gestures, environments, and expressions, allowing PALs to perceive the world like humans.
Together, these models integrate through an advanced orchestration and memory management system that gives PALs agency, presence, and emotional depth. Unlike previous AI systems, they don’t just mimic humans; they interact, remember, and act independently.
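The article does not describe how this orchestration works internally, but the perceive-remember-respond loop it implies can be sketched in miniature. The class and method names below are purely illustrative stand-ins, not Tavus’ actual models or APIs: a perception stage (in the role of Raven-1) emits an event, a memory store accumulates context across turns, and a response stage (standing in for Sparrow-1 and Phoenix-4) adapts its tone to the perceived expression.

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for the roles described in the article.
# None of these names or interfaces come from Tavus; they only
# illustrate the orchestration pattern the article describes.

@dataclass
class PerceptionEvent:
    """What a perception model might emit for one user turn."""
    speaker: str
    utterance: str
    expression: str  # e.g. "smiling", "confused"

@dataclass
class PALOrchestrator:
    """Toy orchestration + memory loop: perceive -> remember -> respond."""
    memory: list = field(default_factory=list)

    def perceive(self, event: PerceptionEvent) -> str:
        # Persist the interaction so later turns have context.
        self.memory.append(event)
        # Adapt delivery to the perceived emotional cue.
        tone = "warm" if event.expression == "smiling" else "reassuring"
        return self.respond(event, tone)

    def respond(self, event: PerceptionEvent, tone: str) -> str:
        # A real system would hand this to audio and rendering models;
        # here we simply return the reply text tagged with its tone.
        return f"[{tone}] Hi {event.speaker}, you said: {event.utterance!r}"

pal = PALOrchestrator()
reply = pal.perceive(PerceptionEvent("Ada", "Can you book my 3pm?", "smiling"))
print(reply)            # tone reflects the perceived expression
print(len(pal.memory))  # interactions accumulate as memory
```

The point of the sketch is the separation of concerns: perception, memory, and response are distinct stages wired together by a thin orchestrator, which is the pattern the article attributes to PALs.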
As Tavus leads the charge into the next frontier of AI, the company envisions a future where machines finally speak our language: emotionally aware, contextually intelligent, and truly human in interaction. This marks the dawn of a new era: computers that don’t just process data, but feel alive.
To share your insights, please write to us at info@intentamplify.com