Two undergrads built an AI speech model to rival NotebookLM


A pair of undergrads, neither with extensive AI expertise, say that they’ve created an openly available AI model that can generate podcast-style clips similar to Google’s NotebookLM.

The market for synthetic speech tools is vast and growing. ElevenLabs is one of the largest players, but there’s no shortage of challengers (see PlayAI, Sesame, and so on). Investors believe that these tools have immense potential. According to PitchBook, startups developing voice AI tech raised over $398 million in VC funding last year.

Toby Kim, one of the Korea-based co-founders of Nari Labs, the group behind the newly released model, said that he and his fellow co-founder started learning about speech AI three months ago. Inspired by NotebookLM, they wanted to create a model that offered more control over generated voices and “freedom in the script.”

Kim says they used Google’s TPU Research Cloud program, which provides researchers with free access to the company’s TPU AI chips, to train Nari’s model, Dia. Weighing in at 1.6 billion parameters, Dia can generate dialogue from a script, letting users customize speakers’ tones and insert disfluencies, coughs, laughs, and other nonverbal cues.

Parameters are the internal variables models use to make predictions. Generally, models with more parameters perform better.

Available from the AI dev platform Hugging Face and GitHub, Dia can run on most modern PCs with at least 10GB of VRAM. It generates a random voice unless prompted with a description of an intended style, but it can also clone a person’s voice.

In TechCrunch’s brief testing of Dia through Nari’s web demo, Dia worked quite well, generating two-way chats about any subject without complaint. The quality of the voices seems competitive with other tools out there, and the voice cloning function is among the easiest this reporter has tried.

Like many voice generators, Dia offers little in the way of safeguards, however. It’d be trivially easy to craft disinformation or a scammy recording. On Dia’s project pages, Nari discourages abuse of the model to impersonate, deceive, or otherwise engage in illicit campaigns, but the group says it “isn’t responsible” for misuse.

Nari also hasn’t disclosed which data it scraped to train Dia. It’s possible Dia was developed using copyrighted content — a commenter on Hacker News notes that one sample sounds like the hosts of NPR’s “Planet Money” podcast. Training models on copyrighted content is a widespread but legally dubious practice. Some AI companies claim that fair use shields them from liability, while rights holders assert that fair use doesn’t apply to training.

In any event, Kim says Nari’s plan is to create a synthetic voice platform with a “social aspect” on top of Dia and larger, future models. Nari also intends to release a technical report for Dia, and to expand the model’s support to languages beyond English.
