
đź§  AI Is Moving Into Voice, Search, and Wearables

Today in AI: NVIDIA’s open voice model, Google’s AI Mode, and Apple’s next bet

đź‘‹ Hello hello,

NVIDIA just dropped a free, open-source real-time voice model, which is basically an invitation to build your own conversational AI without paying a closed API tax.

Meanwhile, Google is laying out its vision for “AI Mode” in Search — a shift toward personal intelligence that makes Search feel less like a website and more like something that understands what you’re trying to get done.

And if you thought AI was staying inside your apps, Apple may have other plans: reports say it's working on an AI wearable, which could push assistants off the screen and into your daily life.

🔥🔥🔥 Three big updates

NVIDIA just released a real-time conversational AI voice model that’s free and open source. It’s called PersonaPlex 7B, and you can grab it directly on Hugging Face. That alone is a big deal—real-time voice is one of those “sounds simple, is actually hard” categories.
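If you want to poke at the weights yourself, pulling them down is one library away. Here's a minimal sketch using huggingface_hub; note the repo id is a guess based on the model name, so confirm the real one on the model card before running it.

```python
# Minimal sketch: download the PersonaPlex weights locally.
# pip install huggingface_hub
from huggingface_hub import snapshot_download

# NOTE: hypothetical repo id, inferred from the model name --
# check the actual Hugging Face model card for the real one.
local_dir = snapshot_download(repo_id="nvidia/personaplex-7b-v1")
print(f"Model files downloaded to: {local_dir}")
```

From there, the model card's own usage snippet is the source of truth for running real-time inference.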

Open-source voice models mean builders don’t have to rely only on closed APIs to ship voice assistants. More experimentation. Lower cost. Faster iteration. And a lot more weird (but fun) voice apps are coming soon.

Google published an update on where Search is headed next: a more personal, AI-powered experience. The core idea is “AI Mode” — where Search becomes less about ten blue links and more about understanding what you’re trying to do, then helping you do it.

This is Google saying the quiet part out loud. Search is turning into an assistant layer, not just a discovery layer. That changes how people find answers, make decisions, and even how creators get visibility.

Apple is reportedly working on an AI wearable. Details are still early, but the direction is clear: AI is moving off the screen and into something you wear. That’s a whole different category of usefulness—and privacy expectations.

A wearable changes behavior. It’s always there, always accessible, and can become the default way people interact with assistants without opening an app. If Apple enters this space seriously, it’s going to pressure everyone else to rethink what “AI product” even looks like.

🔥🔥 Two Tools Worth Trying

Krea introduced Realtime Edit, which lets you edit images with complex instructions in real time. If you liked the “Nano Banana” style of instant visual editing, this is the same vibe: fast iterations, quick creative control, and less time stuck in prompt purgatory. Best for creators, designers, and marketers who want to refine visuals live instead of running five separate generations.

VEED just launched Dynamic Subtitles—aka viral-style AI captions in one click. If you post short-form content (Reels, Shorts, TikToks), you already know captions aren’t optional anymore. This eliminates the “I don’t want to edit captions for 40 minutes” problem. Best for creators and social teams shipping volume fast.

🔥🔥 Things You Didn’t Know You Can Do With AI

Simon Meyer built an AI film with a surprisingly simple workflow: start with character creation, then generate the interview scenes with Veo 3.1's Ingredients mode for cleaner audio.

1. Generate your main character image using Google DeepMind Nano Banana (expect lots of iterations; see the API sketch after this list).
2. Lock the character + environment until it feels consistent and believable.
3. Use Google DeepMind Veo 3.1 Ingredients (via Freepik and invideo) to create the interview clips.
4. Focus on the Ingredients mode to improve audio quality (less echo, less distortion).
5. Compile the scenes into the final film and polish timing like a real edit.
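If you'd rather script step 1 than click through a UI, here's a rough sketch using Google's google-genai Python SDK, where Nano Banana is (at the time of writing) exposed as the gemini-2.5-flash-image model. Treat the model id and the prompt as assumptions and verify against the official docs.

```python
# Rough sketch of step 1: generate a character image via the google-genai SDK.
# pip install google-genai
from google import genai

client = genai.Client()  # reads your API key from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # assumed "Nano Banana" id; may change
    contents=(
        "Portrait of a weathered documentary interviewee, "
        "soft key light, neutral grey backdrop, photorealistic"
    ),
)

# Save the first image part of the response; skip any text parts.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("character_v1.png", "wb") as f:
            f.write(part.inline_data.data)
        break
```

Step 2's "lock the character" loop is essentially rerunning this with tweaked prompts until the output stays consistent, then carrying the saved image into the Veo 3.1 Ingredients flow.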

Did you learn something new?


💬 Quick poll: What’s one AI tool or workflow you use every week that most people would find super helpful?

Until next time,
Kushank @DigitalSamaritan
