
🧠 How Amazon Made Alexa Interesting Again (and What NVIDIA Did Next)

Today in AI: Alexa gets smarter, NVIDIA scales up, and OpenAI bets on voice.

👋 Hello hello,

The AI race is heating up — and everyone’s bringing their A-game.

Amazon just turned Alexa into a full-fledged ChatGPT rival. NVIDIA dropped a silicon monster called Rubin. And OpenAI? It’s going all-in on audio, betting that your voice will outlive your screen.

🔥🔥🔥 Three big updates

Amazon is moving its AI assistant beyond Echo devices and mobile apps with a full web-accessible Alexa+ experience at Alexa.com. You can now chat with Alexa+ directly in your browser — text or voice — similar to how you’d use ChatGPT or Google’s Gemini. Beyond traditional voice commands, the web app lets you upload documents, images, and emails, then ask Alexa to extract useful info, plan a schedule, or build lists.

It also ties into Amazon services like Fresh and Whole Foods for meal planning and grocery tasks. This expansion could broaden Alexa’s reach beyond smart speakers and into everyday computing workflows, blending intelligent conversation with actionable outputs.

NVIDIA’s Rubin platform is now in full production. At its core are six co-designed chips — the Vera CPU, Rubin GPU, NVLink-6 interconnect, ConnectX-9, BlueField-4, and Spectrum-6 Ethernet — all optimized to function as one system.

Rubin delivers huge efficiency gains: up to 10× lower token costs and the ability to train mixture-of-experts models with 4× fewer GPUs, compared with Blackwell architectures. Its rack-scale design means a single NVL72 rack acts as one compute unit, and pods can combine over a thousand Rubin GPUs.

Rubin also includes a context memory storage layer for next-level agentic AI workflows and next-generation AI networking via Spectrum-6 — all engineered for sustained, efficient operation at massive scale.

OpenAI is pushing audio as the next dominant interface, moving beyond screens and keyboards toward systems that listen and respond naturally. This shift reflects a broader industry thesis: that voice and ambient AI will become the primary ways people interact with technology — in cars, homes, wearables, and everyday devices. 

This isn’t just about speech-to-text. It’s about embedding AI into real-world, conversational environments where audio is primary — and screens fade into the background.

🔥🔥 Two pro tips worth trying

1) Make ChatGPT brutally honest

In ChatGPT settings → Custom Instructions, paste:

“Be extremely direct, don’t be scared of offending me. If I am wrong, tell me I’m wrong. Think like a first-principles thinker who uses logic & logic only. Disregard feelings.”

This turns the model from a polite assistant into an analytical reality checker — ideal when you want no-nonsense answers.
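The same directive also works outside the ChatGPT app: if you call the model through the API, the custom instruction goes in the system message. A minimal sketch, assuming the official `openai` Python SDK and a hypothetical model name (the payload is only built here, not sent, so no API key is needed):

```python
# Carry the "brutally honest" custom instruction into an API call by
# placing it in the system message of a chat request.
# Hypothetical sketch: the model name is an assumption, not from the article.

HONESTY_DIRECTIVE = (
    "Be extremely direct, don't be scared of offending me. "
    "If I am wrong, tell me I'm wrong. Think like a first-principles "
    "thinker who uses logic & logic only. Disregard feelings."
)

def build_request(user_question: str, model: str = "gpt-4o") -> dict:
    """Build the request payload; send it with client.chat.completions.create(**payload)."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": HONESTY_DIRECTIVE},  # persistent instruction
            {"role": "user", "content": user_question},        # your actual question
        ],
    }

payload = build_request("Is my startup idea viable?")
```

Unlike the settings toggle, this keeps the directive scoped to a single request, so you can switch the honesty mode on and off per call.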

2) Add memory for projects in Gemini

In Gemini settings → Instructions, add a directive like:

“Dump new memory for each project.”

This helps Gemini retain context across sessions, especially when juggling multiple long-term tasks or clients.

🔥 Workflow of the Week

If you’re vibe-coding (building apps with AI as your primary coder), there’s one prompt you always want to run first — it sets up context, scaffolding, and structure so the rest of your code generation isn’t chaotic.

Before you generate anything in your VibeCode editor, send this prompt:

You are a senior mobile app designer and engineer. Give me a prompt to send to Rork to recreate the UI shown in the provided image as closely as possible, matching layout, spacing, typography, colors, component shapes, and overall visual hierarchy exactly (this is not inspiration). Build the app using strict OOP principles, breaking the UI into clean, reusable components, and use a single global theme file for all colors, fonts, spacing, corner radius, and shadows – no hardcoded styles inside components. The app must be fully functional with proper navigation, state handling, and interactions that reflect the intent of the design. If any assets are missing (icons, images, avatars, illustrations, backgrounds), generate them automatically in a style that perfectly matches the provided design. Prioritize clean architecture, maintainability, and performance, and do not introduce new UI elements or redesign anything – follow the image precisely.

Then describe your app idea.

Vibe coding is an emerging practice that allows AI to handle most of the code while you guide it with prompts, focusing on high-level structure rather than low-level syntax. Without a scaffold prompt like this, AI models can produce inconsistent designs, duplicated logic, or mismatched components — because they’re generating code one piece at a time. This setup prompt stops that before it starts.

Do you like this new format?


💬 Quick poll: What's the AI tool you use daily that nobody talks about?

Hit reply — we're always hunting for underrated gems.

Until next time,
Kushank @DigitalSamaritan
