🧠 Claude UI, NVIDIA models, Microsoft chips
Plus: free AI video courses, a near-perfect OCR model, and a Nano Banana prompt going viral.
👋 Hello hello,
Claude just turned work tools into things you can actually interact with, which is a quiet but important shift in how people learn to trust AI at work. NVIDIA, meanwhile, is opening up climate models that were previously locked behind expensive infrastructure and big budgets.
And if you’re wondering how all of this scales in the real world, Microsoft’s new inference chip is the part most people will overlook but probably shouldn’t.
Let’s dive in.
🔥🔥🔥 Three big updates
Claude rolled out interactive versions of common work tools directly inside the interface.
You can now draft Slack messages, visualize ideas as Figma-style diagrams, or build and view Asana timelines without leaving Claude. Instead of describing outputs, you can see and interact with them.
One example: connect Hex to ask questions about your data and get answers back as charts, tables, and citations.
This is a subtle but important move to increase adoption of MCP (Model Context Protocol). By adding visible, tangible UI layers, Claude builds trust. People are more likely to use MCP when they can see it working, not just read about it.
NVIDIA announced that it’s open-sourcing models from its Earth-2 initiative, which focuses on high-resolution climate and weather simulation.
These models are designed to help researchers, governments, and companies simulate environmental systems with more precision and speed.
Climate modeling has traditionally been expensive, slow, and limited to well-funded institutions. Open models lower the barrier for experimentation and collaboration, especially for researchers and startups working on climate risk, infrastructure planning, or environmental forecasting.
This is NVIDIA flexing not just GPU power, but platform influence.
Microsoft announced Maia 200, a custom AI accelerator designed specifically for inference workloads. Inference is where AI actually gets used at scale — answering queries, generating outputs, and running in production. Training gets headlines, but inference eats the budget.
This signals Microsoft’s focus on efficiency and control across its AI stack. Owning inference hardware helps reduce dependency, control costs, and optimize performance for real-world deployment across Azure and Microsoft products.
🔥🔥 Two Pro Tips Worth Knowing
1. 🎬 Runway Academy (free AI video courses)
Runway quietly offers a solid set of free courses focused on AI video creation and workflows. If you’re experimenting with AI video but feel stuck at “cool clip” instead of “repeatable system,” these courses help bridge that gap. They’re practical, visual, and easy to follow.
Best for creators and teams who want to understand how to actually use AI video tools, not just test them once.
2. 📄 NuMarkdown-8B (open-source OCR at 99% accuracy)
An open-source model called NuMarkdown-8B is making waves for extracting text from images with reported 99% OCR accuracy. It’s designed for production-ready document digitization — fast extraction, clean structure, and no manual typing.
Best for anyone working with scanned documents, PDFs, or image-heavy workflows who wants reliable OCR without closed systems.
You can access the reference model here.
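If you want to try it yourself, here's a minimal sketch of what running NuMarkdown-8B through Hugging Face transformers might look like. Treat the repo name and prompt handling as assumptions on my part, since the model card has the authoritative loading code.

```python
# Minimal sketch: OCR a scanned page into Markdown with NuMarkdown-8B.
# Assumptions: the repo name below and the standard Hugging Face
# vision-language chat format - check the model card before relying on this.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForImageTextToText

MODEL_ID = "numind/NuMarkdown-8B-Thinking"  # assumed repo name

processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForImageTextToText.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# One scanned page in, Markdown out.
image = Image.open("scanned_page.png")
messages = [{"role": "user", "content": [{"type": "image"}]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)

with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=4096)

# Strip the prompt tokens and keep only the newly generated Markdown.
markdown = processor.decode(
    generated[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(markdown)
```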
🔥🔥 Things You Didn’t Know You Can Do With AI
A Nano Banana prompt is going viral for showing how detailed, interactive AR-style visuals can be generated from a single prompt.
1. Use Gemini Nano Banana Pro as your image model.
2. Write a first-person perspective prompt set in a real environment (like a supermarket aisle).
3. Specify realistic human interaction (hands holding an object close to the camera).
4. Layer in holographic AR UI elements like nutrition data, freshness, and recipes.
5. Add visual cues like gaze-responsive UI, glass-like panels, and realistic lighting.
Here’s the full prompt:
First-person perspective inside a brightly lit supermarket aisle. Realistic human hands are holding a bottle of Fanta soda close to the camera. The vivid orange drink in its iconic branded bottle is surrounded by a multi-layered holographic augmented reality interface displaying nutritional data, including calorie count, sugar content, caffeine level, freshness indicator, expiration date, and recommended refreshing recipes and cocktails based on Fanta. The UI elements smoothly shift and reorganize based on the viewer’s gaze direction, as if dynamically responding to user focus. In the left peripheral vision, a vertical semi-transparent shopping list is visible with checked-off items, where Fanta is highlighted as the currently active selection. Hyper-realistic mixed reality, clean futuristic AR design, glass-like UI panels, soft ambient glow, realistic lighting and shadows, natural depth of field, immersive first-person interface, showcasing next-generation retail technology.
The result feels less like an image and more like a snapshot from a future retail interface.
Did you learn something new?
💬 Quick poll: What’s one AI tool or workflow you use every week that most people would find super helpful?
Until next time,
Kushank @DigitalSamaritan

