Jetson Orin Nano DeepSeek Testing: Performance, Python Code, Image Analysis & More!

- Authors
- Published on
In today's thrilling episode, the All About AI team embarks on a heart-pounding mission to push the limits of NVIDIA's Jetson Orin Nano by running the powerful DeepSeek R1. With a twinkle in their eyes, they dive into loading various DeepSeek R1 models using Ollama, showcasing the impressive performance of this pint-sized powerhouse. Through a series of exhilarating tests, they uncover the true capabilities of this device, leaving them utterly impressed by its speed and efficiency. The screen lights up with the results, revealing token speeds that will make your head spin.
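For readers who want to try something similar, here is a minimal sketch of how an R1 distill can be queried from Python through the standard `ollama` client library, with tokens per second estimated from the statistics the daemon returns. The model tag, prompt, and timing math are assumptions for illustration; the video does not show this exact script.

```python
# A minimal sketch, assuming the Jetson is running the Ollama daemon and the
# `ollama` Python package is installed (pip install ollama).
import ollama

MODEL = "deepseek-r1:1.5b"  # the 1.5B distill featured in the video

response = ollama.chat(
    model=MODEL,
    messages=[{"role": "user", "content": "Explain what the Jetson Orin Nano is in one sentence."}],
)

# The generated reply.
print(response["message"]["content"])

# Ollama reports generation statistics alongside the reply; eval_duration is
# in nanoseconds, so tokens/second is eval_count divided by elapsed seconds.
tokens = response["eval_count"]
seconds = response["eval_duration"] / 1e9
print(f"~{tokens / seconds:.1f} tokens/s")
```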
Switching gears, the team cranks up the power settings to unleash the full potential of the 1.5B model, witnessing a dramatic increase in token speed that will leave you on the edge of your seat. As they delve into the world of Python code on the Jetson, importing from Ollama and testing prime number detection, the adrenaline reaches a fever pitch. But they don't stop there: combining the Moondream image model with DeepSeek R1 1.5B, they embark on a mind-bending journey of image analysis that will make your jaw drop, sketched out below.
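The image-analysis segment pairs a small vision model with a text-only reasoning model. Below is a minimal two-stage sketch, assuming both `moondream` and `deepseek-r1:1.5b` have been pulled in Ollama; the image path and prompts are placeholders rather than taken from the video. The prime-number test follows the same pattern, a single `ollama.chat` call with a coding prompt instead of an image.

```python
# A minimal two-stage image-analysis sketch using the `ollama` Python client.
# Model tags, file name, and prompts are illustrative assumptions.
import ollama

IMAGE_PATH = "test_image.jpg"  # hypothetical local image on the Jetson

# Stage 1: Moondream, a small vision model, describes the image.
vision = ollama.chat(
    model="moondream",
    messages=[{
        "role": "user",
        "content": "Describe this image in detail.",
        "images": [IMAGE_PATH],
    }],
)
description = vision["message"]["content"]

# Stage 2: DeepSeek R1 (text-only) reasons over that description.
analysis = ollama.chat(
    model="deepseek-r1:1.5b",
    messages=[{
        "role": "user",
        "content": f"Based on this image description, what is likely happening?\n\n{description}",
    }],
)
print(analysis["message"]["content"])
```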
With a devil-may-care attitude, the team fearlessly pushes the boundaries by running the DeepSeek model in a browser on the Jetson, proving that this device is not just a toy but a powerful tool for AI exploration. The browser hums to life, showcasing a seamless ChatGPT-style chat interface and leaving viewers in awe of the endless possibilities. And as the episode draws to a close, the team hints at an exciting giveaway for channel members, inviting viewers to join in on the high-octane action. So buckle up, hold on tight, and get ready to experience the thrill of AI exploration like never before!
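Under the hood, a browser chat front end on the Jetson would be talking to Ollama's local HTTP API. The video does not name the exact UI it uses, but a minimal sketch of the kind of request such a front end sends looks like this, assuming Ollama's default endpoint on port 11434.

```python
# A minimal sketch of the HTTP side of a local chat UI, assuming the Ollama
# daemon is listening on its default port on the Jetson.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "deepseek-r1:1.5b",
        "messages": [{"role": "user", "content": "Is 97 a prime number?"}],
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```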

Watch DeepSeek R1 Running On 15W | NVIDIA Jetson Orin Nano SUPER on YouTube
Viewer Reactions for DeepSeek R1 Running On 15W | NVIDIA Jetson Orin Nano SUPER
Affordable hardware running up to a 600 billion parameter model
Sponsored video disclaimer suggestion
Not impressed by performance due to memory limit
Comparison of performance between Jetson and other setups
Concerns about the lack of speed for most LLM applications
Curiosity about different models' speeds on 25W
Difference in performance between micro SD card and NVMe SSD
Nvidia Jetson Orin Nano as default in schools
DeepSeek 7B model compared to a human's cat
Comparison of running models on different setups
Related Articles

Revolutionizing YouTube: Creating Videos Using AI and Claude Code
The All About AI team showcases their innovative approach to creating a YouTube channel solely through the terminal using Claude Code. They craft captivating videos by scripting voiceovers, generating scenes, and compiling videos with cutting-edge AI technologies.

Mastering AI Context Gathering: Techniques for Enhanced Workflow
Learn essential context gathering techniques for AI projects from All About AI. Explore copy-paste, web search integration, MCP server tools, and more to enhance workflow efficiency and project success.

Exploring MCP Servers: Image Manipulation and Voice Transformation
Discover the potential of MCP servers as microservices with the All About AI team. From image manipulation to voice transformation, explore the innovative applications and future possibilities of this cutting-edge technology.

Anthropic Unveils Claude 4: Innovative AI Models and Tools
Anthropic introduces Claude 4 models, showcasing strong performance in software engineering. The new code execution tool and MCP connector in the Anthropic API offer innovative AI capabilities for diverse projects and applications.