Jetson Orin Nano Deep Seek Testing: Performance, Python Code, Image Analysis & More!

- Authors
- Published on
In today's episode, the All About AI team sets out to push the limits of Nvidia's Jetson Orin Nano by running the powerful DeepSeek R1 on it. They load several DeepSeek R1 models using Ollama and put this pint-sized powerhouse through a series of tests, uncovering what the device can really do and coming away impressed by its speed and efficiency. The on-screen results reveal token speeds that are surprisingly brisk for such a small board.
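The model loading described above maps onto a couple of Ollama commands. A sketch, assuming Ollama is already installed on the Jetson; the model tag shown is one of the distilled DeepSeek R1 variants available in the Ollama library:

```shell
# Pull a distilled DeepSeek R1 model (the 1.5B variant is small enough for the Orin Nano).
ollama pull deepseek-r1:1.5b

# Run it interactively; --verbose prints timing stats, including tokens per second,
# which is how the token speeds shown in the video can be measured.
ollama run deepseek-r1:1.5b --verbose

# On a Jetson, switching to a higher power mode typically raises throughput, e.g.:
#   sudo nvpmodel -m 0 && sudo jetson_clocks
# (mode numbers vary by board; check `sudo nvpmodel -q` first)
```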
Switching gears, the team cranks up the Jetson's power mode to unleash the full potential of the 1.5B model, and the token speed jumps dramatically. They then turn to Python code on the Jetson, importing the Ollama library and testing prime number detection. But they don't stop there: combining the Moondream vision model with DeepSeek R1 1.5B, they embark on an image analysis experiment that shows what a multi-model pipeline looks like on this hardware.
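The video does not show the exact script, but the Python test described above can be sketched with a plain prime checker plus the Ollama Python client. The `is_prime` helper and the prompt are illustrative assumptions; `ollama.chat` is the client's real entry point, and it requires an Ollama server running locally:

```python
def is_prime(n: int) -> bool:
    """Trial division up to sqrt(n)."""
    if n < 2:
        return False
    if n < 4:          # 2 and 3 are prime
        return True
    if n % 2 == 0:
        return False
    i = 3
    while i * i <= n:  # only odd candidate divisors
        if n % i == 0:
            return False
        i += 2
    return True


def ask_deepseek(prompt: str, model: str = "deepseek-r1:1.5b") -> str:
    """Send a prompt to a local DeepSeek model via Ollama (pip install ollama)."""
    import ollama  # deferred so the prime checker works without a server
    resp = ollama.chat(model=model, messages=[{"role": "user", "content": prompt}])
    return resp["message"]["content"]


if __name__ == "__main__":
    print([n for n in range(2, 30) if is_prime(n)])
    # With an Ollama server running on the Jetson, the model can be quizzed too:
    # print(ask_deepseek("Is 7919 prime? Answer yes or no."))
```

The deferred import keeps the script usable for quick local checks even when the Ollama server is not up.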
Fearlessly pushing the boundaries, the team then runs the DeepSeek model from a browser on the Jetson, proving that this device is not just a toy but a capable tool for AI exploration. The browser hums to life with a ChatGPT-style chat interface served locally, showcasing just how polished a fully on-device setup can feel. As the episode draws to a close, the team hints at an exciting giveaway for channel members, inviting viewers to join in. So buckle up and get ready to experience AI exploration on 15 watts!

Image copyright YouTube
Watch DeepSeek R1 Running On 15W | NVIDIA Jetson Orin Nano SUPER on YouTube
Viewer Reactions for DeepSeek R1 Running On 15W | NVIDIA Jetson Orin Nano SUPER
- Affordable hardware running up to a 600-billion-parameter model
- Suggestion to add a sponsored-video disclaimer
- Not impressed by the performance due to the memory limit
- Comparisons of performance between the Jetson and other setups
- Concerns that the speed falls short for most LLM applications
- Curiosity about different models' speeds at 25W
- Performance difference between a microSD card and an NVMe SSD
- Nvidia Jetson Orin Nano as a default in schools
- DeepSeek 7B model compared to a human's cat
- Comparisons of running the models on different setups
Related Articles

Exploring Gemini 2.5 Flash: AI Model Testing and Performance Analysis
Gemini 2.5 Flash, a new AI model, impresses with its pricing and performance. The team tests its capabilities by building an MCP server using different thinking modes and token budgets, showcasing its potential to revolutionize AI technology.

Unlocking Innovation: OpenAI Codex CLI and o4-mini Model Exploration
Explore the exciting world of OpenAI's latest release, the Codex CLI, with the All About AI team. Follow their journey as they install and test the CLI with the new o4-mini model to build an MCP server, showcasing the power and potential of Codex in AI development.

Mastering Parallel Coding: Collaborative Efficiency Unleashed
Explore the exciting world of parallel coding with All About AI as two clients collaborate seamlessly using an MCP server. Witness the efficiency of real-time communication and autonomous message exchange in this cutting-edge demonstration.

GPT 4.1: Revolutionizing AI with Coding Improvements and Image Processing
OpenAI's latest release, GPT 4.1, challenges Claude 3.7 and Gemini 2.5 Pro. The model excels in coding instructions, image processing, and real-time applications. Despite minor connectivity issues, the team explores its speed and accuracy, hinting at its promising future in AI technology.