Why the Future of Edge AI Belongs to Microcontrollers, Not Jetsons
In the race to bring artificial intelligence to the edge, the industry has largely been looking in the wrong direction.
For years, the narrative has been dominated by power-hungry, GPU-based platforms like the NVIDIA Jetson family. These are undeniably powerful boards — I’ve used them myself in prototyping environments — but they don’t represent the future of truly scalable, field-deployable edge intelligence.
At Obsidian Reach, we’re working on the next generation of remote AI cameras—low-power, long-life, intelligent devices that can be mounted virtually anywhere: on trees, buildings, lamp posts, or tactical helmets. And for deployments like these, power efficiency isn’t just a nice-to-have. It’s the constraint that defines the entire architecture.
The future of Edge AI isn’t in boards that run AI like a data centre. It’s in microcontrollers that can think on a sip of energy.
Jetsons Are Great—for Labs and Laptops
Let’s be clear: the NVIDIA Jetson ecosystem is an impressive feat of engineering. Jetson Nano, Xavier, and Orin offer powerful GPU cores capable of running YOLOv5, object detection, pose estimation—you name it. But they also demand significant power and active thermal management, and most workflows built around them assume continuous connectivity to fully deliver on their promise.
That’s fine if your device is plugged into the wall. Or if it lives in a vehicle, robot, or drone with a dedicated power source.
But what about devices that need to run for months—or years—on a battery? What about devices that live in rural, hostile, or stealth environments? What about sensor nodes in forests, border checkpoints, or embedded in tactical gear?
Jetsons simply don’t scale there.
Microcontrollers: The Sleeping Giants of AI
We’re now entering a new era—one where microcontrollers are becoming the true workhorses of the AI edge.
Thanks to advancements in tools like TensorFlow Lite Micro, Edge Impulse, and TVM, we can now compress and quantise deep learning models into forms that fit comfortably on an MCU with as little as 64KB of RAM. These aren’t general-purpose AI systems—they’re specialised, single-task models optimised to run efficiently on minimal hardware.
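Quantisation is what makes those RAM budgets workable: a typical post-training scheme maps float32 weights onto int8 with a per-tensor scale and zero point, cutting weight storage roughly fourfold. A minimal, standalone sketch of the affine quantisation arithmetic (illustrative only—real toolchains like TensorFlow Lite handle this for you, per-tensor or per-channel):

```python
# Affine (asymmetric) int8 quantisation: q = round(x / scale) + zero_point.
# This is the arithmetic behind post-training quantisation, sketched by hand.

def quant_params(xmin: float, xmax: float, qmin: int = -128, qmax: int = 127):
    """Derive scale and zero point so [xmin, xmax] maps onto [qmin, qmax]."""
    xmin, xmax = min(xmin, 0.0), max(xmax, 0.0)  # range must include 0.0
    scale = (xmax - xmin) / (qmax - qmin)
    zero_point = round(qmin - xmin / scale)
    return scale, zero_point

def quantize(xs, scale, zero_point, qmin=-128, qmax=127):
    return [max(qmin, min(qmax, round(x / scale) + zero_point)) for x in xs]

def dequantize(qs, scale, zero_point):
    return [(q - zero_point) * scale for q in qs]

weights = [-0.42, 0.0, 0.17, 0.91]          # toy float32 weights
scale, zp = quant_params(min(weights), max(weights))
q = quantize(weights, scale, zp)             # four int8 values, 1 byte each
recovered = dequantize(q, scale, zp)
# Each recovered value sits within one quantisation step (scale) of the original.
```

The trade is precision for memory: every weight now costs one byte instead of four, and the reconstruction error is bounded by the scale of the chosen range.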
Think:
- Face detection on an nRF52 with no OS
- Person-of-interest classification on a Cortex-M7
- Audio keyword spotting on a low-power DSP core
This is the kind of thinking driving our design philosophy at Obsidian Reach.
Our AI cameras don’t need to run general-purpose models. They need to do one thing extremely well—and do it continuously for months in the field with minimal infrastructure.
The Cloud-to-Edge Pipeline: Train Once, Deploy Many
Here’s where the real magic happens.
In the past, deploying AI to embedded systems required hand-converting models, writing bespoke code, and wrestling with toolchains. Today, we can design and train complex models in the cloud using full-fat Python frameworks—PyTorch, TensorFlow, Keras—and then automatically convert them into lightweight, device-ready versions using well-maintained pipelines.
At Obsidian Reach, we’re building a cloud-to-edge workflow in which models are:
- Trained in the cloud on rich datasets
- Optimised for single-task execution (e.g. “detect car”, “detect open door”, “alert on face match”)
- Quantised and compiled into MCU-optimised binaries
- Uploaded to field devices over the air (OTA) or during provisioning
This approach gives us the power of cloud-scale training with the efficiency of edge-only inference. It also means we can update models remotely as conditions change, or as accuracy improves.
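The OTA step in particular has to be conservative: a corrupt model binary can take a remote device out of service for a whole deployment season. A minimal sketch of the integrity check a device might run before committing a new model to flash (the manifest fields and function names here are illustrative, not part of any shipping toolchain):

```python
import hashlib

def verify_model_update(blob: bytes, manifest: dict) -> bool:
    """Accept an OTA model blob only if its size and SHA-256 digest
    match the manifest shipped alongside it."""
    if len(blob) != manifest["size"]:
        return False
    return hashlib.sha256(blob).hexdigest() == manifest["sha256"]

# Illustrative round trip: the cloud side builds the manifest...
model_blob = b"\x00\x01int8-model-weights\x7f" * 64
manifest = {
    "model": "detect_open_door",   # hypothetical single-task model name
    "version": 3,
    "size": len(model_blob),
    "sha256": hashlib.sha256(model_blob).hexdigest(),
}

# ...and the device checks the blob before swapping it in.
ok = verify_model_update(model_blob, manifest)
# A single flipped byte must be rejected:
corrupted = verify_model_update(model_blob[:-1] + b"\x00", manifest)
```

In a real deployment the manifest itself would also be signed, and the device would keep the previous model as a fallback until the new one has run successfully.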
Why This Approach Wins
From my perspective, here’s why low-power, single-purpose Edge AI on microcontrollers is not just an alternative — it’s the future:
- Energy Efficiency: Devices can run for months or years on battery or solar.
- Security & Privacy: No data needs to be streamed to the cloud. Inference happens locally.
- Cost: MCUs are a fraction of the cost of GPU boards, making them viable at scale.
- Deployability: Small form factors and minimal heat allow deployment anywhere—from helmets to forests to lamp posts.
- Reliability: Fewer moving parts, no fans, no Linux dependency. Just purpose-built, rock-solid firmware.
It’s not about building a general AI device. It’s about building 10,000 smart devices, each doing one job flawlessly.
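The energy-efficiency claim is easy to sanity-check with a duty-cycle budget. Using round illustrative numbers (assumptions, not datasheet figures for any specific part): an MCU node that sleeps at 10 µA, wakes once a minute, and draws 40 mA for 200 ms of capture-plus-inference averages well under 1 mA, so a 3000 mAh cell lasts years; a Jetson-class board drawing even a generously low 5 W exhausts the same capacity in hours.

```python
# Back-of-the-envelope battery life for a duty-cycled MCU AI node.
# All current and power figures are illustrative assumptions, and
# voltage-conversion losses are ignored for simplicity.

def avg_current_ma(sleep_ua: float, active_ma: float,
                   active_ms_per_wake: float, wakes_per_hour: float) -> float:
    """Average current draw (mA) for a sleep/wake duty cycle."""
    active_s = wakes_per_hour * active_ms_per_wake / 1000.0  # seconds awake per hour
    sleep_s = 3600.0 - active_s
    mah_per_hour = (active_ma * active_s + (sleep_ua / 1000.0) * sleep_s) / 3600.0
    return mah_per_hour  # mAh consumed per hour equals average mA

mcu = avg_current_ma(sleep_ua=10, active_ma=40,
                     active_ms_per_wake=200, wakes_per_hour=60)
battery_mah = 3000
mcu_days = battery_mah / mcu / 24            # roughly two to three years

jetson_ma = 5.0 / 5.0 * 1000                 # ~5 W at 5 V is about 1000 mA
jetson_hours = battery_mah / jetson_ma       # a few hours on the same cell
```

The exact numbers matter less than the ratio: three to four orders of magnitude in average draw is the gap between “replace the battery every week” and “forget the device exists”.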
A New Edge AI Ecosystem Is Emerging
To support this new wave of embedded AI, we need to rethink the tooling ecosystem.
From what I’ve seen, we’re heading toward:
- Unified model training platforms that support easy export to multiple MCU targets
- Driver-agnostic hardware libraries, so camera, audio, and sensor inputs are standardised across vendors
- Declarative config formats (YAML, TOML) for defining model parameters, inputs, and hardware bindings
- Auto-generated SDKs tailored to specific chipsets and models, reducing developer effort to near zero
And most importantly, a move toward hardware-agnostic AI deployments, where what matters is not which chip you’re using, but what problem your device is solving.
Conclusion: Less Power, More Intelligence
In the world of Edge AI, less is more. The most effective systems are not the most powerful—they’re the most efficient, the most targeted, and the most deployable.
At Obsidian Reach, we’re building tools and devices that embrace this philosophy. Our remote AI cameras don’t just watch—they understand. Quietly. Continuously. Reliably.
Jetson may power the demo, but the microcontroller will power the future.