I love that OrangePi is making good hardware, but after my experience with the OrangePi 5 Max, I won’t be buying from them again. The device is largely useless due to a lack of software support. The same thing happened with the MangoPi MQ-Pro. I’ll just stick with RPi. I may not get as much hardware for the money, but the software support is fantastic.
I can run my N100 NUC at 4W wall-socket power draw at idle. With turbo boost off it stays in that range under normal load, topping out around 6W at full load, though it is also terribly slow that way. With turbo boost enabled, power draw can reach 8-10W at full load.
Not sure how this compares to the OrangePi in terms of performance per watt, but it is already pretty far into marginal-gains territory for me, given the cost of dealing with ARM, a custom housing, adapters to keep the wall-socket draw efficient, etc. An efficient pico PSU to power a Pi or Orange Pi isn’t cheap either.
Not the poster you're replying to, but I run an Acer laptop with an N305 CPU as a Plex server. Idle power draw with the lid closed is 4-5W and I keep the battery capped at 80% charge.
The N100/150/200/etc. can be clocked to use less power at idle (and capped for better thermals, especially in smaller or power-constrained devices).
A lot of the cheaper mini PCs seem to let the chip go wild, and don't implement sleep/low power states correctly, which is why the range is so wide. I've seen N100 boards idle at 6W, and others idle at 10-12W.
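The capping mentioned above can be sketched on Linux via the intel_pstate driver's sysfs knobs (`max_perf_pct` and `no_turbo` are the actual attribute names the driver exposes; writes need root, so treat this as a sketch rather than a turnkey tool):

```python
from pathlib import Path

PSTATE = Path("/sys/devices/system/cpu/intel_pstate")

def clamp_pct(pct: int) -> int:
    """intel_pstate takes percentages; keep the value in 1..100."""
    return max(1, min(100, int(pct)))

def cap_max_perf(pct: int) -> int:
    """Cap the maximum P-state as a percentage of peak performance.
    Writing to sysfs requires root. Returns the value written."""
    pct = clamp_pct(pct)
    (PSTATE / "max_perf_pct").write_text(f"{pct}\n")
    return pct

def disable_turbo() -> None:
    """no_turbo = 1 disables turbo boost entirely."""
    (PSTATE / "no_turbo").write_text("1\n")
```

e.g. `cap_max_perf(60)` trades peak clock for lower draw and better thermals, which is the same effect some vendors tune in firmware and the cheaper boxes don't.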
Boost enabled.
WiFi disabled.
No changes to P-states or clock settings in the BIOS.
Fedora.
Applied all suggestions from powertop.
I don’t recall changing anything else.
I have an even cheesier competitor, which randomly has a dragon on the lid (it would be a terrible choice for all but the wimpiest casual gaming... but it makes a good Home Assistant HAOS server!)
That's quite a lot for the very heatsink that still leads to the overheating problems I mentioned. A standard CPU cooler can't be mounted on this in any reasonable way; that's like parking a truck on a lawn chair.
I gave up on them and switched to a second-hand mini PC. These mini desktops are offloaded in bulk by governments and offices for cheap and have much better specs than a similarly priced SBC. And you are no longer limited to “raspberry pi” builds of distros.
Unless you strictly need the tiny form factor of an SBC, you are much better off going with x86.
I was planning to build a NAS from an OPi 5 to minimise power consumption, but ended up going for a Zen 3 Ryzen CPU and have zero regrets. The savings are minuscule and would not justify the costs.
> The device is largely useless due to a lack of software support.
I think everyone considering an SBC should be warned that none of these are going to be supported by upstream in the way a cheap Intel or AMD desktop will be.
Even the Raspberry Pi 5, one of the most well supported of the SBCs, is still getting trickles of mainline support.
The trend of buying SBCs for general purpose compute is declining, thankfully, as more people come to realize that these are not the best options for general purpose computing.
More like people try doing anything other than use the base OS, and realize the bottom-tier x86 mini-PCs are 3-4x faster for the same price, and can encode a basic video stream without bogging down.
If the RPI came with any recent mid-tier Snapdragon SOC, it might be interesting. Or if someone made a Linux distro that supports all devices on one of the Snapdragon X Elite laptops, that would be interesting.
Instead, it's more like the equivalent of a cheap desktop with integrated GPU from 20 years ago, on a single board, with decent linux support, and GPIO. So it's either a linux learning toy, or an integrated component within another product, and not much in between.
Since the introduction of the OG Raspberry Pi, 14 years ago, there's been an ongoing cognitive problem wherein people look at the price of a brand new, never-used SBC that can be purchased from a reliable retail company.
Then they also look at the price of a used corpo PC (that is bigger, and noisier) that some rando in Iowa is selling on eBay.
And then they boldly compare the prices of the two things as if these details just don't exist.
But the details do exist. The details show that the two things are not the same. They can never be the same.
One is a shiny fresh apple that is free of blemishes, and the other is a bruised old grapefruit that someone has already started eating. They're both fruit, but they're very different things.
Qualcomm has rebranded a Snapdragon with four Cortex-A78 cores (and four small Cortex-A55 cores), from the expensive smartphones of 2021, as the "Dragonwing" QCM6490, and they now sell it for embedded devices.
There are at least 3 or 4 SBCs with it, in RPI sizes and prices.
Cortex-A78 is much faster than the Cortex-A76 from RK3588 or the latest RPI (e.g. at least 50% faster at the same clock frequency), and its speed at the same clock frequency does not differ much from that of recent medium-size cores like Cortex-A720 or Cortex-A725.
Cortex-A78 is the stage when Arm stopped making significant micro-architectural changes in medium-sized cores. The later improvements were in the bigger Cortex-X cores. The main disadvantage of the older Cortex-A78 is that it does not implement the SVE instruction set of the Armv9-A ISA.
While mini-PCs with Intel/AMD CPUs are usually preferable, for an ARM SBC I would no longer buy any model that has older cores than Cortex-A78.
Besides the Qualcomm Dragonwing based SBCs, there are also Cortex-A78 based SBCs with Mediatek or NVIDIA CPUs, but those are more expensive.
> So it's either a linux learning toy, or an integrated component within another product, and not much in between.
Raspberry Pi are excellent at being general-purpose, full-Linux boxes that consume very low power (some can idle at <1W). Perfect for ambient computing, cron-jobs, MQTT-related hackery, VPN gateways, ad-blocking DNS servers, or anything else that isn't CPU-bound, but benefits from being always available[1].
1. In my case, this ironically includes orchestrating higher-wattage computers via Wake-on-Lan and powering them down when not needed
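Since the footnote mentions Wake-on-LAN orchestration: the wake-up side is small enough to sketch in a few lines of Python (the MAC address in the usage note is a placeholder, and the target NIC must have WoL enabled, e.g. with `ethtool -s eth0 wol g`):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by
    the target MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the LAN so the sleeping
    machine's NIC picks it up."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

Usage would be something like `wake("aa:bb:cc:dd:ee:ff")` from a cron job or MQTT handler on the always-on Pi.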
I've used them for mostly dedicated tasks, at least the RPi3 and older. I've used the RPi3 as CUPS servers at a couple of sites, for a few printers. Been running for many years now 24/7 with no issues. As I could buy those SBCs for the original low price and the installation was a total no-brainer, I would never consider using any kind of mini PC for that.
I have a couple of RPi4 with 8GB and 4GB RAM respectively, these I have been using as kind-of general computers (they're running off SSDs instead of SD cards). I've had no reason so far to replace them with anything Intel/AMD. On the other hand they can't replace my laptop computer - though I wish they could, as I use the laptop computer with an external display and external keyboard 100% of the time, so its form factor is just in the way. But there's way too little RAM on the SBCs. It's bad enough on the laptop computer, with its measly 16GB.
I built a nice little cyberdeck around an RPi 5 but it's turned out to be very disappointing. I was counting on classic X11's virtual display stuff to enable a 1080x480 screen to be usable with panning (virtual 720p or something, just a cool vertical pan). Problem is, the X11 support sucks, and so there's almost no 2D acceleration, so this simple thing that used to work great on a 486 with an ATI SVGA doesn't work very well at all on a machine a thousand times faster. Wayland has of course no support for a feature like this one, so I'm stuck with a screen too narrow to use, and performance for everything else that's pretty sub-par.
Aah, I had totally forgotten about that X11 feature, I did use it for something very many years ago.
I have only used the default setup (which is presumably Wayland) on the Pi, looks good but I don't actually use display features much.
People do all manner of wacky stuff with Pis that could be more easily done with traditional machines. Kubernetes clusters and emulation boxes are the more common use cases; the former can be done with VMs on a desktop and the latter is easily accomplished via a used SFF machine off of eBay. I've also heard multiple anecdotes of people building Pi clusters to run agentic development workflows in parallel.
I think in all cases it's the sheer novelty of doing something with a different ISA and form factor. Having built and racked my share of servers I see no reason to build a miniature datacenter in my home but, hey, to each their own.
I concur with this. The novelty of the Pi is getting a computer somewhere that you normally wouldn't due to the size and complexity. GPIO is a very nice addition, but it looks like conventional USB to GPIO is a thing so it's not really a huge driver to use a Pi.
Yeah, Raspberry Pi even sells a keyboard form factor, and there was a Raspi laptop made from a 3D-printable casing and basic peripherals (screen, keyboard with mouse nub). A cheap quasi-open-source laptop at the time.
It takes a few years, but the Broadcom chips in Pis eventually get mainline support for most peripherals, similar to modern Rockchip SoCs.
The major difference is Raspberry Pi maintains a parallel fork of Linux and keeps it up to date with LTS and new releases, even updating their Pi OS to later kernels faster than the upstream Debian releases.
Also, unlike a lot of other manufacturers who only provide builds of Linux for their own hardware for a couple of years, it seems that even the latest version of the official Raspberry Pi OS supports every Raspberry Pi model all the way back to the first one with the 32-bit version of the OS.
Likewise, the 64-bit version of the OS looks like it supports every Raspberry Pi model that has a 64-bit CPU.
Yeah I was very impressed at being able to download a raspi image last year for my original pi model B, most companies would have just told me to throw it in the bin and buy the new one (at 10x the price lol)
"none of these are going to be supported by upstream in the way a cheap Intel or AMD desktop will be"
Going big-name doesn't even help you here. It's the same story with Nvidia's Jetson platforms; they show up, then within 2-3 years they're abandonware, trapped on an ancient kernel and EOL Ubuntu distro.
You can't build a product on this kind of support timeline.
Yup, I'm working a lot with Jetsons, and having the Orin NX on 22.04 is quite limiting sometimes, even with the most basic things. I got a random USB Wi-Fi dongle for it, and nope! Not supported in kernel 5.15, now have fun figuring out what to do with it.
For what it’s worth, Jetson at least has documentation, front ported / maintained patches, and some effort to upstream. It’s possible with only moderate effort and no extensive non-OEM source modification to have an Orin NX running an OpenEmbedded based system using the OE4T recipes and a modern kernel, for example, something that isn’t really possible on most random label SBCs.
> The trend of buying SBCs for general purpose compute is declining, thankfully, as more people come to realize that these are not the best options for general purpose computing.
If we take a step back, I think this is something to be saddened by. I, too, find boards without proper mainline support to be e-waste, and I am glad that we perhaps aren't producing quite as much of that anymore. But imagine if a good chunk of these boards did indeed have great mainline support. These incredibly cheap devices would be a perfect guarantor of democratized, unstoppable general compute in the face of the forces that many of us fear are rising. Even if that's not a fear you share, they'd make the perfect tinkering environment for children and adults not otherwise exposed to such things.
I have. It’s great on the RPi. On OPi5max, it didn’t support the hardware.
Worse, if you flash it to UEFI you’ll lose compat with the one system that did support it (older versions of BredOS). For that, you grab an old release, and never update. If you’re running something simple that you know won’t benefit from any update at all, that’s great. An RK3588 is a decent piece of kit though, and it really deserves better.
Hardware video decoding support for h264 and av1 just landed in 7.0 so it hasn't been a great bleeding edge experience for desktop and Plex etc users. But IMO late support is still support.
Also not on this list are the GPU Vulkan drivers Collabora are currently working on. I don't think that's really Rockchip's fault, since they're ARM Mali-G610 GPUs, but yeah, those didn't get stable in Mesa until last year.
Yes, and no. I have an OrangePi 5 Ultra and I'm finally running a vanilla kernel on it.
Don't bother trying anything before kernel 6.18.x -- unless you are willing to stick with their 6.1.x kernel with a million+ line diff.
The u-boot environment that comes with the board is hacked up. E.g.: it supports an undocumented subset of extlinux.conf, just enough that whatever Debian writes by default breaks it. Luckily, the u-boot project does support the board and I was able to flash a newer u-boot to the boot media and then the onboard flash [1].
Now the HDMI port doesn't show anything, and I use a couple of serial pins when I need to do anything before it's on-net.
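For context, a typical extlinux.conf that u-boot's distro-boot mechanism is expected to parse looks roughly like this (the paths, device name, and console settings are illustrative, not taken from the actual board); a vendor u-boot that only handles part of this syntax breaks as soon as Debian regenerates the file:

```
# /boot/extlinux/extlinux.conf -- illustrative example
default debian
timeout 10

label debian
    menu label Debian GNU/Linux
    linux /boot/vmlinuz
    initrd /boot/initrd.img
    fdtdir /boot/dtbs/
    append root=/dev/mmcblk0p2 rootwait rw console=ttyS2,1500000
```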
--
I purchased a Rock 5T (also rk3588) and the story is similar ... but upstream support for the board is much worse. Doing a diff between device trees [2] (supplied via custom Debian image vs vanilla kernel) tells me a lot. eg: there are addresses that are different between the two.
Upstream u-boot doesn't have support for the board explicitly.
No display, serial console doesn't work after boot.
I just wanted this board for its dual 2.5Gb ethernet ports but the ports even seem buggy. It might be an issue with my ISP... they seem to think otherwise.
--
Not being able to run a vanilla kernel/u-boot is a deal-breaker for me. If I can't upgrade my kernel to deal with a vulnerability without the company existing/supporting my particular board, I'm not comfortable using it.
IMHO, these boards exist in a space somewhere between the old-embedded world (where just having a working image is enough) and the modern linux world (where one needs to be able to update/apply patches)
Oh they finally got that up and running? That's good, but extremely late. It released in 2021. That's half a decade. As long as running an upstream kernel means you have to use 5+ year old SoCs, running upstream Linux instead of a vendor kernel remains completely out of the question for most circumstances.
On a related note: I pulled my pinebook pro out of a drawer this week, and spent an hour or so trying to figure out why the factory os couldn’t pull updates.
I guess manjaro just abandoned arm entirely. The options are armbian (probably the pragmatic choice, but fsck systemd), or openbsd (no video acceleration because the drivers are gpl for some dumb reason).
This sort of thing is less likely to happen to rpi, but it’s also getting pretty frustrating at this point.
Maybe the LLM was wrong and manjaro completely broke the gpg chain (again), but it spent a long time following mirror links, checking timestamps and running internet searches, and I spent over an hour on manual debugging.
> device is largely useless due to a lack of software support.
Came looking for this. It's the pitfall of 99% of hardware projects. They get a great team of hardware engineers, they go through the maddening process of actually producing a thing (which is crazy complex) at scale, economically viable (hopefully), with logistics hurdles including worldwide taxes, tariffs, etc... only to end up with nobody but their own team able to build and run a Hello World example.
To be fair, even big players, e.g. NVIDIA, suck at this too. Sure, they have their GPUs and CUDA, but if you look at the "small" things like Jetson, everybody I've met has told me the same thing: great hardware, unusable, because the stack worked once when shipped and then wasn't maintained.
Welcome to the world of firmware. That’s why RaspberryPi won and pivoted to B2B compute module sales as they managed to leech broad community support for their chips and then turn around and sell it to industry who were tired of garbage BSPs.
The reality for actual products is even worse. Qualcomm and Broadcom (even before the PE acquisition) are some of the worst companies to work with imaginable. I’ve had situations where we wasted a month tracking down a bug only for our Qualcomm account manager to admit that the bug was in a peripheral and in their errata already but couldn’t share the whole thing with us, among many other horror stories. I’d rather crawl through a mile of broken glass than have to deal with that again, so I have an extreme aversion to using anything but RPi, as distasteful as that is sometimes.
What's Qualcomm and Broadcom moat? Is it "just" IP or could they be replaced by a slower more expensive equivalent, say FPGA based, relying on open building blocks?
The range of their offerings is immense and I think each product should be evaluated individually against the competition. But just as an anecdote from my company: to create a full-spectrum DOCSIS signal, our HW team used multiple huge FPGA chips, I think it was Altera 10 or something (the device is EOS by now), and that only for the DAC (kinda); there were a separate CPU, a separate 10G switch, a separate utility FPGA, separate memory, separate everything. And it had to be glued together with an insane mash of code on top of the FPGA blobs, which didn't always work as expected. All in all it was a ten-unit monster which drew something like 4000W in steady state and needed a dozen industrial coolers at max to cool it off.
And today that is replaced with a single, relatively tiny chip (those old FPGAs were huge) from Broadcom, which does literally everything, complies with the newest standard, uses tens of watts of power, and is passively cooled. It's not quite a fair comparison since the architecture changed in the meantime, but if someone built an exact replacement for that older big device using new chips, with the same specs, it would be half as big and use under 1000W or even less. And all the software is ready to use without reinventing half of it manually.
But yeah, Broadcom's support is slow and opaque, and they will stall any non-major customer for months on almost any request, because they are prioritizing different tasks internally. It's like a drug-dealer dependency, and there is only one dealer in your town :).
I don't get how your argument follows from your parent's comment.
To me it would be the opposite conclusion: stay away from ARM SBCs with proprietary firmware and just go Intel-x86 NUCs if you don't want surprises.
And yes, RPi was (is?) a proprietary-FW SBC, as the Broadcom VideoCore GPU driver was never open-sourced from the start and relied on community reverse-engineering efforts, which the Pi Foundation then leveraged to sell their products at a markup to commercial customers after the FOSS community did all the legwork for them for free. So long, and thanks for all the fish.
Meanwhile Intel iGPUs had full linux kernel drivers out of the box. That's why they're great Jellyfin transcoding servers.
I had to literally throw away a Gigabyte BRIX, because its firmware did not recognise any distro I threw at it from internal drives, only when connected externally over USB.
The experiments with various SSD modules, Linux distros, and UEFI boot partitions ended up killing the motherboard somehow, probably from me manipulating it all the time, whatever.
Raspberry Pis are the only NUC-like machines I can buy somewhere like Conrad Electronic and be assured they actually work, without me going through it all as if I had just bought Linux Unleashed in the summer of 1995.
IDK, why does this matter? What if there are no retail stores close to me? I haven't been in a retail electronics store in years; online ordering and easy returns make it so much more convenient, especially for cases like yours with the Gigabyte BRIX not working properly. So what were you trying to prove with this? I'm confused, as you keep own-goaling yourself.
The thing is, for such niche use-cases it's expected that there won't be major retailer availability, since the general consumer isn't knowledgeable enough about them for them to sell in high volumes. Expecting retail stores, wherever you may live, to stock their shelves with Linux-preinstalled NUCs just to cater to your limited demographic, who refuse to order online for some reason, is a very tall order and not really a good-faith argument for anything.
The market for people who are like "ah shit, I need to spontaneously go out to the store and pick up a NUC right fucking now, and it has to have Linux preinstalled, because I can't wait a couple of days till it arrives online or know how to install Linux myself", is really REALLY small.
It does, because I get to punch someone if it doesn't work, instead of looking hopeless at an online form.
On a more serious note, how do you want normies to get introduced to the Year of Desktop Linux, outside WebOS LG TVs, Android/Linux and ChromeOS, instead of getting Mac minis and Neos at said stores?
I guess it is buying SteamDecks to play Windows games. /s
Raspberry Pis are among the few devices that normies can buy with GNU/Linux pre-installed.
Now I'm certain I don't want to deal (even on the internet) with people who consider punching low-wage retail workers an acceptable resolution for a manufacturer's product defects. Especially since this is exactly what free returns on online orders are for, which makes it even loonier.
Edit, replying to your comment below: excuse me, but a form of expression for what? The spec sheet of that Gigabyte BRIX explicitly lists only Windows 11 as the supported OS, not Linux. You tried to install an unsupported OS, and you broke it in the process. What exactly do you expect the retail store workers to do to fix the issue you yourself caused by using the product in a way it wasn't advertised? You can contact the manufacturer for warranty or return it within the online return window, but the fuckup is still on your end and not the retail workers' issue.
How many physical stores sell the alternatives at all? IIRC there is one in Cambridge specifically selling Pi kit and related stuff, but that is about it.
With the caveat that I might be slightly out of touch (I have nothing beyond the Pi4/400 and the last x86 mini-box I bought was over a year ago)…
IMO the key benefit of a Pi over an x86/a64 box, assuming you aren't using the IO breakouts and such, is power efficiency (particularly at idle-ish). The benefits of the x86/a64 boxes is computing power and being all-in-one (my need was due to my Pi4-based router becoming the bottleneck when my home line was upgraded to ~Gbit, and I wanted something with 2+ built-in NICs rather than relying on USB so didn't even look into the Pi5). Both options beat other SBC based options due to software support, the x86/a64 machines because support is essentially baked in and the rPi by virtue of the Pi foundation and the wider community making great efforts to plug any holes. A Pi range used to win significantly on price (or at least price/performance) too, but that is not the case these days.
Ouch. I sympathize, having gone through similar hoops with Renesas. We buy a hardware product from them and try to develop on it but they won't share more than a few superficial datasheets with us. And I know they have way more manuals / datasheets because they'll sometimes drip the info to me when I ask specific questions, but they won't just give us them so we can do it ourselves.
This is a common business model sadly where the seller wants the buyer to buy an additional support contract for any actual firmware development.
Every time there's a new discussion of some ARM board, I compare the price/features/power use with the Geekom N100 SBC I picked up a while back.
As far as I can tell, the OrangePi 6 remains distinctly uncompetitive with SBCs based on low-end intel chips.
- Orange pi consumes much more power (despite being an arm CPU)
- A bit faster on some benchmarks, a bit slower on others
- Intel SBC is about 60% the price, and comes with case + storage
- Intel SBC runs mainline linux and everything has working drivers
I don't understand why many say that RPi software/firmware support is 'fantastic'. Maybe it used to be, in the beginning, compared to other chips and boards, but right now it's a bit above average: they ignore many things which are out of their control or which they can't debug and fix (as with the Wi-Fi chip firmware).
>Those tickets would not be "unreliable" but simply "broken" with a "Won't fix" status.
Check the Wi-Fi tickets; they have been sitting without any replies from the RPi team since 2025. It is broken in these configurations; I decided not to use that strong general term here because it's broken only in certain configurations and use cases.
The USB bug (from 2019) has not been fully fixed. They reduced it to a much lesser extent, but did not eliminate the issue.
>Because other vendors are way worse.
There's only a single difference: Chinese vendors don't fix issues either in the things they do control or in the things they don't. The thing they control is usually a "Distro Build" or Buildroot rootfs hierarchy, which I personally see little value in.
Bugs related to third-party hardware and firmware present on the board rarely get fixed by either side.
Don't get me wrong, I'm absolutely not happy with it. I bought Intel NUC, which has Intel Ethernet and Intel Wi-Fi, as my PC with the idea that Intel has end-user support and writes drivers, and NUCs should come with golden Linux support, right? Yet Intel developers still supposed that I had to fix the bug in Intel drivers myself: https://marc.info/?l=linux-pci&m=175368780217953&w=2
The main issue was that they forked U-Boot and did not release their modifications, making it hard to run anything other than their Armbian fork. They forked Armbian a long time ago and kind of hacked things together rather than adding support for their HW to Armbian. After a while I gave up running anything other than their releases. I had good experiences with the Orange Pi 3 and 5.
But it's really uncool that they don't release their U-Boot build! Lame!
Their Wi-Fi chip (Dragon-something brand) had a bug where the Wi-Fi beacon frames had incorrect element orderings, causing inconsistent results with some clients. Overall their Wi-Fi was pure garbage. But apart from the Wi-Fi it's robust stuff.
I had good experiences with the Orange Pi 5 (the only problem was soft reboot hanging), but only because Joshua Riek had created and maintained an Ubuntu distribution for the Rockchip. That project seems to be in limbo now, with no kernel updates for at least a year.
You have to go in with your eyes open with SBCs. If you have a specific task for it, and you can see that it either already supports it or all the required software is there and just needs to be gathered, then they can be great gadgets.
Often they can go their entire lifespan without some hardware feature being usable because of lack of software.
The blunt truth is that someone has to make that software, and you can't expect someone to make it for you. They may make it for you, and that's great, but really if you want a feature supported, it either has to already be supported, or you have to make the support.
It will be interesting to see if AI gets to the point that more people are capable of developing their own resources. It's a hard task and a lot of devices means the hackers are spread thin. It would be nice to see more people able to meaningfully contribute.
I think even custom is unacceptable. It’s too much of a pain being limited in your distro choice because you are limited to specific builds. On x86 you can run anything.
There also seems to be a plan to add uefi support to u-boot[1]. Many of these kinds of boards have u-boot implementations, so could then boot uefi kernel.
However many of these ARM chips have their own sub-architecture in the Linux source tree, I'm not sure that it's possible today to build a single image with them all built in and choose the subarchitecture at runtime. Theoretically it could be done, of course, but who has the incentive to do that work?
(I seem to remember Linus complaining about this situation to the Arm maintainer, maybe 10-20 years ago)
Using vendor kernels is standard in embedded development. Upstreaming takes a long time so even among well-supported boards you either have to wait many years for everything to get upstreamed or find a board where the upstreamed kernel supports enough peripherals that you're not missing anything you need.
I think it's a good thing that people are realizing that these SBCs are better used as development tools for people who understand embedded dev instead of as general purpose PCs. For years now you can find comments under every Raspberry Pi or other SBC thread informing everyone that a mini PC is a better idea for general purpose compute unless you really need something an SBC offers, like specific interfaces or low power.
I have always found it perplexing. Why is that required?
Is it the lack of drivers in upstream? Is it something to do with how ARM devices seemingly can't install Linux the same way x86 machines can (something something device tree)?
Yeah, lack of peripheral drivers upstream for all the little things on the board, plus (AIUI) ARM doesn't have the same self-describing hardware-discovery mechanisms that x86 computers have. Basically, standardisation. They're closer to MCUs in that way, is how I found it (though my knowledge is way out of date now; it's been years since I was doing embedded work).
I've just been doing some reading. The driver situation in Linux is a bit dire.
On the one hand there is no stable driver ABI because that would restrict the ability for Linux to optimize.
On the other hand vendors (like Orange Pi, Samsung, Qualcomm, etc etc) end up maintaining long running and often outdated custom forks of Linux in an effort to hide their driver sources.
x86 hardware has a standard way to boot and bring up the hardware, usually to at least a minimum level of functionality.
ARM devices aren't even really similar to one another. As a weird example, the Raspberry Pi boots from the GPU, which brings up the rest of the hardware.
It's not just about booting though. We solve this with hardware-specific devicetrees, which is less nice in a way than runtime discovery through PCI/ACPI/UEFI/etc, but it works. But we're not just talking about needing a hardware-specific devicetree; we're talking about needing hardware-specific vendor kernels. That's not due to the lack of boot standardization and runtime discovery.
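To make the devicetree point concrete, here's a minimal illustrative fragment (the compatible strings, address, and interrupt number are made up): nothing on the bus announces this UART, so the board's .dts has to declare everything the kernel would otherwise discover at runtime:

```
/dts-v1/;

/ {
    compatible = "vendor,example-board";

    /* The kernel cannot probe for this UART; the devicetree must
       declare its MMIO address, interrupt, and clock by hand. */
    serial@ff1a0000 {
        compatible = "vendor,example-uart";
        reg = <0xff1a0000 0x100>;     /* base address + size */
        interrupts = <33>;            /* not discoverable at runtime */
        clock-frequency = <24000000>;
        status = "okay";
    };
};
```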
Nothing, and there has been a push for more standardization including adopting UEFI in the ARM server space. It's just not popular in the embedded space. You'd have to ask Qualcomm or Rockchip about why.
You can hope but I don't think it'll happen any time soon.
The lack of standardized boot and runtime discovery isn't such a big issue; u-boot deals with the former and devicetrees deal with the latter, we could already have an ecosystem where you download a bog standard Ubuntu ARM image plus a bootloader and devicetree for your SBC and install them. It wouldn't be quite as elegant as in x86 but it wouldn't be that far off; you wouldn't have to use SBC-specific distros, you could get your packages and kernels straight from Canonical (or Debian or whatever).
The reason we don't have that today is that drivers for important hardware just isn't upstream. It remains locked away in Qualcomm's and Rockchip's kernel forks for years. Last I checked, you still couldn't get HDMI working for the popular RK3588 SoC for example with upstream Linux because the HDMI PHY driver was missing, even though the 3588 had been out for many years and the PHY driver had been available under the GPL for years in Rockchip's fork of Linux.
Even if we added UEFI and ACPI today, Canonical couldn't ship a kernel with support for all SBCs. They'd have to ship SBC-specific kernels to get the right drivers.
The "somehow" is Microsoft, who defines what the hardware architecture of an x86-64 desktop/laptop/server is and builds the compatibility test suite (Windows HLK) to verify conformance. Open source operating systems rely on Microsoft's standardization.
Microsoft's standardization got AMD and Intel to write upstream Linux GPU drivers? Microsoft got Intel to maintain upstream xHCI Linux drivers? Microsoft got people to maintain upstream Linux drivers for touchpads, display controllers, keyboards, etc?
I doubt this. Microsoft played a role in standardizing UEFI/ACPI/PCI which allows for a standardized boot process and runtime discovery, letting you have one system image which can discover everything it needs during and after boot. In the non-server ARM world, we need devicetree and u-boot boot scripts in lieu of those standards. But this does not explain why we need vendor kernels.
> You can't have a custom kernel if you can't rebuild the device tree.
What is this supposed to mean? There is no device tree to rebuild on x86 platforms, yet you can have a custom kernel on x86 platforms. You sometimes need to use kernel forks there too to work with really weird hardware without upstream drivers; there's nothing different about Linux's driver model on x86. It's just that in the x86 world, for the vast, vast majority of situations, pre-built distro kernels built from upstream kernel releases have all the necessary drivers.
It's a legacy of the IBM PC compatible standard, which had multiple vendors building computers and peripherals that work with each other. Microsoft tried their EEE approach with ACPI, which made suspend flaky in Linux in the early years.
This does not explain why the drivers for all the hardware are upstreamed almost immediately in the x86 world but remain locked away in vendor trees for years or forever in the ARM world. Vendor kernels don't exist due to the lack of standardized boot and runtime discovery.
What's the feasibility these days of using AI-assisted software maintenance for drivers? Does this somewhat bridge the unsupported gap by doing it yourself, or is this not really a valid approach?
I've found AI tools to be pretty awful for low level work. So much of it requires making small changes to poorly documented registers. AI is very good at confidently hallucinating what register value you should use, and often is wrong. There's often such a big develop -> test cycle in embedded, and AI really only solves a very small part of it.
> At some point SBCs that require a custom linux image will become unacceptable, right?
The flash images contain information used by the bios to configure and bring up the device. It's more than just a filesystem. Just because it's not the standard consoomer "bios menu" you're used to doesn't mean it's wrong. It's just different.
These boards are based off of solutions not generally made available to the public. As a result, they require a small amount of technical knowledge beyond what operating a consumer PC might require.
So, packaging a standard arm linux install into a "custom" image is perfectly fine, to be honest.
> If the image contains information required to bring up the device, why isn't that data shipped in firmware?
the firmware is usually an extremely minimal set of boot routines loaded on the SOC package itself. to save space and cost, their goal is to jump to an external program.
so, many reasons
- firmware is less modular, meaning you can't ship hardware variants without also shipping firmware updates (the boot blob contains the device tree). also raises cost (see next)
- requires flash, which adds to BOM. intended designs of these ultra low cost SOCs would simply ship a single emmc (which the SD card replaces)
- no guaranteed input device for interactive setup. they'd have to make ui variants, including for weird embedded devices (such as a transit kiosk). and who is that for? a technician who would just reimage the device anyways?
- firmware updates in the field add more complexity. these are often low service or automatic service devices
anyways if you're shipping a highly margin sensitive, mass market device (such as a set top box, which a lot of these chipsets were designed for), the product is not only the SOC but the board reference design. when you buy a pi-style product, you're usually missing out on a huge amount of normally-included ecosystem.
that means that you can get a SBC for cheap using mass produced merchant silicon, but the consumer experience is sub-par. after all, this wasn't designed for your use case :)
Yeah, I ended up using an old mac mini for my Home Assistant needs. It draws a whopping 7W from the wall at idle (and it's near always idle), but the price of a new RPi is the same as 13k hours of electric usage for this.
Using whatever compute you have sitting in a drawer usually makes the most sense (including an old phone).
It should be noted that the CIX P1 (this board's SoC) has ongoing efforts to be upstreamed. Last I checked, the GPU drivers were still not available (due to them not supporting ACPI? I may be wrong on this) and the power draw was weird, stuck at 10-15ish watts. It seems like this blog confirms nothing has changed on those 2 points.
With that being said, CIX and their main board partner, Radxa, have been open with the UEFI.
I am not an expert in low-level environments such as the kernel or the UEFI, but if these tidbits sound interesting I would encourage anyone who is to look further into the CIX P1. To my untrained eyes, CIX looks like a company that is working towards a desktop/laptop chip with real UEFI/ACPI support. I look forward to the day it is polished up a bit.
> lspci is a bit more revealing, especially because you get to see where the dual 5GbE setup and Wi-Fi controller are placed–each seems to get its own PCI bridge:
That's how PCIe works. A PCIe port - both upstream and downstream - is a "PCI bridge". The link is one bus. A switch chip's "interior" is another bus. The next links are each their own bus again, one per port. There's no switch here; bus 0 (/ 30 / 60) is "in" the CPU, and each port is its own bus.
The more interesting thing is the PCI domain, the first 4 digits:
This generally (caveat emptor) means the ports aren't handled in some common PCIe subsystem, rather each port is independently connected to the CPU crossbar. The ports may also not be able to access each other, or non-transparent mapping rules apply.
Doesn't have to, though; it might be due to some technicality, driver bug, misunderstanding, whatever else.
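To make the domain point concrete, here's a minimal sketch (the addresses are hypothetical, in the style of a multi-domain SoC where each root port sits in its own PCI domain) of splitting extended lspci addresses into their domain:bus:device.function fields:

```python
import re

def parse_bdf(addr: str) -> dict:
    """Parse an extended PCI address "DDDD:BB:dd.f" (domain:bus:device.function)."""
    m = re.fullmatch(r"([0-9a-fA-F]{4}):([0-9a-fA-F]{2}):([0-9a-fA-F]{2})\.([0-7])", addr)
    if not m:
        raise ValueError(f"not a PCI address: {addr}")
    domain, bus, dev, fn = m.groups()
    return {"domain": int(domain, 16), "bus": int(bus, 16),
            "device": int(dev, 16), "function": int(fn, 16)}

# Hypothetical addresses: each port shows up under a different 4-digit domain,
# suggesting independent root complexes rather than one switch fabric.
addrs = ["0000:00:00.0", "0000:01:00.0", "0001:30:00.0", "0002:60:00.0"]
domains = {parse_bdf(a)["domain"] for a in addrs}
print(sorted(domains))  # -> [0, 1, 2]
```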
Unfortunately most of my attempts to power both embedded PC and robot motors from the same power bank result in unexpected reboots (even on what is effectively an RC car). And yes, I'm a CS Msc, not an electrical engineer, but ...
Yeah, I've got a new mini PC that reboots at random. It kind of "drops out." Works better with Ubuntu than Manjaro. Failed device. I was thinking of setting up a forum of some sort to discuss various devices that people may be tinkering with. The domain is tinkeriDOTng - just need to do something with it - could end up looking like Discogs with all the variations on gadgets and devices out there - sort of like music. And then there's the whole "builds" dimension.
How much otherwise highly useful stuff are people sitting on that they can't get going (or going well) for one reason or another? Also, recycling I expect will end up important at some point, and knowing who has what (and where) could expedite this process.
I bought a NanoPi R6C in the past in the hope that it would be a nice mini PC to run all my containers with super low power usage, or a router. But the software was bad, really bad. I found https://github.com/Joshua-Riek/ubuntu-rockchip/ ; it was a godsend but still had some shortcomings. After 2 years it's a bit more stable, but I just keep it around as a backup route to access my homelab in case the main machines go down.
But I just don't get... everything, I don't get the org, I don't get the users on hn, I'm like skinner in the 'no the kids are wrong' meme.
It's a lambda. It's a cheap, plug in, ssh, forget. And it's bloody wonderful.
If you buy a 1 or 2 off ebay, ok maybe a 3.
After that? Get a damn computer.
Want more bandwidth on the rj45? Get a computer.
Want faster usb? Get a computer.
Want ssd? Get a computer
Want a retro computing device? Get a computer.
Want a computer experience?
Etc etc etc, i don't need to labour this.
Want something that will sit there, have ssh and run python scripts for years without a reboot? Spend 20 quid on ebay.
People demanded faster horses. And the raspi org, for some damn fool reason, tried to give them one.
There are people bemoaning the fact that Raspberry Pis aren't able to run LLMs. And will then, without irony, complain that the prices are too high. For the love of God, raspi org, stop listening to dickheads on the Internet. Stop paying youtubers to shill. Stop and focus.
This resonates. I still have a Pi 3B running pihole and it's been up for years. No updates needed, just works. The newer boards trying to compete with mini PCs feels like a different product category entirely.
Looks like the SoC (CIX P1) has Cortex-A720/A520 cores which are Armv9.2, nice.
I've still been on the hunt for a cheap Arm board with an Armv8.3+ or Armv9.0+ SoC for OSDev stuff, but it's hard to find them in hobbyist price range (this board included, $700-900 USD from what I see).
The NVIDIA Jetson Orin Nanos looked good but unfortunately SWD/JTAG is disabled unless you pay for the $2k model...
> and you end up diving far more into boot chains, vendor GPU blobs and inference runtimes than you ever intended.
Yep, I'll pass. I'm done dealing with that kind of crap that is spread like a nasty STD through the ARM world. I'm sticking with x64 unless/until ARM gets this crap together.
Excellent! There is an OrangePi Zero 3W. That means my radxa zero tv-computer now has some competition. It is sad that Raspberry Pi abandoned the small, zero computer. Will keep the OrangePi Zero 3W in mind next time I need to cobble together a new tv computer.
This seems to be an overkill for most of my workloads that require an SBC.
I would choose Jetson for anything computationally intensive, as Orange Pi 6 Plus's NPU is not even utilized due to lack of software support.
For other workloads, this one seems a bit too large in terms of form factor and power consumption, and the older RK3588 should still be sufficient.
Disappointing on the NPU. I have found it's a point where industry wide improvement is necessary. People talk tokens/sec, model sizes, what formats are supported... But I rarely see an objective accuracy comparison. I repeatedly see that AI models are resilient to errors and reduced precision which is what allows the 1 bit quantization and whatnot.
But at a certain point I guess it just breaks? And they need an objective "I gave these tokens, I got out those tokens". But I guess that would need an objective gold standard ground truth that's maybe hard to come by.
Just try to find some benchmark top_k, temp, etc. parameters for llama.cpp. There's no consistent framing of any of these things. Temp should be effectively 0 so it's at least deterministic in its random probabilities.
Right. There are countless parameters and seeds and whatnots to tweak. But theoretically if all the inputs are the same the outputs should be within Epsilon of a known good. I wouldn't even mandate temperature or any other parameter be a specific value, just that it's the same. That way you can make sure even the pseudorandom processes are the same, so long as nothing pulls from a hardware rng or something like that. Which seems reasonable for them to do so idk maybe an "insecure rng" mode
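A sketch of what that "same tokens in, same tokens out" check could look like: with identical prompts, seeds, and sampling parameters, two runs should emit identical token IDs, and any mismatch can be pinpointed. The token lists here are hypothetical stand-ins for two inference runs:

```python
def first_divergence(run_a, run_b):
    """Return the index where two token streams diverge, or None if identical."""
    for i, (a, b) in enumerate(zip(run_a, run_b)):
        if a != b:
            return i
    # A length mismatch after an identical prefix also counts as divergence.
    if len(run_a) != len(run_b):
        return min(len(run_a), len(run_b))
    return None

print(first_divergence([5, 12, 99, 3], [5, 12, 99, 3]))  # -> None (deterministic)
print(first_divergence([5, 12, 99, 3], [5, 12, 42, 3]))  # -> 2 (first mismatch)
```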
By default CUDA isn't deterministic because of thread scheduling.
The main difference comes from the rounding order in reductions.
It does make a small difference. Unless you have an unstable floating point algorithm, but if you have an unstable floating point algorithm on a GPU at low precision you were doomed from the start.
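The reduction-order point is easy to demonstrate: floating-point addition is not associative, so summing the same values in a different thread order gives a (tiny) different result. A minimal illustration:

```python
# Floating-point addition is not associative, so a GPU reduction that sums
# partial results in a different order can give a slightly different answer
# than a serial sum.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c   # one reduction order
right = a + (b + c)  # another reduction order
print(left == right)      # -> False
print(abs(left - right))  # ~1.1e-16: harmless unless the algorithm is unstable
```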
The even more confounding factor is that there are specific builds provided by every vendor of these Cix P1 systems: Radxa, Orange Pi, Minisforum, now MetaComputing... it is painful to try to sort out, even as someone who knows where to look.
I couldn't imagine recommending any of these boards to people who aren't already SBC tinkerers.
I was also onboard until he got to the NPU downsides. I don't care about use for an LLM, but I would like to see the ability to run smallish ONNX models generated from a classical ML workflow. Not only is a GPU overkill for the tasks I'm considering, but I'm also concerned that unattended GPUs out on the edge will be repurposed for something else (video games, crypto mining, or just straight up ganked)
There are a couple outfits making M.2 AI accelerators. Recently I noticed this one: DeepX DX-M1M 25 TOPS (INT8) M.2 module from Radxa[1]: https://radxa.com/products/aicore/dx-m1m
If you're in the business of selling unbundled edge accelerators, you're strongly incentivized to modularize your NPU software stack for arbitrary hosts, which increases the likelihood that it actually works, and for more than one particular kernel.
If I had an embedded AI use case, this is something I'd look at hard.
So, this is slightly off topic, but out of curiousity, what are NPUs good for right this very second? What software uses them? What would this NPU be able to run if it was in fact accessible?
This is an honest, neutral question, and it's specifically about what can concretely be done with them right now. Their theoretical use is clear to me. I'm explicitly asking only about their practical use, in the present time.
(One of the reasons I am asking is I am wondering if this is a classic case of the hardware running too far ahead of the actual needs and the result is hardware that badly mismatches the actual needs, e.g., an "NPU" that blazingly accelerates a 100 million parameter model because that was "large" when someone wrote the specs down, but is uselessly small in practice. Sometimes this sort of thing happens. However I'm still honestly interested just in what can be done with them right now.)
Unfortunately only available atm for extremely high prices. I'd like to pick some up to create a ceph cluster (with 1x 18tb hdd osd per node in an 8 node cluster with 4+2 erasure coding)
I love that OrangePi is making good hardware, but after my experience with the OrangePi 5 Max, I won’t be buying more hardware from them again. The device is largely useless due to a lack of software support. This also happened with the MangoPi MQ-Pro. I’ll just stick with RPi. I may not get as much hardware for the money, but the software support is fantastic.
Yeah, that's the problem with ARM devices. Better to just buy an N100.
The N100 is way larger than an OrangePi 5 Max.
Also about half as efficient, if that matters, and 1.5-2x higher idle power consumption (again, if that matters).
Sometimes easier to acquire, but usually the same price or more expensive.
I can run my N100 nuc at 4W wall socket power draw idle. If I keep turbo boost off, it also stays there under normal load up to 6W full power. Then it is also terribly slow. With turbo boost enabled power draw can go to 8-10W on full load.
Not sure how this compares to the OrangePI in terms of performance per watt but it is already pretty far into the area of marginal gains for me at the cost of having to deal with ARM, custom housing, adapters to ensure the wall socket draw to be efficient etc. Having an efficient pico psu power a pi or orange pi is also not cheap.
Which NUC do you have? A lot of the nameless brands on aliexpress draw 10 watts on idle.
Not the poster you're replying to, but I run an Acer laptop with an N305 CPU as a Plex server. Idle power draw with the lid closed is 4-5W and I keep the battery capped at 80% charge.
The N100/150/200/etc. can be clocked to use less power at idle (and capped for better thermals, especially in smaller or power-constrained devices).
A lot of the cheaper mini PCs seem to let the chip go wild, and don't implement sleep/low power states correctly, which is why the range is so wide. I've seen N100 boards idle at 6W, and others idle at 10-12W.
I have a minisforum.
Boost enabled. WiFi disabled. No changes to P clock states or something from bios. Fedora. Applied all suggestions from powertop. I don’t recall changing anything else.
On the other hand, RPi doesn't support suspend. So which wins depends on whether your application is always-on.
There are quite a few x86-64 machines in the 70mm x 70mm form factor[1], which is close?
1: https://www.ecs.com.tw/en/Product/Mini-PC/LIVA_Q2/
Lmao the hero background. They photoshopped the pc into the back pocket of that AI-generated woman. (or the entire thing is AI-generated)
I have an even cheesier competitor, which randomly has a dragon on the lid (it would be a terrible choice for all but the wimpiest casual gaming... but it makes a good Home Assistant HAOS server!)
welcome to online shopping..
I dunno, I hear it's easy to put in your pocket so the computer is everywhere.
This only has 4 GB LPDDR4 memory max, 1GbE, and seemingly no PCIe lanes at all. The Orange Pi has much better specs.
Well... https://radxa.com/products/x/x4/
It has major overheating issues though, the N100 was never meant to be put on such a tiny PCB.
They also sell a heatsink for mere $21 (on AliExpress), just in case you don't know how to fit a spare PC cooler onto it.
That's quite a lot for the very heatsink that still results in those overheating problems I mentioned. A standard CPU cooler will not be mountable on this in any reasonable way, that's like parking a truck on a lawn chair.
I have a Bosgame AG40 (low end Celeron N4020 - less powerful than the N100 - but runs fanless)[1].
It's 127 x 127 x 50.8 mm. I think most mini N100 PCs are around that size.
The OrangePi 5 Max board is 89x57mm (it says 1.6mm "thickness" on the spec sheet but I think that is a typo - the ethernet port is more than that)
Add a few mm for a case and it's roughly 2/3 as long and half the width of the AG40.
[1] https://manuals.plus/asin/B0DG8P4DGV
My Ace Magician N100 is 190x115mm
Big by comparison, but still pretty small
Zimaboard2 has n150 and is smaller
I thought N100 equivalent SBC computers like Radxa's, etc., were largely out of stock for quite some time now.
I gave up on them and switched to a second hand mini pc. These mini desktops are offloaded in bulk by governments and offices for cheap and have much better specs than the same priced SBC. And you are no longer limited to “raspberry pi” builds of distros.
Unless you strictly need the tiny form factor of an SBC you are so much better going with x86.
The mobility+power has been a thing for me. I can pick it up and take it outside with a USB battery pack and it just works.
The N100 is more expensive, does not come with onboard wifi, and requires active cooling.
Once you get a case, a power supply, and some usable disk space, the N100 isn't that much more expensive.
active cooling is the killer though. I'd prefer a board that is fanless any day.
The Orange Pi 6 Plus has a fan.
My N200 tablet (Starlite) has no fan that I know of. Not exactly the same, but the number implies it should be more powerful.
N100 works just fine with fully passive cooling
I was planning to build a NAS from an OPi 5 to minimise power consumption, but ended up going for a Zen 3 Ryzen CPU and having zero regrets. The savings are minuscule and would not justify the costs.
> The device is largely useless due to a lack of software support.
I think everyone considering an SBC should be warned that none of these are going to be supported by upstream in the way a cheap Intel or AMD desktop will be.
Even the Raspberry Pi 5, one of the most well supported of the SBCs, is still getting trickles of mainline support.
The trend of buying SBCs for general purpose compute is declining, thankfully, as more people come to realize that these are not the best options for general purpose computing.
> The trend of buying SBCs for general purpose compute is declining,
Were people actually doing that?
They probably define general purpose as anything homelab based that runs on a commodity OS.
More like people try doing anything other than use the base OS, and realize the bottom-tier x86 mini-PCs are 3-4x faster for the same price, and can encode a basic video stream without bogging down.
If the RPI came with any recent mid-tier Snapdragon SOC, it might be interesting. Or if someone made a Linux distro that supports all devices on one of the Snapdragon X Elite laptops, that would be interesting.
Instead, it's more like the equivalent of a cheap desktop with integrated GPU from 20 years ago, on a single board, with decent linux support, and GPIO. So it's either a linux learning toy, or an integrated component within another product, and not much in between.
Bottom tier computers were more than $25.
They still are. They always have been.
Since the introduction of the OG Raspberry Pi, 14 years ago, there's been an ongoing cognitive problem wherein people look at the price of a brand new, never used SBC that can be purchased from a reliable retail company.
Then they also look at the price of a used corpo PC (that is bigger, and noisier) that some rando in Iowa is selling on eBay.
And then they boldly compare the prices of the two things as if these details just don't exist.
But the details do exist. The details show that the two things are not the same. They can never be the same.
One is a shiny fresh apple that is free of blemishes, and the other is a bruised old grapefruit that someone has already started eating. They're both fruit, but they're very different things.
It's tiny and low power, I run CI on a Pi5 and do a few other things and experiments on them.
Qualcomm has rebranded a Snapdragon with quadruple Cortex-A78 cores (and 4 small Cortex-A55), from the expensive smartphones of 2021, as "Dragonwing" QCM6490 and they now sell it for embedded devices.
There are at least 3 or 4 SBCs with it, in RPI sizes and prices.
Cortex-A78 is much faster than the Cortex-A76 from RK3588 or the latest RPI (e.g. at least 50% faster at the same clock frequency), and its speed at the same clock frequency does not differ much from that of recent medium-size cores like Cortex-A720 or Cortex-A725.
Cortex-A78 is the stage when Arm stopped making significant micro-architectural changes in medium-sized cores. The later improvements were in the bigger Cortex-X cores. The main disadvantage of the older Cortex-A78 is that it does not implement the SVE instruction set of the Armv9-A ISA.
While mini-PCs with Intel/AMD CPUs are usually preferable, for an ARM SBC I would no longer buy any model that has older cores than Cortex-A78.
Besides the Qualcomm Dragonwing based SBCs, there are also Cortex-A78 based SBCs with Mediatek or NVIDIA CPUs, but those are more expensive.
> So it's either a linux learning toy, or an integrated component within another product, and not much in between.
Raspberry Pi are excellent at being general-purpose, full-Linux boxes that consume very low power (some can idle at <1W). Perfect for ambient computing, cron-jobs, MQTT-related hackery, VPN gateways, ad-blocking DNS servers, or anything else that isn't CPU-bound, but benefits from being always available[1].
1. In my case, this ironically includes orchestrating higher-wattage computers via Wake-on-Lan and powering them down when not needed
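For the Wake-on-LAN part, a minimal sketch (the MAC address is a placeholder): a WoL magic packet is just 6 bytes of 0xFF followed by the target's MAC repeated 16 times, sent as a UDP broadcast:

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """WoL magic packet: 6 bytes of 0xFF followed by the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))

# Usage (placeholder MAC): send_magic_packet("aa:bb:cc:dd:ee:ff")
```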
They are cheap and the hardware seems good enough. The hardware is, but getting software support is very DIY.
I've used them for mostly dedicated tasks, at least the RPi3 and older. I've used the RPi3 as CUPS servers at a couple of sites, for a few printers. Been running for many years now 24/7 with no issues. As I could buy those SBCs for the original low price and the installation was a total no-brainer, I would never consider using any kind of mini PC for that.
I have a couple of RPi4 with 8GB and 4GB RAM respectively, these I have been using as kind-of general computers (they're running off SSDs instead of SD cards). I've had no reason so far to replace them with anything Intel/AMD. On the other hand they can't replace my laptop computer - though I wish they could, as I use the laptop computer with an external display and external keyboard 100% of the time, so its form factor is just in the way. But there's way too little RAM on the SBCs. It's bad enough on the laptop computer, with its measly 16GB.
I built a nice little cyberdeck around an RPi 5 but it's turned out to be very disappointing. I was counting on classic X11's virtual display stuff to enable a 1080x480 screen to be usable with panning (virtual 720p or something, just a cool vertical pan). Problem is, the X11 support sucks, and so there's almost no 2D acceleration, so this simple thing that used to work great on a 486 with an ATI SVGA doesn't work very well at all on a machine a thousand times faster. Wayland has of course no support for a feature like this one, so I'm stuck with a screen too narrow to use, and performance for everything else that's pretty sub-par.
Aah, I had totally forgotten about that X11 feature, I did use it for something very many years ago. I have only used the default setup (which is presumably Wayland) on the Pi, looks good but I don't actually use display features much.
I haven't tried it myself but niri might do what you want using Wayland https://github.com/niri-wm/niri
People do all manner of wacky stuff with Pis that could be more easily done with traditional machines. Kubernetes clusters and emulation boxes are the more common use cases; the former can be done with VMs on a desktop and the latter is easily accomplished via a used SFF machine off of eBay. I've also heard multiple anecdotes of people building Pi clusters to run agentic development workflows in parallel.
I think in all cases it's the sheer novelty of doing something with a different ISA and form factor. Having built and racked my share of servers I see no reason to build a miniature datacenter in my home but, hey, to each their own.
I concur with this. The novelty of the Pi is getting a computer somewhere that you normally wouldn't due to the size and complexity. GPIO is a very nice addition, but it looks like conventional USB to GPIO is a thing so it's not really a huge driver to use a Pi.
Raspberry Pi was and is selling official desktop kits: https://www.raspberrypi.com/products/raspberry-pi-4-desktop-...
I wouldn't wish it upon an enemy, but it's a thing.
Yeah Raspi even sells a keyboard formfactor and there was a Raspi laptop made from 3D printable casing and basic peripherals (screen, keyboard with mouse nub) for it. A cheap quasi-open source laptop at the time.
I daily drove my Raspberry Pi 5 for all of 2024. It primarily compiled tons of C++ and served 1080p video via Jellyfin, and it did so flawlessly.
> Even the Raspberry Pi 5 [...] is still getting trickles of mainline support.
I thought raspberry pi could basically run a mainline kernel these days -- are there unsupported peripherals besides Broadcom's GPU?
It takes a few years, but the Broadcom chips in Pis eventually get mainline support for most peripherals, similar to modern Rockchip SoCs.
The major difference is Raspberry Pi maintains a parallel fork of Linux and keeps it up to date with LTS and new releases, even updating their Pi OS to later kernels faster than the upstream Debian releases.
Also, unlike a lot of other manufacturers who only provide builds of Linux for their own hardware for a couple of years, it seems that even the latest version of the official Raspberry Pi OS supports every Raspberry Pi model all the way back to the first one with the 32-bit version of the OS.
Likewise, the 64-bit version of the OS looks like it supports every Raspberry Pi model that has a 64-bit CPU.
https://www.raspberrypi.com/software/operating-systems/
I can confirm that even my first RPi from over a decade ago still runs fine with the newest DietPi.
Yeah I was very impressed at being able to download a raspi image last year for my original pi model B, most companies would have just told me to throw it in the bin and buy the new one (at 10x the price lol)
"none of these are going to be supported by upstream in the way a cheap Intel or AMD desktop will be"
Going big-name doesn't even help you here. It's the same story with Nvidia's Jetson platforms; they show up, then within 2-3 years they're abandonware, trapped on an ancient kernel and EOL Ubuntu distro.
You can't build a product on this kind of support timeline.
Yup, I'm working a lot with Jetsons, and having the Orin NX on 22.04 is quite limiting sometimes, even with the most basic things. I got a random USB Wi-Fi dongle for it, and nope! Not supported in kernel 5.15, now have fun figuring out what to do with it.
For what it’s worth, Jetson at least has documentation, front ported / maintained patches, and some effort to upstream. It’s possible with only moderate effort and no extensive non-OEM source modification to have an Orin NX running an OpenEmbedded based system using the OE4T recipes and a modern kernel, for example, something that isn’t really possible on most random label SBCs.
> The trend of buying SBCs for general purpose compute is declining, thankfully, as more people come to realize that these are not the best options for general purpose computing.
If we take a step back, I think this is something to be saddened by. I, too, find boards without proper mainline support to be e-waste, and I am glad that we perhaps aren't producing quite as much of that anymore. But imagine if a good chunk of these boards did indeed have great mainline support. These incredibly cheap devices would be a perfect guarantor of democratized, unstoppable general compute in the face of the forces that many of us fear are rising. Even if that's not a fear you share, they'd make the perfect tinkering environment for children and adults not otherwise exposed to such things.
So do N100s / N150s - often cheaper and better.
Sure. But a heterogenous environment is interesting. And I wouldn't put all my eggs in one product line.
Good. ESPs are better for low power IO. Cheap desktop HW / mini pcs are better for tinkering environments.
Have you taken a look at armbian? If so, what was your experience?
https://www.armbian.com/boards?vendor=xunlong
I have. It’s great on the RPi. On OPi5max, it didn’t support the hardware.
Worse, if you flash it to UEFI you’ll lose compat with the one system that did support it (older versions of BredOS). For that, you grab an old release, and never update. If you’re running something simple that you know won’t benefit from any update at all, that’s great. An RK3588 is a decent piece of kit though, and it really deserves better.
I thought RK3588 had pretty good mainline support, what's the issue with this board?
Video, networking, etc. To get working 3588 you’d have to go with a passionate group like MNT, and then you’re paying way more.
Hardware video decoding support for h264 and av1 just landed in 7.0 so it hasn't been a great bleeding edge experience for desktop and Plex etc users. But IMO late support is still support.
I've got this bookmarked for tracking: https://gitlab.collabora.com/hardware-enablement/rockchip-35...
Not on this list is the current GPU Vulkan drivers Collabora are working on too. Don't think that's really blame Rockchip since they're ARM Mali-G610 GPUs, but yeah those didn't get stable in Mesa until last year.
The current version of Vulkan Panfrost notably doesn't run Zed. Not just some games; a text editor does not get some surface extensions.
Yes, and no. I have an OrangePi 5 Ultra and I'm finally running a vanilla kernel on it.
Don't bother trying anything before kernel 6.18.x -- unless you are willing to stick with their 6.1.x kernel with a million+ line diff.
The u-boot environment that comes with the board is hacked up. E.g.: it supports an undocumented subset of extlinux.conf ... just enough that whatever Debian writes by default breaks it. Luckily, the u-boot project does support the board and I was able to flash a newer u-boot to the boot media and then to the onboard flash [1].
Now the hdmi port doesn't show anything and I use a couple of serial pins when I need to do anything before it's on-net.
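For reference, the extlinux.conf that u-boot's distro-boot convention expects is tiny; something like the following (the kernel paths, dtb name, and console settings here are illustrative, not taken from the board):

```
LABEL debian
  KERNEL /boot/vmlinuz
  INITRD /boot/initrd.img
  FDT /boot/dtbs/rockchip/rk3588-orangepi-5-ultra.dtb
  APPEND root=/dev/mmcblk1p2 rootwait console=ttyS2,1500000
```

Debian's bootloader integration writes additional directives (menu entries, fdtdir, and so on), which is presumably where a vendor u-boot that only parses a subset falls over.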
--
I purchased a Rock 5T (also rk3588) and the story is similar ... but upstream support for the board is much worse. Doing a diff between device trees [2] (supplied via custom Debian image vs vanilla kernel) tells me a lot. eg: there are addresses that are different between the two.
Upstream u-boot doesn't have support for the board explicitly.
No display, serial console doesn't work after boot.
I just wanted this board for its dual 2.5Gb ethernet ports but the ports even seem buggy. It might be an issue with my ISP... they seem to think otherwise.
--
Not being able to run a vanilla kernel/u-boot is a deal-breaker for me. If I can't upgrade my kernel to deal with a vulnerability without the company existing/supporting my particular board, I'm not comfortable using it.
IMHO, these boards exist in a space somewhere between the old-embedded world (where just having a working image is enough) and the modern linux world (where one needs to be able to update/apply patches)
[1] https://www.reddit.com/r/OrangePI/comments/1l6hnqk/comment/n...
[2] https://gist.github.com/imoverclocked/1354ef79bd24318b885527...
This is not an RK3588, to begin with.
The Orange Pi 5 Max the OP is complaining about uses an RK3588.
>The device is largely useless due to a lack of software support.
It's pretty hacky for sure but wouldn't classify it as useless. e.g. I managed to get some LLMs to run on the NPU of an Orange pi 5 a while back
I see there is now even a NPU compatible llama.cpp fork though haven't tried it
What specifically is lacking?
HDMI output on the vanilla kernel started to work this year. HDMI in still doesn't. Zed does not start because of some missing Vulkan extensions.
Oh they finally got that up and running? That's good, but extremely late. It released in 2021. That's half a decade. As long as running an upstream kernel means you have to use 5+ year old SoCs, running upstream Linux instead of a vendor kernel remains completely out of the question for most circumstances.
On a related note: I pulled my pinebook pro out of a drawer this week, and spent an hour or so trying to figure out why the factory os couldn’t pull updates.
I guess manjaro just abandoned arm entirely. The options are armbian (probably the pragmatic choice, but fsck systemd), or openbsd (no video acceleration because the drivers are gpl for some dumb reason).
This sort of thing is less likely to happen to rpi, but it’s also getting pretty frustrating at this point.
> I guess manjaro just abandoned arm entirely
Er?
https://manjaro.org/products/download/arm explicitly lists the pinebook pro?
None of the arm mirrors have recent updates.
Maybe the LLM was wrong and manjaro completely broke the gpg chain (again), but it spent a long time following mirror links, checking timestamps and running internet searches, and I spent over an hour on manual debugging.
Arch is pretty good on both arm and risc. Run my pis on it for years, including the orange one with rk3588
> device is largely useless due to a lack of software support.
Came looking for this. It's the pitfall of 99% of hardware projects. They get a great team of hardware engineers, they go through the madness of actually producing a thing (which is crazy complex) at scale, economically viable (hopefully), with logistics hurdles including tax worldwide, tariffs, etc... only to have only the people on their own team be able to build and run a Hello World example.
To be fair, even big players, e.g. NVIDIA, suck at that too. Sure, they have their GPUs and CUDA, but if you look at the "small" things like Jetson, everybody I met told me the same thing: great hardware, unusable because the stack worked once when shipped and then wasn't maintained.
Welcome to the world of firmware. That’s why RaspberryPi won and pivoted to B2B compute module sales as they managed to leech broad community support for their chips and then turn around and sell it to industry who were tired of garbage BSPs.
The reality for actual products is even worse. Qualcomm and Broadcom (even before the PE acquisition) are some of the worst companies to work with imaginable. I’ve had situations where we wasted a month tracking down a bug only for our Qualcomm account manager to admit that the bug was in a peripheral and in their errata already but couldn’t share the whole thing with us, among many other horror stories. I’d rather crawl through a mile of broken glass than have to deal with that again, so I have an extreme aversion to using anything but RPi, as distasteful as that is sometimes.
What's Qualcomm and Broadcom moat? Is it "just" IP or could they be replaced by a slower more expensive equivalent, say FPGA based, relying on open building blocks?
The range of their offerings is immense and I think each product should be evaluated individually against the competition. But just as an anecdote from my company: to create a full-spectrum DOCSIS signal our HW team used multiple huge FPGA chips, I think it was Altera 10 or something (the device is EOS by now), and that only for the DAC (kinda); there were a separate CPU, a separate 10G switch, a separate utility FPGA, separate memory, separate everything. And it had to be glued together with an insane mash of code on top of the FPGA blobs, which didn't always work as expected. All in all it was a ten-unit monster which used something like 4000W in steady state and a dozen industrial coolers at max to cool it off.
And today that is replaced with a single relatively tiny in area chip (those old FPGAs were huge) from Broadcom, which does literally everything and complies with newest standard and uses tens of watts of power, and it is passively cooled. It's not quite the correct comparison since arch changed in the meantime, but if someone would build an exact replacement for that older big device using new chips and have the same specs, it would be half as big and use under 1000W or even less. And all software is ready to use without reinventing half of it manually.
But yeah, Broadcom's support is slow and opaque, and they will stall any non-major customer for months on almost any request, because they are prioritizing different tasks internally. It's like a drug-dealer dependency and there is only one dealer in your town :) .
Which is why Raspberry PIs are more valuable to me than an x86 NUC, even if the prices are similar.
There are no ARM NUCs at such prices, and even if there were the GNU/Linux support would be horrible.
I don't get how your argument follows from your parent's comment.
To me it would be the opposite conclusion: stay away from ARM SBCs with proprietary firmware and just go Intel-x86 NUCs if you don't want surprises.
And yes, RPI was(is?) a proprietary-FW SBC as the Broadcom VideoCore GPU driver was never open sourced from the start and relied on community efforts for reverse engineering, which the rPI foundation then leveraged to sell their products at a markup to commercial customers after the FOSS community did all the legwork for them for free. Like so long and thanks for all the fish.
Meanwhile Intel iGPUs had full linux kernel drivers out of the box. That's why they're great Jellyfin transcoding servers.
If I don't want surprises!?!
I had to throw away, literally, a Gigabyte BRIX, because its firmware did not recognise any distro I threw at it from internal drives, only if connected externally over USB.
The experiments with various kinds of SSD modules, Linux distros, and UEFI boot partitions ended up killing the motherboard in some way, due to me manipulating it all the time, whatever.
Raspberry PIs are the only NUCs I can buy in something like Conrad Electronic and be assured it actually works, without me going through it as if I had just bought Linux Unleashed in 1995's summer.
Luckily, the X86 NUC ecosystem is not defined by your unfortunate experience with a Gigabyte BRIX. Exceptions don't define the norm.
Which physical stores sell X86 NUCs with OEM supported Linux distributions pre-installed?
IDK, why does this matter? What if there's no retail store close to me? I haven't been in a retail electronics store in years, when online ordering and easy returns make it so much more convenient, especially for cases like yours with the Gigabyte BRIX not working properly. So what were you trying to prove with this? I'm confused, as you keep own-goaling yourself.
The thing is, for such a niche use case it's expected that it won't have major retailer availability, since it's not something the general consumer is knowledgeable enough about for it to sell in high volumes. Stocking shelves with NUCs with Linux preinstalled, wherever you may live, just to cater to a limited demographic that refuses to order online for some reason, is a very tall order and not really a good-faith argument for anything.
The market for people who are like "ah shit, I need to spontaneously go out to the store and pick up a NUC right fucking now, and it has to have Linux preinstalled, because I can't wait a couple of days till it arrives online or know how to install Linux myself", is really REALLY small.
It does, because I get to punch someone if it doesn't work, instead of appealing hopelessly to an online form.
On a more serious note, how do you want normies to get introduced to the Year of Desktop Linux, outside WebOS LG TVs, Android/Linux and ChromeOS, instead of getting Mac minis and Neos at said stores?
I guess it is buying SteamDecks to play Windows games. /s
Raspberry PIs are among the few devices that normies can buy with GNU/Linux pre-installed.
Now I'm certain I don't want to deal (even on the internet) with people who consider punching low-wage workers in the retail sector an acceptable resolution for their issues with a manufacturer's product defects. Especially given that this is what free returns on online orders are for; it makes it even more looney.
LE to your reply from below here: Excuse me but a form of expression for what? The spec sheet of that Gigabyte Brix explicitly lists only Windows 11 as the supported OS, not Linux. You tried to install an unsupported OS, and you broke it in the process. What exactly do you expect the retail store workers to do to fix the issue you yourself caused via using the product in a way it wasn't advertised? You can contact the manufacturer for warranty or return it via the online return window, but the fuckup is still on your end and not the issue of retail workers.
It was a form of expression, and yeah, whatever dude.
How many physical stores sell the alternatives at all? IIRC there is one in Cambridge specifically selling Pi kit and related stuff, but that is about it.
I gave a German example.
So “how many” is two. That and the one in Cambridge?
Greater than the zero for x86 NUCs.
True. But a lot less significant looked at that way, hardly worth stating as a challenge…
Yeah, the Year of Desktop Linux for normies on x86 NUCs is right around the corner.
With the caveat that I might be slightly out of touch (I have nothing beyond the Pi4/400 and the last x86 mini-box I bought was over a year ago)…
IMO the key benefit of a Pi over an x86/a64 box, assuming you aren't using the IO breakouts and such, is power efficiency (particularly at idle-ish). The benefits of the x86/a64 boxes is computing power and being all-in-one (my need was due to my Pi4-based router becoming the bottleneck when my home line was upgraded to ~Gbit, and I wanted something with 2+ built-in NICs rather than relying on USB so didn't even look into the Pi5). Both options beat other SBC based options due to software support, the x86/a64 machines because support is essentially baked in and the rPi by virtue of the Pi foundation and the wider community making great efforts to plug any holes. A Pi range used to win significantly on price (or at least price/performance) too, but that is not the case these days.
Ouch. I sympathize, having gone through similar hoops with Renesas. We buy a hardware product from them and try to develop on it but they won't share more than a few superficial datasheets with us. And I know they have way more manuals / datasheets because they'll sometimes drip the info to me when I ask specific questions, but they won't just give us them so we can do it ourselves.
This is a common business model sadly where the seller wants the buyer to buy an additional support contract for any actual firmware development.
Every time there's a new discussion of some arm board, I compare the price / features / power use with the geekom n100 SBC I picked up awhile back.
As far as I can tell, the OrangePi 6 remains distinctly uncompetitive with SBCs based on low-end intel chips.
- Orange Pi consumes much more power (despite being an ARM CPU)
- A bit faster on some benchmarks, a bit slower on others
- Intel SBC is about 60% the price, and comes with case + storage
- Intel SBC runs mainline Linux and everything has working drivers
>software support is fantastic.
Unreliable USB: https://github.com/raspberrypi/linux/issues/3259
Unreliable Wi-Fi:
* https://github.com/raspberrypi/linux/issues/7092
* https://github.com/raspberrypi/linux/issues/7111
* https://github.com/raspberrypi/linux/issues/7272
I don't understand why many say that RPi software/firmware support is 'fantastic'. Maybe it used to be, in the beginning, compared to other chips and boards, but right now it's a bit above average: they ignore many things that are out of their control or that they can't debug and fix (as with the Wi-Fi chip firmware).
> I don't understand why many say that RPi software/firmware support is 'fantastic'.
Because other vendors are way worse. Those tickets would not be "unreliable" but simply "broken" with a "Won't fix" status.
>Those tickets would not be "unreliable" but simply "broken" with a "Won't fix" status.
Check the Wi-Fi tickets: they have been sitting without any replies from the RPi team since 2025. It is broken in those configurations; I just decided not to use that strong general term, since it's broken only in certain configurations and use cases.
The USB bug (from 2019) has not been fully fixed. They reduced it considerably, but did not eliminate the issue.
>Because other vendors are way worse.
There's only a single difference: Chinese vendors don't fix issues in either the things they do control or the things they don't. The thing they control is usually a "Distro Build" or Buildroot rootfs hierarchy, which I personally see little value in.
Bugs related to the third-party hardware and firmware present on the board rarely get fixed by either side.
Don't get me wrong, I'm absolutely not happy with it. I bought an Intel NUC, which has Intel Ethernet and Intel Wi-Fi, as my PC, with the idea that Intel has end-user support and writes drivers, and NUCs should come with golden Linux support, right? Yet Intel developers still expected me to fix the bug in Intel's drivers myself: https://marc.info/?l=linux-pci&m=175368780217953&w=2
Good to know that the OrangePi software support is exactly at the same level it was during the first boards.
There's a reason people just default to RaspberryPi even though better _hardware_ exists. RPi at least gets drivers and software support consistently.
The main issue was that they forked U-Boot and did not release their modifications, making it hard to run anything other than their Armbian fork. They forked Armbian a long time ago and kind of hacked things together rather than adding support for their HW to Armbian. After a while I gave up running anything other than their releases; I had good experiences with the Orange Pi 3 and 5. But it's really uncool that they don't release their U-Boot build! Lame! Their WiFi chip (Dragon something brand) had a bug where the WiFi beacon frames had incorrect element orderings, causing inconsistent results with some clients. Overall their WiFi was pure garbage. But apart from the WiFi it's robust stuff.
I had good experiences with the Orange Pi 5 (the only problem was soft reboot hanging), but only because Joshua Riek had created and maintained an Ubuntu distribution for the Rockchip. That project seems to be in limbo now, with no kernel updates in at least a year.
You have to go in with your eyes open wth SBCs. If you have a specific task for it and you can see that it either already supports it or all the required software is there and it just needs to be gathered, then they can be great gadgets.
Often they can go their entire lifespan without some hardware feature being usable because of lack of software.
The blunt truth is that someone has to make that software, and you can't expect someone to make it for you. They may make it for you, and that's great, but really if you want a feature supported, it either has to already be supported, or you have to make the support.
It will be interesting to see if AI gets to the point that more people are capable of developing their own resources. It's a hard task and a lot of devices means the hackers are spread thin. It would be nice to see more people able to meaningfully contribute.
At some point SBCs that require a custom linux image will become unacceptable, right?
Right?
“Custom”? No.
Proprietary and closed? One can hope.
I think even custom is unacceptable. It’s too much of a pain being limited in your distro choice because you are limited to specific builds. On x86 you can run anything.
There are some projects to port UEFI to boards like Orange Pi and Raspberry Pi. You can install a normal OS once you have flashed that.
https://github.com/tianocore/edk2-platforms/tree/master/Plat...
https://github.com/edk2-porting/edk2-rk3588
There also seems to be a plan to add UEFI support to u-boot[1]. Many of these kinds of boards have u-boot implementations, so they could then boot a UEFI kernel.
However many of these ARM chips have their own sub-architecture in the Linux source tree, I'm not sure that it's possible today to build a single image with them all built in and choose the subarchitecture at runtime. Theoretically it could be done, of course, but who has the incentive to do that work?
(I seem to remember Linus complaining about this situation to the Arm maintainer, maybe 10-20 years ago)
[1] https://docs.u-boot.org/en/v2021.04/uefi/uefi.html
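As a rough sketch of what that looks like in practice, u-boot's EFI loader can chain into a grub/shim-style payload from its own prompt; the device numbers and paths below are placeholders:

```
=> load mmc 0:1 ${kernel_addr_r} /EFI/BOOT/BOOTAA64.EFI
=> load mmc 0:1 ${fdt_addr_r} /dtb/board.dtb
=> bootefi ${kernel_addr_r} ${fdt_addr_r}
```

`bootefi` takes the image address and an optional devicetree address, so the EFI payload still gets handed a board-specific devicetree rather than discovering hardware via ACPI.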
The orange pi 6 plus supports UEFI from the get go.
Per TFA, the Orange Pi 6 Plus ships with UEFI, but the SoC requires a vendor specific kernel.
Using vendor kernels is standard in embedded development. Upstreaming takes a long time so even among well-supported boards you either have to wait many years for everything to get upstreamed or find a board where the upstreamed kernel supports enough peripherals that you're not missing anything you need.
I think it's a good thing that people are realizing that these SBCs are better used as development tools for people who understand embedded dev instead of as general purpose PCs. For years now you can find comments under every Raspberry Pi or other SBC thread informing everyone that a mini PC is a better idea for general purpose compute unless you really need something an SBC offers, like specific interfaces or low power.
I have always found it perplexing. Why is that required?
Is it the lack of drivers in upstream? Is it something to do with how ARM devices seemingly can't install Linux the same way x86 machines can (something something device tree)?
Yeah lack of peripheral drivers upstream for all the little things on the board, plus (AIUI) ARM doesn't have the same self-describing hardware discovery mechanisms that x86 computers have. Basically, standardisation. They're closer to MCUs in that way, is how I found it (though my knowledge is way out of date now, been years since I was doing embedded)
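To make the devicetree point concrete, here's a sketch of the kind of node such boards rely on: the kernel is statically told what exists and where, instead of discovering it at runtime. The address and interrupt numbers are made up for illustration, though `snps,dw-apb-uart` is a real compatible string:

```
/* a UART the kernel could never have discovered on its own */
serial@feb50000 {
    compatible = "snps,dw-apb-uart";
    reg = <0x0 0xfeb50000 0x0 0x100>;
    interrupts = <GIC_SPI 333 IRQ_TYPE_LEVEL_HIGH>;
    clock-frequency = <24000000>;
    status = "okay";
};
```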
I've just been doing some reading. The driver situation in Linux is a bit dire.
On the one hand there is no stable driver ABI because that would restrict the ability for Linux to optimize.
On the other hand vendors (like Orange Pi, Samsung, Qualcomm, etc etc) end up maintaining long running and often outdated custom forks of Linux in an effort to hide their driver sources.
Seems..... broken
Somehow, this isn't a problem in the desktop space, even though new hardware regularly gets introduced there too which require new drivers.
x86 hardware has a standard way to boot and bring up the hardware, usually to at least a minimum level of functionality.
ARM devices aren't even really similar to one another. As a weird example, the Raspberry Pi boots from the GPU, which brings up the rest of the hardware.
It's not just about booting though. We solve this with hardware-specific devicetrees, which is less nice in a way than runtime discovery through PCI/ACPI/UEFI/etc, but it works. But we're not just talking about needing a hardware-specific devicetree; we're talking about needing hardware-specific vendor kernels. That's not due to the lack of boot standardization and runtime discovery.
Please forgive this naive question from someone with zero knowledge in the area: What's stopping ARM/RISCV-based stuff from using ACPI/UEFI?
Nothing, and there has been a push for more standardization including adopting UEFI in the ARM server space. It's just not popular in the embedded space. You'd have to ask Qualcomm or Rockchip about why.
So we can hope for a future where cheap ARM/RISC-V SBCs are as pleasant to use as any bog standard x86?
You can hope but I don't think it'll happen any time soon.
The lack of standardized boot and runtime discovery isn't such a big issue; u-boot deals with the former and devicetrees deal with the latter, we could already have an ecosystem where you download a bog standard Ubuntu ARM image plus a bootloader and devicetree for your SBC and install them. It wouldn't be quite as elegant as in x86 but it wouldn't be that far off; you wouldn't have to use SBC-specific distros, you could get your packages and kernels straight from Canonical (or Debian or whatever).
The reason we don't have that today is that drivers for important hardware just isn't upstream. It remains locked away in Qualcomm's and Rockchip's kernel forks for years. Last I checked, you still couldn't get HDMI working for the popular RK3588 SoC for example with upstream Linux because the HDMI PHY driver was missing, even though the 3588 had been out for many years and the PHY driver had been available under the GPL for years in Rockchip's fork of Linux.
Even if we added UEFI and ACPI today, Canonical couldn't ship a kernel with support for all SBCs. They'd have to ship SBC-specific kernels to get the right drivers.
Right. Thanks!
The "somehow" is Microsoft, who defines what the hardware architecture of what a x86-64 desktop/laptop/server is and builds the compatibility test suite (Windows HLK) to verify conformance. Open source operating systems rely on Microsoft's standardization.
Microsoft's standardization got AMD and Intel to write upstream Linux GPU drivers? Microsoft got Intel to maintain upstream xHCI Linux drivers? Microsoft got people to maintain upstream Linux drivers for touchpads, display controllers, keyboards, etc?
I doubt this. Microsoft played a role in standardizing UEFI/ACPI/PCI which allows for a standardized boot process and runtime discovery, letting you have one system image which can discover everything it needs during and after boot. In the non-server ARM world, we need devicetree and u-boot boot scripts in lieu of those standards. But this does not explain why we need vendor kernels.
I think they're related. You can't have a custom kernel if you can't rebuild the device tree. You can't rebuild blobs.
> You can't have a custom kernel if you can't rebuild the device tree.
What is this supposed to mean? There is no device tree to rebuild on x86 platforms yet you can have a custom kernel on x86 platforms. You sometimes need to use kernel forks there too to work with really weird hardware without upstream drivers, there's nothing different about Linux's driver model on x86. It's just that in the x86 world, for the vast, vast majority of situations, pre-built distro kernels built from upstream kernel releases has all the necessary drivers.
It's a legacy of the IBM PC compatible standard, which had multiple vendors building computers and peripherals that work with each other. Microsoft tried their EEE approach with ACPI, which made suspend flaky in Linux in the early years.
This does not explain why the drivers for all the hardware is upstreamed almost immediately in the x86 world but remains locked away in vendor trees for years or forever in the ARM world. Vendor kernels don't exist due to the lack of standardized boot and runtime discovery.
Or you can just upstream what you need yourself.
What's the feasibility these days of using AI-assisted software maintenance for drivers? Does this somewhat bridge the unsupported gap by doing it yourself, or is this not really a valid approach?
That's just the new normal. Everyone is doing AI assisted work, but that doesn't mean the work goes away.
Someone still has to put in meaningful effort to get the AI to do it and ship it.
I've found AI tools to be pretty awful for low level work. So much of it requires making small changes to poorly documented registers. AI is very good at confidently hallucinating what register value you should use, and often is wrong. There's often such a big develop -> test cycle in embedded, and AI really only solves a very small part of it.
> At some point SBCs that require a custom linux image will become unacceptable, right?
The flash images contain information used by the bios to configure and bring up the device. It's more than just a filesystem. Just because it's not the standard consoomer "bios menu" you're used to doesn't mean it's wrong. It's just different.
These boards are based off of solutions not generally made available to the public. As a result, they require a small amount of technical knowledge beyond what operating a consumer PC might require.
So, packaging a standard arm linux install into a "custom" image is perfectly fine, to be honest.
If the image contains information required to bring up the device, why isn't that data shipped in firmware?
In some cases the built-in firmware is very barebones, just enough to get U-boot to load up and do the rest of the job.
> If the image contains information required to bring up the device, why isn't that data shipped in firmware?
the firmware is usually an extremely minimal set of boot routines loaded on the SOC package itself. to save space and cost, their goal is to jump to an external program.
so, many reasons
- firmware is less modular, meaning you cant ship hardware variants without also shipping firmware updates (the boot blob contains the device tree). also raises cost (see next)
- requires flash, which adds to BOM. intended designs of these ultra low cost SOCs would simply ship a single emmc (which the SD card replaces)
- no guaranteed input device for interactive setup. they'd have to make ui variants, including for weird embedded devices (such as a transit kiosk). and who is that for? a technician who would just reimage the device anyways?
- firmware updates in the field add more complexity. these are often low service or automatic service devices
anyways if you're shipping a highly margin sensitive, mass market device (such as a set top box, which a lot of these chipsets were designed for), the product is not only the SOC but the board reference design. when you buy a pi-style product, you're usually missing out on a huge amount of normally-included ecosystem.
that means that you can get a SBC for cheap using mass produced merchant silicon, but the consumer experience is sub-par. after all, this wasn't designed for your use case :)
Something in me wants to buy every SBC and/or microcontroller that is advertised to me.
Yeah I have this problem (?) too. They are just so neat. I also really like tiny laptops and recreations of classic computers.
Clockwork Pi if you haven't seen it. Beautiful little constructions.
These are awesome.
Even though all can be replaced by a decent mini pc with beefy memory, with lots of VMs.
Yeah, I ended up using an old mac mini for my Home Assistant needs. It draws a whopping 7W from the wall at idle (and it's near always idle), but the price of a new RPi is the same as 13k hours of electric usage for this.
Using whatever compute you have sitting in a drawer usually makes the most sense (including an old phone).
SBC is good for usbip endpoints for those vms. I use them to push devices around my home network.
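For anyone curious, the usbip flow is short; a sketch using the tools shipped with linux-tools (the hostname and bus IDs are placeholders for your own setup):

```sh
# on the SBC that physically has the device
modprobe usbip_host
usbipd -D                 # start the export daemon
usbip list -l             # find the bus id, e.g. 1-1.2
usbip bind -b 1-1.2

# on the VM that should see it
modprobe vhci-hcd
usbip attach -r sbc.lan -b 1-1.2   # device now appears as local USB
```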
It should be noted that the CIX P1 (this board's SoC) has ongoing efforts to be upstreamed. Last I checked, the GPU drivers were still not available (due to them not supporting ACPI? I may be wrong on this) and power draw was weird, stuck at 10-15ish watts. It seems like this blog confirms nothing has changed on those two points.
With that being said, CIX and their main board partner, Radxa, have been open with the UEFI.
I am not an expert in low-level environments such as the kernel or the UEFI, but if these tidbits sound interesting I would encourage anyone who is to look further into the CIX P1. To my untrained eyes, CIX looks like a company that is working towards a desktop/laptop chip with real UEFI/ACPI support. I look forward to the day it is polished up a bit.
> lspci is a bit more revealing, especially because you get to see where the dual 5GbE setup and Wi-Fi controller are placed–each seems to get its own PCI bridge:
That's how PCIe works. A PCIe port - both upstream and downstream - is a "PCI bridge". The link is one bus. A switch chip's "interior" is another bus. The next links are each their own bus again, one per port. There's no switch here; bus 0 (/ 30 / 60) is "in" the CPU, and each port is its own bus.
The more interesting thing is the PCI domain, the first 4 digits:
This generally (caveat emptor) means the ports aren't handled in some common PCIe subsystem, rather each port is independently connected to the CPU crossbar. The ports may also not be able to access each other, or non-transparent mapping rules apply.
Doesn't have to, though; it might be due to some technicality, driver bug, misunderstanding, whatever else.
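To spell out which digits are which, here's a tiny parser for lspci's extended address format; the sample addresses are made up, and the point is that the domain (first field) differs while the bus could be the same:

```python
def parse_bdf(addr: str):
    """Split an lspci address 'domain:bus:device.function' into integers."""
    domain, bus, devfn = addr.split(":")
    device, function = devfn.split(".")
    return int(domain, 16), int(bus, 16), int(device, 16), int(function, 16)

# Same bus number, different domains: two independently attached root
# complexes, not two ports behind one shared switch.
print(parse_bdf("0000:01:00.0"))  # (0, 1, 0, 0)
print(parse_bdf("0030:01:00.0"))  # (48, 1, 0, 0)
```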
One or two USB-C 3.2 Gen2 ports are all that's required - can then plug in a hub or dock. eg: https://us.ugreen.com/collections/usb-hub?sort_by=price-desc...
Can also plug in a power bank. https://us.ugreen.com/collections/power-bank?sort_by=price-d...
The advantage is that if the machine breaks or is upgraded, the dock and pb can be retained. Would also distribute the price.
The dock and pb can also be kept away to lower heat to avoid a fan in the housing, ideally.
Better hardware should end up leading to better software - its main problem right now.
This 10-in-1 dock even has an SSD enclosure for $80 https://us.ugreen.com/products/ugreen-10-in-1-usb-c-hub-ssd (no affiliation) (no drivers required)
I'd have another dock/power/screen combo for traveling and portable use.
Unfortunately most of my attempts to power both embedded PC and robot motors from the same power bank result in unexpected reboots (even on what is effectively an RC car). And yes, I'm a CS Msc, not an electrical engineer, but ...
Yeah, I've got a new mini PC that reboots at random. It kind of "drops out." Works better with Ubuntu than Manjaro. Failed device. I was thinking of setting up a forum of some sort to discuss various devices that people may be tinkering with. The domain is tinkeriDOTng - just need to do something with it - could end up looking like Discogs with all the variations on gadgets and devices out there - sort of like music. And then there's the whole "builds" dimension.
How much otherwise highly useful stuff are people sitting on that they can't get going (or going well) for one reason or another? Also, recycling I expect will end up important at some point, and knowing who has what (and where) could expedite this process.
I bought a NanoPi R6C a while back, hoping it would be a nice mini PC to run all my containers with super low power usage, or a router. But the software was bad, really bad. I found https://github.com/Joshua-Riek/ubuntu-rockchip/ , which was a godsend, but it still had some shortcomings. After two years it's a bit more stable, but I just keep it around as a backup route into my homelab in case the main machines go down.
I'm a big fan of the Raspberry Pi. I have many - in fact, I have so many that I keep:
``` alias findpi='sudo nmap -sP 192.168.1.0/24 | awk '\''/^Nmap/{ip=$NF}/B8:27:EB|DC:A6:32|E4:5F:01|28:CD:C1/{print ip}'\''' ```
In every `.bashrc` I have.
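For what it's worth, the same OUI filter works without nmap by reading the kernel ARP cache (`/proc/net/arp` on Linux) - a rough Python sketch of the idea, using the same OUI list as the alias:

```python
# Find Raspberry Pis on the LAN by matching hardware addresses in the
# kernel ARP cache against known Raspberry Pi MAC OUIs.
PI_OUIS = ("b8:27:eb", "dc:a6:32", "e4:5f:01", "28:cd:c1")

def find_pis(arp_lines):
    """Return IPs whose hardware address starts with a known Pi OUI."""
    ips = []
    for line in arp_lines[1:]:                 # skip the header row
        fields = line.split()
        if len(fields) >= 4 and fields[3].lower().startswith(PI_OUIS):
            ips.append(fields[0])
    return ips

# Demo on a fabricated ARP table snapshot:
sample = [
    "IP address  HW type  Flags  HW address         Mask  Device",
    "192.168.1.50  0x1  0x2  B8:27:EB:AA:BB:CC  *  eth0",
    "192.168.1.1   0x1  0x2  00:11:22:33:44:55  *  eth0",
]
print(find_pis(sample))  # ['192.168.1.50']
```

On a real box you'd feed it `open('/proc/net/arp').readlines()` - though note the ARP cache only lists recently seen hosts, unlike an active nmap sweep.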
But I just don't get... everything. I don't get the org, I don't get the users on HN. I'm like Skinner in the "no, the kids are wrong" meme.
It's a lambda. It's cheap - plug in, ssh, forget. And it's bloody wonderful.
If you buy a 1 or 2 off eBay, OK, maybe a 3.
After that? Get a damn computer.
Want more bandwidth on the rj45? Get a computer.
Want faster usb? Get a computer.
Want an SSD? Get a computer.
Want a retro computing device? Get a computer.
Want a computer experience? Etc., etc. - I don't need to labour this.
Want something that will sit there, have ssh and run python scripts for years without a reboot? Spend 20 quid on ebay.
People demanded faster horses. And the raspi org, for some damn fool reason, tried to give them one.
There are people bemoaning the fact that Raspberry Pis aren't able to run LLMs, who will then, without irony, complain that the prices are too high. For the love of God, raspi org: stop listening to dickheads on the Internet. Stop paying youtubers to shill. Stop and focus.
You won't win this game.
TFA is about an Orange Pi, with a 12-core Arm chip, a bit more than a Raspberry Pi.
They are chasing the same waterfalls though, Jeff.
As opposed to the rivers and the lakes that they’re used to?
This resonates. I still have a Pi 3B running pihole and it's been up for years. No updates needed, just works. The newer boards trying to compete with mini PCs feels like a different product category entirely.
> ``` alias findpi='sudo nmap -sP 192.168.1.0/24 | awk '\''/^Nmap/{ip=$NF}/B8:27:EB|DC:A6:32|E4:5F:01|28:CD:C1/{print ip}'\''' ```
> On every `.bashrc` i have.
You might want to try mDNS / Avahi instead.
Right. In trying to become everything, it stopped being the cheap little computer people loved in the first place!
> People demanded faster horses. And the raspi org, for some, damn fool, reason, tried to give them.
It's like commercial success is a three step tragedy:
(1) solve 1 problem well
(2) pivot to trying to solve all problems for all users, undermining (1) but chasing mass adoption
(3) pivot back to solving 1 problem again, this time for a very specific whale customer with very specific needs, undermining (1) and (2)
I would say Arduino is at step (3) and RPI is at (2)
Looks like the SoC (CIX P1) has Cortex-A720/A520 cores which are Armv9.2, nice.
I've still been on the hunt for a cheap Arm board with an Armv8.3+ or Armv9.0+ SoC for OSDev stuff, but it's hard to find them in the hobbyist price range (this board included, at $700-900 USD from what I see).
The NVIDIA Jetson Orin Nanos looked good but unfortunately SWD/JTAG is disabled unless you pay for the $2k model...
And it doesn't seem like anything newer than ARMv9.2 is available either, no matter the price point.
> and you end up diving far more into boot chains, vendor GPU blobs and inference runtimes than you ever intended.
Yep, I'll pass. I'm done dealing with that kind of crap, which spreads like a nasty STD through the ARM world. I'm sticking with x64 unless/until ARM gets its crap together.
Excellent! There is an OrangePi Zero 3W. That means my Radxa Zero TV computer now has some competition. It is sad that Raspberry Pi abandoned the small Zero computer. I'll keep the OrangePi Zero 3W in mind next time I need to cobble together a new TV computer.
This seems to be overkill for most of my workloads that require an SBC. I would choose a Jetson for anything computationally intensive, as the Orange Pi 6 Plus's NPU isn't even utilized due to lack of software support. For other workloads, this one seems a bit too large in terms of form factor and power consumption, and the older RK3588 should still be sufficient.
This has much better IO than equivalent RK3588 compute modules / boards, though. Useful if you want to build a distributed storage system.
I cannot, for the love of whatever is holy, understand why they put 5GbE on that thing.
Massively simplified, 2.5G is 1G sped up, while 5G is 10G slowed down. It makes no sense, and the market agrees. The ladder of popularity goes:
1000base-T, <long break>, 10Gbase-T, 2.5Gbase-T, <long break>, 5Gbase-T. (Depends on context ofc, 2.5G is quite popular on APs for example.)
And note that a lot of 10Gbase-T hardware is not NBASE-T compatible, and there are chips that do only 1G, 2.5G and 10G - no 5G.
I guess if your design doesn't work at 10Gbase-T, you try 5? Ugh.
Disappointing on the NPU. I've found it's a point where industry-wide improvement is necessary. People talk tokens/sec, model sizes, which formats are supported... but I rarely see an objective accuracy comparison. I repeatedly see that AI models are resilient to errors and reduced precision, which is what allows 1-bit quantization and whatnot.
But at a certain point I guess it just breaks? So you'd need an objective "I gave these tokens in, I got those tokens out" comparison. But I guess that would need an objective gold-standard ground truth that's maybe hard to come by.
Just try to find some benchmark top_k, temp, etc. parameters for llama.cpp. There's no consistent framing of any of these things. Temp should be effectively 0 so it's at least deterministic in its random probabilities.
Right. There are countless parameters and seeds and whatnot to tweak. But theoretically, if all the inputs are the same, the outputs should be within epsilon of a known good run. I wouldn't even mandate that temperature or any other parameter be a specific value, just that it's the same. That way you can make sure even the pseudorandom processes are the same, so long as nothing pulls from a hardware RNG or something like that. Which seems like a reasonable thing for them to do, so, I don't know - maybe an "insecure RNG" mode.
>Temp should be effectively 0 so it's atleast deterministic in it's random probabilities.
Is this a thing? I read an article about how due to some implementation detail of GPUs, you don't actually get deterministic outputs even with temp 0.
But I don't understand that, and haven't experimented with it myself.
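For context, "temperature 0" is usually implemented as a greedy argmax over the logits rather than actually dividing by zero. A toy Python sketch of the idea (`sample_token` is a made-up name, not any real runtime's API):

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a token index; temperature 0 is treated as greedy argmax."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Otherwise: softmax at the given temperature, then draw from the RNG.
    scaled = [l / temperature for l in logits]
    m = max(scaled)                           # subtract max for stability
    weights = [math.exp(s - m) for s in scaled]
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return i
    return len(logits) - 1

logits = [1.0, 3.0, 2.0]
# Greedy decoding ignores the RNG entirely: always index 1 here.
print(sample_token(logits, 0, random.Random()))  # 1
# With a fixed seed, sampling at temp > 0 is also repeatable:
print(sample_token(logits, 1.0, random.Random(7)) ==
      sample_token(logits, 1.0, random.Random(7)))  # True
```

Note that even greedy decoding can flip between runs if the logits themselves differ by a rounding ulp near a tie - which is where the GPU nondeterminism comes in.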
By default CUDA isn't deterministic because of thread scheduling.
The main difference comes from the rounding order of reductions.
It only makes a small difference - unless you have an unstable floating-point algorithm, but if you have an unstable floating-point algorithm on a GPU at low precision, you were doomed from the start.
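The reduction-order effect is easy to demonstrate even on a CPU - a deliberately contrived Python example showing that float addition is not associative, which is exactly what changes when GPU threads accumulate partial sums in a different order:

```python
# Floating-point addition is not associative, so the order of a
# reduction changes the result.
xs = [1.0, 1e16, -1e16]

forward = sum(xs)             # (1.0 + 1e16) rounds away the 1.0, then cancels -> 0.0
backward = sum(reversed(xs))  # the big terms cancel first, so the 1.0 survives -> 1.0

print(forward, backward)  # 0.0 1.0
```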
The even more confounding factor is that there are specific builds provided by every vendor of these CIX P1 systems: Radxa, Orange Pi, Minisforum, and now MetaComputing... it is painful to try to sort it all out, even for someone who knows where to look.
I couldn't imagine recommending any of these boards to people who aren't already SBC tinkerers.
I was also on board until he got to the NPU downsides. I don't care about using it for an LLM, but I would like the ability to run smallish ONNX models generated from a classical ML workflow. Not only is a GPU overkill for the tasks I'm considering, but I'm also concerned that unattended GPUs out on the edge will be repurposed for something else (video games, crypto mining, or just straight up ganked).
>But I rarely see an objective accuracy comparison.
There are some perplexity comparison numbers for the previous gen - the Orange Pi 5 - in the link below.
Bit of a mixed bag, but doesn't seem catastrophic across the board. Some models are showing minimal perplexity loss at Q8...
https://github.com/invisiofficial/rk-llama.cpp/blob/rknpu2/g...
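For anyone unfamiliar, perplexity is just the exponential of the mean negative log-likelihood per token - lower means the model was less "surprised" by the text. A minimal Python sketch with made-up per-token probabilities (not numbers from the link):

```python
import math

def perplexity(token_probs):
    """exp of the mean negative log-likelihood of the observed tokens."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Hypothetical per-token probabilities from two quantizations of a model:
fp16 = [0.50, 0.25, 0.40, 0.30]
q8   = [0.49, 0.24, 0.39, 0.30]

print(perplexity(fp16))  # lower is better
print(perplexity(q8))    # slightly higher -> small quality loss from quantization
```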
There are a couple outfits making M.2 AI accelerators. Recently I noticed this one: DeepX DX-M1M 25 TOPS (INT8) M.2 module from Radxa[1]: https://radxa.com/products/aicore/dx-m1m
If you're in the business of selling unbundled edge accelerators, you're strongly incentivized to modularize your NPU software stack for arbitrary hosts, which increases the likelihood that it actually works, and for more than one particular kernel.
If I had an embedded AI use case, this is something I'd look at hard.
So this is slightly off topic, but out of curiosity: what are NPUs good for right this very second? What software uses them? What would this NPU be able to run if it were in fact accessible?
This is an honest, neutral question, and it's specifically about what can concretely be done with them right now. Their theoretical use is clear to me. I'm explicitly asking only about their practical use, in the present time.
(One of the reasons I am asking is I am wondering if this is a classic case of the hardware running too far ahead of the actual needs and the result is hardware that badly mismatches the actual needs, e.g., an "NPU" that blazingly accelerates a 100 million parameter model because that was "large" when someone wrote the specs down, but is uselessly small in practice. Sometimes this sort of thing happens. However I'm still honestly interested just in what can be done with them right now.)
Can you please upload your work with the custom image on GitHub?
Complaining about vendor blobs while showing a picture of the machine hooked up to a Chinese-made RAT box is certainly a statement.
Unfortunately it's only available at extremely high prices at the moment. I'd like to pick some up to create a Ceph cluster (with 1x 18TB HDD OSD per node in an 8-node cluster with 4+2 erasure coding).
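For what it's worth, 4+2 erasure coding stores 4 data chunks plus 2 parity chunks per object, so usable space is 4/6 of raw. A quick Python sanity check with the numbers above (the function is just for illustration, ignoring Ceph's own overheads and reserve space):

```python
def ec_usable_tb(nodes, drive_tb, k, m):
    """Usable capacity of a k+m erasure-coded pool (raw * k/(k+m))."""
    raw = nodes * drive_tb
    return raw * k / (k + m)

# 8 nodes x 18 TB with 4+2 EC: survives any 2 node failures.
print(ec_usable_tb(8, 18, 4, 2))  # 96.0 TB usable out of 144 TB raw
```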