loloquwowndueo 2 days ago

“Storage is cheap” goes the saying. Other people’s storage has a cost of zero, so why not just fill it up with 100 copies of the same dependency.

These package formats (I’m looking at you too, snap) are disrespectful of users’ computers to the point of creating a problem where, due to sheer size, things take so long and bog the computer down so much that the resource being consumed is no longer storage but time (system and human time). And THAT is not cheap at all.

Don’t believe me? Install a few dozen snaps, turn the computer off for a week, and watch in amazement as you turn it back on and see it brought to its knees as your computer and network are taxed to the max downloading and applying updates.

  • wtarreau 2 days ago

    Not to mention the catastrophic security that comes with these systems. On a local Ubuntu, I've had exactly 4 different versions of the sudo binary: one in the host OS and 3 in different snaps (some were identical, but there were 4 distinct versions in total). If they had a reason to be different, it's likely bug fixes, but not all of them were updated, meaning that even after my main OS was updated, there were still 3 bogus binaries exposed to users and waiting for an exploit to happen. I find this the most shocking aspect of these systems (and I'm really not happy with the disrespect of my storage, like you mention).

    • brlin2021 2 days ago

      The sudo binaries in the snaps are likely to have their SUID bit stripped, so they won't cause any trouble even if they have known vulnerabilities.
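
      Easy enough to verify, for what it's worth; a quick stat check in Python (the snap path here is made up):

        import os, stat

        # True only if the set-user-ID bit is set, i.e. the binary runs as
        # its owner (root, for sudo) no matter who invokes it.
        mode = os.stat("/snap/some-snap/current/usr/bin/sudo").st_mode  # hypothetical path
        print(bool(mode & stat.S_ISUID))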

  • codedokode a day ago

    Snap/flatpak style application packaging is absolutely necessary. Please let me explain this.

    Every platform needs a way to distribute and install third-party applications. It is unlikely that the authors of the platform wrote all existing software, so you need some way to install applications not included in the OS.

    On Windows, there is such a mechanism - pack your application into a large installer exe. It is awful and not secure, but at least it has been working for ages. For comparison, on Linux there is no such mechanism at all. A typical Linux distribution traditionally has only two supported ways to install software:

    a) install more components from OS repository

    b) write and compile the software yourself

    While there is a lot of third-party software for Linux, there is no universal distribution mechanism. Typically the software developer only supports the release they are using themselves. There may be ports - by other developers or by the distribution maintainers - but they are often broken and don't even start (for example, the port of Waydroid in Fedora).

    Some port installation methods are outright dangerous, for example: running a curled script through sudo or adding a third-party repository to your package manager. That is a sure method to get a broken system, and it will be totally your fault, because your distribution doesn't support such installation methods, so you got the result that was to be expected.

    In my experience, Fedora, Ubuntu and Debian are especially bad at supporting applications other than GIMP or LibreOffice: the ports for these distributions are often broken, buggy, crash-prone, or simply nonexistent. For example: Waydroid, Carla, Ardour.

    The reason is that there are hundreds of releases of different distributions and nobody is going to port, test and maintain all applications for all of them. So the obvious solution is to choose a standard platform and write applications against this platform. I see no other solution. If I am writing some GUI application I am not interested in learning how Linux distributions are different from each other. I would rather ship an Alpine virtual machine image and not waste my time.

    Flatpak/snap might become such a platform, but why are they so buggy? As I remember, Steam in Flatpak is broken and cannot properly use the GPU, VS Code is broken, etc.

    • loloquwowndueo a day ago

      No, this is an orthogonal concern. Bundling all the dependencies and killing users’ hard disks is not a solution to the “Every platform needs a way to distribute and install third-party applications” problem. If anything, the proliferation of packaging formats and useless quibbles between snap and flatpak move us farther from the solution, as there are now several competing dependency-bundling installers which require installing a package handler anyway.

      In that respect flatpak is better because it’s more universally available than the Ubuntu-specific snaps (attempts at making snap cross-distro are laughable and failed) but it still introduces a bunch of tangential issues which are too much of a bother for the average user versus just doing “apt install” and moving on with their lives. PPAs cover this acceptably if suboptimally for Deb packages.

  • m4rtink 2 days ago

    Snaps do zero deduplication and bundle everything AFAIK - flatpak at least does some deduplication at the file level and has shared runtimes.

    • brlin2021 2 days ago

      This statement is false as snaps also have shared runtimes known as "content snaps".

      A common example is the ones with the gnome- prefix and the ones that end with the -themes suffix.

      • loloquwowndueo 2 days ago

        Wherein snaps found themselves reinventing shared libraries - at which point, what’s really the point?

        • Seattle3503 2 days ago

          I think the point is that maintainers and developers now have a choice of whether they want to share libraries or not. Before, the only choice was to share dependencies.

          • hedora 2 days ago

            There has been a choice between shared and static linking since before the Linux kernel existed.

            What system are you talking about?

            • patrakov 2 days ago

              Any modern distro. Static libraries are simply not built anymore, so you have nothing to link with, unless you rebuild the whole subsystem yourself.

              • hedora 3 hours ago

                If that's true, it'll be the second thing I check after making sure a distro I'm considering isn't using systemd.

                Anyway, you had me worried for a minute, but:

                https://packages.debian.org/source/bookworm/libs/

                Click on a C library. Click on the *-dev package. There's a 1:1 mapping of .so to .a in the packages I spot checked.

                Now I'll go back to enjoying Devuan.

  • neuroelectron 2 days ago

    For a long time, storage was getting cheaper all the time, but we've hit scaling walls in both CPUs and drives. I remember when I was a kid and bought MechWarrior 2, a game that could use up to 500 MB! The guy working the video game locker warned me "are you sure you have enough hard drive space?" after I had just bought a 2 GB drive for like $60, or something, I don't remember exactly. A concern that would have been valid maybe a year earlier.

  • dheera 2 days ago

    To be fair, shared libraries have been problematic since the beginning of time.

    In the Python world, something wants numpy>=2.5.13, another wants numpy<=2.5.12, yet Python has still not come up with a way to just do "import numpy==2.5.13" and have it pluck exactly that version and import it.

    In the C++ world, I've seen code that spits out syntax errors if you use a newer version of gcc, others that spit out syntax errors if you use an older version of gcc, apt-get overwriting the shared library you depended on with a newer version, and lots of other issues. Install CUDA 11.2 and it tries to uninstall CUDA 11.1, never mind that you had something linked to it, and that everything else in that ecosystem disobeys versioning semantics and doesn't work with later minor revisions.

    It's such a shitshow that it fully makes sense to bundle all your dependencies if you want to ship something that "just works".

    For your customer, storage is cheaper than employee time wasted getting something to work.

    • loloquwowndueo 2 days ago

      Right, but snaps don’t solve dependency hell (see content snaps, which are shared library bundles).

    • o11c 2 days ago

      That's what everybody uses `venv` for. Or `virtualenv` if you're stuck on old Python.

      But as a rule, `<=` dependencies mean there's either a disastrous fault with the library, or else the caller is blatantly passing all the "do not enter" signs. `!=` dependencies by contrast are meaningful just to avoid a particular bug.
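
      For what it's worth, the stdlib exposes the same isolation programmatically; a minimal sketch (app names made up, numpy pins reused from the comment above, POSIX paths assumed):

        import venv, subprocess

        # Each app gets its own environment with its own pinned numpy;
        # neither install can clobber the other.
        for app, pin in [("app-a", "numpy==2.5.13"), ("app-b", "numpy==2.5.12")]:
            venv.create(app, with_pip=True)
            subprocess.run([f"{app}/bin/pip", "install", pin], check=True)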

      • int_19h 2 days ago

        Virtual environments don't solve the problem of two dependencies that you need having conflicting requirements.

        • o11c 2 days ago

          That subproblem is fundamentally unfixable unless you're willing to allow incompatible objects to be passed around, which is a really bad idea.

          If your code does:

            o = foo_v1.get_obj()
            foo_v2.use_obj(o)
          
          then this is almost always undefined behavior, since the authors of `foo` will almost certainly not write their APIs to accept instances with different internals.

          • Dylan16807 a day ago

            With a more advanced system you can look at whether a package's use of foo is exposed in its API or completely internal, and then allow different isolated islands of packages to use different versions of foo.
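
            Python can actually be coaxed into this today for self-contained modules, though no packaging tool does it for you; a rough sketch (paths hypothetical):

              import importlib.util, sys

              def load_isolated(alias, path):
                  # Import a module under a unique alias so two versions can
                  # coexist in sys.modules. Only safe for self-contained
                  # modules; packages that import themselves by their real
                  # name will still collide.
                  spec = importlib.util.spec_from_file_location(alias, path)
                  mod = importlib.util.module_from_spec(spec)
                  sys.modules[alias] = mod
                  spec.loader.exec_module(mod)
                  return mod

              foo_v1 = load_isolated("foo_v1", "/vendor/foo-1.0/foo.py")
              foo_v2 = load_isolated("foo_v2", "/vendor/foo-2.0/foo.py")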

      • dheera 2 days ago

        Venvs also suck because the user has to create and activate them.

        It would be nice if a foo.py file could just deal with all of that.

        A while ago I created this thing as a thought experiment. It likely doesn't work with more recent versions of Python, though: https://news.ycombinator.com/item?id=24735303

        It was partly in jest but I was not entirely un-serious; I think "dealing with dependencies" is quite possibly the biggest reason why Python isn't used to distribute end-user applications. It's a wonderful language with a really shitty import system. There are various attempts to make the experience better (pipx, poetry) but you still can't ship a .py file to someone and have them double-click and run it. They still have to conda something, venv something, pipx something.
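
        (Since then, PEP 723 inline script metadata has landed on roughly this idea: runners like uv, or recent pipx, read a comment header at the top of the .py file and build the environment on demand. Still not double-click-and-run, but close:)

          # /// script
          # requires-python = ">=3.11"
          # dependencies = ["requests"]
          # ///
          import requests

          # `uv run foo.py` (or `pipx run foo.py`) creates a cached venv,
          # installs requests into it, then executes the script.
          print(requests.get("https://example.org").status_code)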

  • musicnarcoman 2 days ago

    "Storage is cheap" if you do not have to pay for it. It is not so cheap when you are the one paying for the organizations storage.

    • CoolCold 2 days ago

      You have savings from not using Windows in such an org - likely your Linux will be free or the cheaper option.

  • zdragnar 2 days ago

    It would be fantastic if there was a way for these to declare what libraries they needed bundled, and a manager that would install the necessary dependencies into a shared location, so that only what wasn't already installed got downloaded.

    Oh wait...

    • Gigachad 2 days ago

      Flatpaks do that. The difference is that they let you pick any version of libraries rather than locking everything to fixed versions. So you end up with software that’s less broken, updates sooner, but has multiple copies of libraries on your computer.

    • codedokode a day ago

      Nice idea, but there is a small problem.

      There are probably tens of thousands (if not more) of third-party applications for Linux-like systems, and hundreds of distributions, each making new releases twice a year. This multiplies to 10 000 × 100 × 2 = 2 000 000 ports per year.

      Who is going to write 2 million ports per year, thoroughly test them, and fix the compatibility bugs?

    • gjsman-1000 2 days ago

      Sure, but we’ve tried that technique for about 20 years.

      We learned that most app developers hate it, to the point that they don't even bother supporting the platform unless they are FOSS diehards.

      Those that do, screech about not using the packaged version on almost all of their developer forums, most often because the packages are out of date and users blame them for bugs that were already fixed.

      This actually is infuriating - imagine fixing a bug, but 2 years later, the distribution isn’t shipping your update, and users are blaming you and still opening bug reports. The distribution also will not be persuaded, because it’s the “stable” branch for the next 3 years.

      Basically, Linux sucks terribly, either way, with app distribution. Linux distributions have nobody to blame but themselves for being ineffectual here.

      • dredmorbius 2 days ago

        ...imagine fixing a bug, but 2 years later, the distribution isn’t shipping your update...

        This grossly misstates the concept of a stable distribution (e.g., Debian stable, with which I'm most familiar).

        Debian stable isn't "stable" in the sense that packages don't change, to the point that updates aren't applied at all; it's stable in that functionality and interfaces are stable. The user experience (modulo bugs and security fixes) does not change.

        Stable does receive updates that address bugs and security issues. What Stable does not do is radically revise programs, applications, and libraries.

        Though it's more nuanced than that even: stable provides several options for tracking rapidly-evolving software, the most notorious and significant of which are Web browsers, with the major contenders updating quite frequently (quarterly or monthly, for example, for Google Chrome "stable" and "dev" respectively). That's expanded further with Flatpak, k8s, and other options in recent years.

        The catch is that updates require package maintainers to work on integrating and backporting fixes to code. More prominent and widely-used packages do this. The issue of old bugs being reported to upstream ... is a breakage of the system in several ways: distros' bug-tracking systems (BTSes) should catch (and be used by) their users, and upstream BTSes arguably should reject tickets opened on older (and backported) versions. The solutions are neither purely technical nor social, which makes them challenging. But in reality we should admit that:

        - Upstream developers don't like dealing with the noise of stale bugs.

        - Users are going to rant to upstream regardless of distro-level alternatives.

        - Upstreams' BTSes should anticipate this and automate redirection of bugs to the appropriate channel with as little dev intervention as possible. Preferably none.

        - Distros should increase awareness and availability of their own BTS systems to address bugs specific to the context of that distro.

        - Distro maintainers should be diligent about being aware of and backporting fixes and only fixes.

        - Distros should increase awareness and availability of alternatives for running newer versions of software which aren't in the distro's own stable repos.

        Widespread distance technological education is a tough nut regardless; there will be failings. The key is that to the extent possible those shouldn't fall on upstream devs. Though part of that responsibility, and awareness of the overall problem, does fall on those upstream devs.

        • dredmorbius 2 days ago

          And just to provide some references so you don't have to take my word for it, from the Debian FAQ:

          "2.2. Are there package upgrades in "stable"?"

          Generally speaking, no new functionality is added to the stable release. Once a Debian version is released and tagged "stable" most packages will only get security updates.... there are some cases in which packages will be updated in stable ... When an urgent update is required to ensure the software continues working. The package is a data package and the data must be updated in a timely manner. The package needs to be current to be useful to the end user (e.g. some security software, such as anti-malware products)....

          And:

          Users that wish to run updated versions of the software in stable have the option to use "backports". Backports are recompiled packages from testing (mostly) and unstable (in a few cases only, e.g. security updates), so they will run without new libraries (wherever it is possible) on a stable Debian distribution.

          Debian is strict in interpreting security updates:

          Security updates serve one purpose: to supply a fix for a security vulnerability. They are not a method for sneaking additional changes into the stable release without going through normal point release procedure.

          <https://www.debian.org/doc/manuals/debian-faq/getting-debian...>

          • Dylan16807 2 days ago

            Your first comment says "Stable does receive updates that address bugs and security issues."

            But your quote about stable says "most packages will only get security updates".

            So assuming their bug fix isn't security-relevant, it sounds like their original complaint is valid? I don't see how it's "grossly misstating" how stable works.

            Backports are useful but the default is that the user is on the buggy stable version.

            • dredmorbius a day ago

              In practice, Debian stable gets bugfixes. I included (and quoted) the Debian FAQ on this matter largely so that I wasn't simply arguing by assertion.

              See for example the Debian 12.10 release notes. This is an update to the Debian Stable v.12 release:

              The Debian project is pleased to announce the tenth update of its stable distribution Debian 12 (codename "bookworm"). This point release mainly adds corrections for security issues, along with a few adjustments for serious problems. Security advisories have already been published separately and are referenced where available.

              <https://www.debian.org/News/2025/20250315.en.html>

              Strong emphasis on "mainly adds corrections", but "along with a few adjustments for serious problems."

              That is, in practice, Debian stable receives both security and bugfix updates.

              You and other readers are strongly encouraged to look through Debian sources (release notes, Policy, Constitution, and individual package updates within stable repos) to verify other questions before posting further concerns here.

        • codedokode a day ago

          > Users are going to rant to upstream regardless of distro-level alternatives.

          This might be because upstream uses GitHub while the distribution bug tracker requires some high-level programming skills simply to post anything there. For example, try to post a bug to Debian.

          Also simply obtaining a backtrace with symbols is a pain in most distributions.

      • rlpb 2 days ago

        > The distribution also will not be persuaded, because it’s the “stable” branch for the next 3 years.

        This is exactly what users want, though. E.g. if they want to receive updates more frequently on Ubuntu, they can use the six-monthly releases, but most Ubuntu users deliberately choose the LTS over that option because they don't want everything updated.

        • gjsman-1000 2 days ago

          But if you’re a developer, that doesn’t change that many users do not understand, will not understand, and will open bug reports regularly.

          When that happens, guess what you do? You trademark your software’s name and use the law to force distributions to not package unless concessions are granted. We’re beginning to see this with OBS, but Firefox also did this for a while.

          As Fedora quickly found, when trademark law gets involved, any hope of forcing developers to allow packaging through a policy or opinion vote becomes hilariously, comically ineffectual.

          The other alternative is to just not support Linux. Almost all major software has been happily taking that path, and the whole packaging mess gives no incentive to change.

          • rlpb 2 days ago

            > You trademark your software’s name and use the law to force distributions to not package unless concessions are granted.

            It isn't clear if this behaviour is legally enforceable. Distributions typically try to avoid the conflict. But they could argue that "we modified Firefox to meet our standards and here is the result" is a legally permitted use of that trademark. To my knowledge, this has never been tested.

          • mananaysiempre 2 days ago

            > When that happens, guess what you do?

            Ban the user that did not read the instructions to go to the distro’s maintainers first.

        • martinald 2 days ago

          At the end of the day, the 'traditional' Linux packaging system where distributions do it all for you is totally outdated. Tbh I can remember being extremely annoyed with this in the early/mid 2000s, so I don't know if it was ever a good model.

          On SaaS/mobile apps you often have daily new versions of software coming out. That's what users/developers want. They do not want 3-year+ stale versions of their software being 'supported' by a third-party distro. I put supported in quotes as it only really applies to security and whatnot, not terrible bugs in the software that are fixed in later versions.

          Even on servers, where it arguably makes more sense, it has been entirely supplanted by Docker, which ships the _entire OS_ more or less as the 'app'. And even more damningly, most/nearly all people will use a 3rd-party Docker repo to manage updates of the Docker 'core' software itself.

          And the reason no one uses the six-monthly releases is because the upgrade process is too painful and regresses too much. But even if it were 100% bulletproof, no one wants to be running 6-12-month-out-of-date software either. Chrom(ium) is updated monthly and has a lot of important new features in it. You don't really want to be running 6-9 months out of date on that.

          • HdS84 2 days ago

            Exactly. In theory, the original Windows 10 model is the one most users want: a perpetually up-to-date OS which also runs up-to-date software. Yes, there might be reasons to pin something to an older version, but if this PC is on a network, security alone tells you to update ASAP. Don't get me wrong, a working package manager is a very good addition to this model. But currently, most of the time setting up an LTS Linux system is spent updating ancient git/docker/whatever versions.

            • hedora 2 days ago

              Which users want UIs to change (and often break) multiple times a month?

              Do you have any evidence to back that statement up?

    • eikenberry 2 days ago

      It would be even more fantastic if there was a way to compile everything into a single binary and distribute that so that there are no dependencies (other than the kernel).

      Oh wait...

      • hedora 2 days ago

        Yeah, but what if you wanted multiple copies of a library and also wanted to let more than one program share each version?

        You’d need some sort of system that stores files and lets you name those files and also makes it possible for software to look them up by name.

        Oh wait…

        • eikenberry 2 days ago

          Shared libraries suck. It is much better to have a single binary w/ zero library dependencies. The only reason we can't have this is because of sub-standard tooling that makes compiling so expensive that we must suffer shared libraries.

          • tgpc 2 days ago

            shared libs happened because memory was scarce

            pretty cool that we are undoing it as a design decision

            but it will take a long time

            • hedora 3 hours ago

              You missed the joke.

              Flatpak is not getting rid of shared libraries. They're just reimplementing them in a way that's strictly worse in multiple ways.

              Re-read the section "How Flatpak Tries to Save Space: Runtimes and Deduplication" in the article.

          • 01HNNWZ0MV43FF 2 days ago

            Not really. More like, there are some languages that don't support static builds (Python, glibc) and other languages that don't really support dynamic libraries (Rust, which has no stable ABI for them), and I'm so tired, and this unit was built to frolic in meadows

    • ImJamal 2 days ago

      That is how Flatpak works right now. The article describes two different ways of handling it: runtimes and deduplication.

      The problem is the applications have to use the exact same version of a library to get the benefits. With traditional package managers there is usually only 1 version available. With Flatpak you can choose your own version, which results in many versions, and as such they do not share dependencies. If distros had multiple versions of libraries you would end up with the exact same problem.

  • peter-m80 2 days ago

    Flatpak deduplicates dependencies and shares runtimes between apps.

  • api 2 days ago

    There are things like content-defined chunking and content-based lookup. Evidently that’s too hard.
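
    The core of content-based lookup fits in a few lines; a toy sketch using hard links (store path hypothetical; see the reply below on why hard links are their own can of worms), which is roughly what Flatpak's OSTree backend does:

      import hashlib, os, shutil

      STORE = "/var/lib/objstore"  # hypothetical content-addressed store

      def dedup(path):
          # Identical contents hash to the same object, so the Nth copy of
          # a library costs one hard link instead of its full size.
          digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
          obj = os.path.join(STORE, digest)
          if not os.path.exists(obj):
              shutil.copy2(path, obj)  # first copy pays full price
          os.remove(path)
          os.link(obj, path)           # every later copy is ~free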

    • XorNot 2 days ago

      The problem on Linux is that hard links are exactly what you don't want.

      If hard links from the get-go were copy-on-write, then I suspect content-defined storage would've become the standard, because it would be easy.

      Instead we have this construct which makes it hard and dangerous (hard links hide data dependencies) on most Linux filesystems and no good answers (even ZFS can't easily handle a cp --reflink operation, and the problem is it's not the default anyway).
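
      (For reference, CoW cloning is exposed on Linux as a single ioctl, FICLONE, which is what cp --reflink uses; a minimal sketch that works on btrfs/XFS and fails on ext4:)

        import fcntl

        FICLONE = 0x40049409  # _IOW(0x94, 9, int) from <linux/fs.h>

        def reflink(src, dst):
            # Clone src's extents into dst copy-on-write: instant, and free
            # until either file is modified. Raises OSError on filesystems
            # without reflink support (e.g. ext4).
            with open(src, "rb") as s, open(dst, "wb") as d:
                fcntl.ioctl(d.fileno(), FICLONE, s.fileno())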

      • api 2 hours ago

        Use either symbolic links (which is fine in most cases) or overlay filesystems. I agree that hard links are usually not what you want.

  • throwaway314155 2 days ago

    > watch in amazement as you turn it back on and see it brought to its knees as your computer and network are taxed to the max downloading and applying updates.

    A touch overly dramatic...

    • loloquwowndueo 21 hours ago

      If you think it’s dramatic it’s because you have never seen it happen - I have. My description is accurate :)

  • 2OEH8eoCRo0 2 days ago

    Devil's advocate: we have hundreds of stupid distros making choices, and the less I need to deal with their builds the better.

    • api 2 days ago

      Containerization is and always was the ultimate “fuck it” answer to these problems.

      “Fuck it, just distribute software in the form of tarballs of the entire OS.”

    • delusional 2 days ago

      Yeah, I only trust the random developers that are probably running windows to package my Linux software.

      The people making those "stupid distros" are (most likely by number) volunteers working hard to give us an integrated experience, and they deserve better than to be called "stupid".

      • codedokode a day ago

        Yes but they are working in the wrong direction. Instead of trying to make ports for existing software they should just provide a standard environment for third-party apps.

        • delusional a day ago

          How would you provide a "standard" environment when the third-party apps all depend on other third-party apps? What would the "standard" environment be for streamlink when it itself is part of the "standard" environment of streamlink-twitch-gui? How about curl, mpv, ffmpeg, libc? How do you evolve such a "standard" environment without the assistance of the third parties (because many of them will never help you)?

          Debian puts in a massive effort to be a long-term stable "standard" system. I've found that effort to be wasteful. I'd much rather just take the occasional (although surprisingly rare) breakage than live with the limitations of old packages. That's a matter of taste though, and greatly depends on the effort of the upstream developers.

INTPenis 2 days ago

I use flatpaks daily but not many apps. Because I've been on Atomic Linux for a couple of years now, flatpak has become part of my daily life.

On this work laptop I have three flatpaks: Signal, Chromium and Firefox. They take 1.6GiB in total.

On my gaming PC I have Signal, Flatseal, Firefox, PrismLauncher, Fedora MediaWriter and Steam, and obviously they take over 700G because of the games in Steam, but if I count just the other flatpaks they're 2.2GiB.

So yeah, not great, but on the other hand I don't care, because I love the packaging of cgroups-based software and I don't need many of them. I mean, my container images take up a lot more space than my flatpaks.

qbane 2 days ago

I hope articles like this can at least provide some hints when the size of a flatpak store grows without bound. It is definitely more involved than "it bundles everything like a node_modules directory hence..."

[Bug]: /var/lib/flatpak/repo/objects/ taking up 295GB of space: https://github.com/flatpak/flatpak/issues/5904

Why flatpak apps are so huge in size: https://forums.linuxmint.com/viewtopic.php?t=275123

Flatpak using much more storage space than installed packages: https://discussion.fedoraproject.org/t/flatpak-using-much-mo...

  • catlikesshrimp 2 days ago

    Your comment probably took more effort than the prompt given to the AI that produced said article.

    Conclusion: Thank you for the links

compsciphd 5 hours ago

I'll repeat myself, but this is because Docker (and all its descendants) didn't understand my work (or at least took the easy way out).

https://www.usenix.org/conference/usenix-atc-10/apiary-easy-...

I argued for (and built, years prior to Docker) a container-oriented file system infrastructure that combined the best of Linux-style package management and union file systems, where instead of "packages" you had "layers" (analogous to packages) and an "image" that was just a set of layers. I imagined that, instead of a Linux distribution having an archive of installable packages, it would provide a mirror of usable layers (and PoC'd this by converting a large enough set of Debian packages into layers to cover the applications needed for my PoC).

In such a world, you don't waste (directly at least) any additional space, as you are sharing the packages directly (and therefore the underlying files, which can also have memory benefits in terms of easier sharing of read-only code pages, due to being the same page on disk).

You do still have a concept of version sprawl, as different images can be using different versions of the same package, but it's not very visible. Each image enumerates directly what "shared" components it is using. One could argue that just like upgrading a regular deb/rpm environment is relatively straightforward, "upgrading" (or in reality, creating a new image version from an existing version) in such a world is also easy. Just upgrade the shared layer versions in the image manifest/definition.

I was trying to create a world where you could upgrade the container easily (ex: move the running container's private RW layer to a new container on upgrade, or in a sense resolve the container's layers from version A to version B by swapping around the layers that have changed), but one might argue that today that isn't viewed as valuable, and I might agree. I was trying to demonstrate a system that supported what I called persistent and ephemeral containers, with persistent containers being what became called pets and ephemeral containers being what became called cattle and the world today wants everything to be cattle.
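
To make the model concrete, a toy sketch (layer names hypothetical):

    # An "image" is just a manifest naming shared layers; upgrading swaps a
    # layer reference, and the new image shares every unchanged layer (and
    # its on-disk files) with the old one.
    image_v1 = {"rw": "myapp-data", "layers": ["debian-base/12.1", "libssl/3.0.9", "myapp/1.0"]}
    image_v2 = {"rw": "myapp-data", "layers": ["debian-base/12.1", "libssl/3.0.11", "myapp/1.0"]}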

massysett 2 days ago

“ The name "Flatpak" is even a nod to IKEA's flatpacking,”

Which is hilarious: an IKEA flat pack takes up less space than the finished product. Linux flatpack is the exact opposite.

jasonpeacock 2 days ago

The article mentions that Flatpak is not suitable for servers because it uses desktop features.

Does anyone know what those features are or have more details?

Linux generally draws a thin line between server and desktop; having “desktop-only” dependencies is unusual unless it’s something like needing the KDE or GNOME GUI libraries?

  • mananaysiempre 2 days ago

    This may refer to xdg-desktop-portal[1], which is usable without Flatpak, but Flatpak forces you to go through it to access anything outside the app’s private sandbox. In particular, access to user files is mediated through a powerbox (trusted file dialog) [2] provided by the desktop environment. In a sense, Flatpak apps are normal Linux apps to about the same extent that WinRT/UWP apps are normal Windows apps—close, but more limited, and you’re going to need significant porting in either direction.

    (This has also made an otherwise nice music player[3] unusable to me other than by dragging and dropping individual files from the file manager, as all of my music lives in git-annex, and accesses through git-annex symlinks are indistinguishable from sandbox escape attempts. On one hand, understandable; on the other, again, the software is effectively useless because of this.)

    [1] https://wiki.archlinux.org/title/XDG_Desktop_Portal

    [2] https://wiki.c2.com/?PowerBox

    [3] https://apps.gnome.org/Amberol

    • circularfoyers 2 days ago

      > On one hand, understandable; on the other, again, the software is effectively useless because of this.

      Just in case you didn't already know, you can use Flatseal[1] to add the symlinked paths outside of those in the default whitelisted paths.

      I think it's a good thing Flatpak has followed a security permissions system similar to Android's, as I think it's great for security, but I definitely think they need to make this process more integrated and user-friendly.

      [1] https://flathub.org/apps/com.github.tchx84.Flatseal

      • Vilian a day ago

        I can change those permissions directly in the KDE settings, without the need to download Flatseal; other DEs need to implement their own.

    • Vilian a day ago

      You can allow an application complete access to a folder or your home directory; use Flatseal for that.

  • ponorin 2 days ago

    It assumes that you have a DE running and depends on features like D-Bus. So it's not designed to run headless except for building flatpak packages.

  • LtWorf 2 days ago

    AFAIK it cannot do CLI applications at all.

    • jeroenhd 2 days ago

      It can, but because the Flatpak system depends on APIs like D-Bus, getting those to work in headless environments (SSH, framebuffer console, raw TTY) is a pain.

      Flatpak will even helpfully link binaries you install to a directory you can add to your $PATH to make command line invocation easy.

account-5 2 days ago

I can't really comment about snap since I don't use Ubuntu, but I thought flatpaks would work similarly to how portable apps on Windows do. Clearly I'm wrong, but how is it that Windows can have portable apps of a similar size to their installable versions and Linux cannot? I know I'm missing something fundamental here, like how people blame Linux for lack of hardware support without acknowledging that hardware vendors do the work for Windows to work correctly.

Either way, disk space is cheap and abundant now. If I need the latest version of something I will use flatpaks.

  • blahaj 2 days ago

    Just a guess, but Windows executables probably depend on a bunch of Windows APIs that are guaranteed to be there, while Linux systems are much more modular and do not have a common, let alone stable, userspace ABI. You can probably get small graphically-capable binaries if you depend on Qt and just assume it to be present, but Flatpak precisely does not do that and bundles all the dependencies to be independent from shared dependencies of the OS outside of its control. The article also mentions that AppImages can be smaller, probably because they assume some common dependencies to be present.

    And of course there are also tons of huge Windows software that come with all sorts of their own dependencies.

    Edit: I think I somewhat misread your comment and progval is more spot-on. On Linux you usually install software with a package manager that resolves dependencies and only installs the unsatisfied ones, resulting in small install sizes in many cases. On Windows that is not really a thing, so installers just package all the dependencies they cannot expect to be present, and the portable version does the same.

  • badsectoracula 2 days ago

    The equivalent of "Windows portable apps" on Linux isn't flatpaks (these add a bunch of extra stuff and need some sort of support from the OS) but AppImages[0]. AppImages are still not 100% the same (and can never be as Windows applications can rely on A LOT more stuff to be there than Linux desktop apps) but functionally/UX-wise they're the closest: you download some program, chmod +x it and run it like any other binary you'd have on your PC.

    Personally i vastly prefer AppImages to flatpaks (in fact i do not use flatpaks at all, i'd rather build the program from source - or not use it if the build process is too convoluted - instead).

    [0] https://appimage.org/

    • codedokode a day ago

      Looking at their architecture, they seem to be a pain to run safely (sandboxed). For example, you cannot take away access to the mount syscall, since they mount themselves using FUSE.

      Also are they easy to debug? Do they ship with debugging symbols? Googling around shows that it might be tricky.

  • kmeisthax 2 days ago

    It's a matter of standardization and ABI stability. Linux itself promises an eternally stable syscall ABI, but everything else around it changes constantly. Windows is basically the opposite: no public syscall ABI, but you can always get a window on screen by linking USER32.dll and poking it with the correct structures. As a result, Windows apps can assume more, while desktop Linux apps have to ship more.
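
    (That stability is why even a couple of lines of Python ctypes against USER32 work on every Windows you can find today:)

      import ctypes

      # MessageBoxW has kept the same ABI for decades; no SDK, no manifest,
      # just a DLL that is guaranteed to be there. Windows-only, of course.
      ctypes.windll.user32.MessageBoxW(None, "Hello from USER32", "Win32", 0)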

    • codedokode a day ago

      Linux is moving toward the Windows model by shipping userspace libraries. For example, ALSA has a library, DRM has a library.

  • dismalaf 2 days ago

    "Portable" apps on Windows just don't write into the registry or save state in a system directory. They can still assume every Windows DLL since the beginning of time will be there.

    Versus Linux where you have Gnome vs. KDE vs. Other and there's less emphasis on backwards compatibility and more on minimalism, so they need to package a lot more dependencies (potentially).

    If you only install Gnome Flatpaks they end up smaller since they can share a bunch of components.

  • progval 2 days ago

    Installable versions of Windows apps still bundle most of the libraries like portable apps do, because Windows does not have a package manager to install them.

    • maccard 2 days ago

      Windows does have a package manager and has for the last 5 years.

      • kbolino 2 days ago

        Apart from the Microsoft Visual C++ Runtime, there's not much in the way of third-party dependencies that you as a developer would want to pull in from there. Winget is great for installing lots of self-contained software that you as an end user want to keep up to date. But it doesn't really provide a curated ecosystem of compatible dependencies in the way that the usual Linux distribution does.

        • maccard 2 days ago

          Ok but that’s a different argument to “windows doesn’t have a package manager”

          • kbolino 2 days ago

            No, this is directly relevant to the comparison, especially since the original context of this discussion is about how Windows portable apps are no bigger than their locally installed counterparts.

            A typical Linux package manager provides applications and libraries. It is very common for a single package install with yum/dnf, apt, pacman, etc. to pull in dozens of dependencies, many of which are shared with other applications. Whereas, a single package install on Windows through winget almost never pulls in any other packages. This is because Windows applications are almost always distributed in self-contained format; the aforementioned MSVCRT is a notable exception, though it's typically bundled as part of the installer.

            So yes, Windows has a package manager, and it's great for what it does, but it's very different from a Linux package manager in practice. The distinction doesn't really matter to end users, but it does to developers, and it has a direct effect on package sizes. I don't think this situation is going to change much even as winget matures. Linux distributions carefully manage their packages, while Microsoft doesn't (and probably shouldn't).

            • maccard 2 days ago

                I never said that WinGet was a drop-in replacement for yum - but the parent's claim that Windows doesn't have a package manager isn't true.

                There are plenty of packages that require you to add extra sources to your package manager that are not maintained by the distro. Docker [0] has official instructions to install via their package source. WinGet allows third-party sources, so there's no reason you can't use it. It natively supports dependencies too. The fact that applications are packaged in a way that doesn't utilise this for WinGet is true - but again, I was responding to the claim that Windows doesn't have a package manager.

              [0] https://docs.docker.com/engine/install/fedora/#install-using...

              • Dylan16807 a day ago

                > I never said that WinGet was a drop in replacement for yum - but the parents claim that windows doesn’t have a package manager isn’t true.

                Context matters. They were talking about a type of package manager.

                But even without caring about context, the sentence was not "Windows does not have a package manager". The sentence ended with "Windows does not have a package manager to install them" and "them" refers to things that winget generally does not have.

          • homebrewer 2 days ago

            Not as understood by users of every other operating system, even macOS. It's more of an "application manager". Microsoft has a history of developing something and reusing the well-understood term to mean something completely different.

      • keyringlight 2 days ago

        Assuming you're talking about winget, that seems to operate either as an alternative CLI interface to the MS Store, with a separate database developers would need to add their manifests to, or as a way to download and run normal installers in silent mode. For example, if you do winget show "Adobe Acrobat Reader (64-bit)" you can see what it will grab. It's a far cry from how most Linux package managers operate.

      • mjevans 2 days ago

        Windows 2020 - Welcome to Linux 1999 where the distro has a package manager that has just about everything most users will ever need as options to install from the web.

        • maccard 2 days ago

          I can say the same thing about Linux - it’s 2025 and multi-monitor, Bluetooth and WiFi support still don’t work.

          • yjftsjthsd-h 2 days ago

            Er, yes they do? I guess things could be spotty if you don't have drivers (which... is true of any OS), but IME that's rare. But I have to ask because I keep hearing variations of this: What exactly is wrong with */Linux handling of multi-monitor? The worst I think I've ever had with it is having to go to the relevant settings screen and tell it how my monitors are laid out and hitting apply.

            • maccard 2 days ago

              >I guess things could be spotty if you don’t have drivers

              Sure, and this unfortunately isn’t uncommon.

              > What exactly is wrong with */Linux handling of multi-monitor?

              X11’s support for multiple monitors that have mismatched resolutions/refresh rates is… wrong. Wayland improves upon this but doesn’t support G-Sync with NVIDIA cards (even in the proprietary drivers). You might say that’s not important to you and that’s fine, but it’s a deal breaker to me.

              • mjevans 2 days ago

                Maybe they're using a Desktop Environment that poorly expresses support for it?

                I have a limited sample size, but xrandr on the command line and GUI tools in KDE Plasma and (not as recently) LXQt (it might have been LXDE) work just fine in the laptop + TV or projector case.

                • yjftsjthsd-h 2 days ago

                  > I have a limited sample size, but xrandr on the command line and GUI tools in KDE Plasma and (not as recently) LXQt (it might have been LXDE) work just fine in the laptop + TV or projector case.

                  I'm fond of arandr; nice GUI, but also happily saves xrandr scripts once you've done the configuration.

          • account-5 2 days ago

            The only things you can say in the context of the few pieces of bleeding-edge hardware that aren't supported by Linux are that:

            1. The hardware vendors are still not providing support the way they do for Windows.

            2. The Linux devs haven't managed to adapt to this new hardware.

          • mjevans 2 days ago

            FUD (Fear Uncertainty Doubt).

            Every OS has its quirks, things you might not recall as friction points because they're expected.

            I haven't found any notable issues with quality hardware, possibly with some need to verify support in the case of radio-transmitter devices. You'd probably have the same issue on, e.g., Mac OS X.

            As consumers we'd have an easier time if: 1) The main chipset and 'device type ID' had to be printed on the box. 2) Model numbers had to change in a visible way for material changes to the Bill of Materials (any components with other specifications, including different primary chipset control methods). 3) Manufacturers at least tried one flavor of Linux, without non-GPL modules (common firmware blobs are OK) and gave a pass / fail on that.

            • maccard 2 days ago

              I don’t think I am spreading FUD. Hardware issues with Linux off the well-trodden path are a well-known problem. X11 (still widely used on many distros) has a myriad of problems with multi-monitor setups - particularly when resolutions and refresh rates don’t match.

              You’re right that the manufacturers could provide better support, but they don’t.

      • wmf 2 days ago

        Unfortunately a lot of Windows devs are targeting 10-year-old versions.

  • johnny22 2 days ago

    > Clearly I'm wrong, but how is it that Windows can have portable apps of a similar size to their installable versions and Linux cannot?

    They can't depend on many APIs existing, or existing at the right version. Linux distros are made from a collection of various third-party projects, and distros just integrate those. Each of these third-party projects has its own development speed and its own ABI and API stability policies.

    Each distro also has its own development speed and release policy, which means they might have things that are either too new or too old. Most distros also try to avoid packaging multiple versions of the same project when they can, to ease maintenance.

    Heck, you can't even guarantee that you have the exact same libc. Most distros use glibc, but there are plenty of systems that use musl.

  • int_19h 2 days ago

    If you're targeting Windows, you can assume the following things to be present:

    - the entirety of Win32 API

    - all the Windows Runtime APIs

    - .NET Framework 4.7+

    This is a lot of functionality. For example, the list above includes four different widget toolkits alone (Win32, WinForms, WPF, WinRT XAML), several libraries to handle networking (including HTTP), USB, 2D and 3D graphics including text rendering, HTML renderer etc.

    And all of this has a highly stable ABI, so long as you do everything by the book. COM/WinRT and .NET provide a stable ABI for high-level object-oriented APIs above and beyond what the basic C ABI can offer.

    • surajrmal 2 days ago

      The real problem on Linux is the lack of a stable ABI for anything other than the kernel's UAPI. Perhaps one day we will standardize enough of the lower layers of Linux and provide real stability. There is definitely a line between packaging all of your dependencies and packaging none of them. We probably want something in the middle, or else perhaps more functionality which is currently provided via shared libraries should be moved behind stable IPC instead.

  • account-5 2 days ago

    I'm replying to myself in reply to everyone who replied to me.

    Thanks all for the explanations, much appreciated; I thought I was missing something. I really should have known, though. I've been using portable apps on Windows for over 20 years and remember .NET apps not being considered portable way back when, which are now considered portable since the runtime is on all modern Windows.

mcv 2 days ago

I don't know much about package managers, but instead of demanding every app use the same version of every library, or including all libraries in every app, wouldn't it make more sense to allow different versions of libraries to exist next to each other, with every app simply picking the most up-to-date version it supports?

I think that's how npm does it. Not that npm doesn't have its own dependency hell, but that's because different dependencies within the same application can end up requiring different versions of the same sub-dependency. But that's a problem only for developers.
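
Roughly, npm handles the conflict case by nesting; a sketch of the layout (package names hypothetical):

    app/node_modules/left-pad/                   # hoisted: both deps accept the same version
    app/node_modules/dep-a/node_modules/libfoo/  # dep-a gets its libfoo 1.x
    app/node_modules/dep-b/node_modules/libfoo/  # dep-b gets its libfoo 2.x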

  • Vilian a day ago

    ...that's literally what flatpak does? And it deduplicates everything too.

    • mcv 4 minutes ago

      Is it? But if its only duplication is different versions of the same dependency, and it never has real duplication, then what exactly is the problem?

      As I said, I don't know much about these things. I thought flatpak had the app and all its dependencies together in a single package.

wltr 2 days ago

That was so useless and the style was so bad, I’m pretty sure it was written with (if not by) LLMs. I’m not even sure if I’m disappointed to find this low-effort content here, or rather not surprised at all. I wish the content here were more interesting, but maybe I’d want to find some other community for that.

I mean, the comments are much more interesting than this piece of content; the content itself is almost offensive. At least the discussion is much more valuable than what I’ve just read by following that link.

ReptileMan 2 days ago

Why does it seem that we try to both avoid and poorly reinvent the static linker with every new technology and generation? Windows has been fighting DLL hell for 30 years now. Linux seems unable to produce an alternative to DLL hell. Not sure how the macOS world is.

butz 2 days ago

If you are space-conscious, you should try to select Flatpak apps that use the same runtime (Freedesktop, GNOME or KDE), and make sure all of them use exactly the same version of that runtime. Correct me if I'm wrong, but only two versions of a Flatpak runtime are supported at a time - current and previous. So while a transition to a newer runtime is happening, some applications are not upgraded at once, and the user ends up with more than one (and sometimes more than two) runtimes. In addition to higher disk space usage, one must account for the usual updates too: the more programs and runtimes you have, the more updates to download. Good thing at least updates are partial.
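
A quick way to spot duplicate runtime branches, assuming the flatpak CLI is available (column names may vary across flatpak versions):

    import subprocess
    from collections import Counter

    # Refs look like "org.gnome.Platform/x86_64/47"; counting refs that
    # share name/arch reveals runtimes installed in more than one branch.
    refs = subprocess.run(
        ["flatpak", "list", "--runtime", "--columns=ref"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    for runtime, n in Counter(r.rsplit("/", 1)[0] for r in refs).items():
        if n > 1:
            print(f"{runtime}: {n} branches installed")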

_Soulou 2 days ago

Something I have been wondering about with Flatpak is RAM usage. Sharing dynamic libraries allows loading them into RAM only once, while if I use Signal, Chromium and various other Flatpaks, all the libs will be loaded multiple times (often each with its own version). So maybe disk is cheap, but RAM may be more limited, which looks like a limit on generalizing this method of distribution. (You could tell me it's the same with containers.)

Am I right to think that? Has someone measured that difference on their workstation?
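
One way to get at this on Linux is PSS (proportional set size), which splits shared pages across the processes mapping them, so duplicated libraries inflate the total where genuinely shared ones would not; a rough sketch to compare setups:

    import glob

    # Sum PSS across all readable processes; run once under a mostly-native
    # setup and once under a mostly-Flatpak one, and compare.
    total_kb = 0
    for path in glob.glob("/proc/[0-9]*/smaps_rollup"):
        try:
            with open(path) as f:
                for line in f:
                    if line.startswith("Pss:"):
                        total_kb += int(line.split()[1])
        except OSError:
            pass  # process exited, or not ours to read
    print(f"total PSS: {total_kb / 1024:.0f} MiB")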

  • codedokode a day ago

    Web browsers (Signal and Chromium) waste so much memory that shared library size is not important. Also, shared libraries can always be dropped from memory.

  • Vilian a day ago

    The same libraries are shared between flatpak apps, and everything is deduplicated, so I don't think so, but someone would have to test.

gjsman-1000 2 days ago

It feels, to me, like the Linux desktop has become an overly complicated behemoth, never getting anywhere due to its weight.

I still feel the pinnacle for modern OS design might be Horizon, by Nintendo of all people. A capability-based microkernel OS that updates in seconds, fits into under 400 MB (WebKit and NVIDIA drivers included), is fast enough for games, and hasn’t had an unsigned code exploit in half a decade. (The OS is extremely secure, but NVIDIA’s boot code wasn’t.)

Why can’t we build something like that?

  • wk_end 2 days ago

    We can't build something quite like that because we demand a whole lot more from our general-purpose computing devices than we demand from our Switches.

    For instance, the Switch - and I don't know where in the stack this limitation lies - won't even let you run multiple programs that use the network. You can't, say, download a new game while playing another one that happens to have online connectivity - even if you aren't using it!

    On a computer, we want to be able to run dozens of programs at the same time, freely and seamlessly; we want them to be able to interoperate: share data, resources, libraries, you name it; we want support for a vast array of hardware and peripherals. And on and on.

    A Switch, fundamentally, is a device that plays games. Simpler requirements leads to simpler software.

    • gjsman-1000 2 days ago

      This isn’t actually true, as you can use the Nintendo Switch Online app, or the eShop, while downloading games.

      You just can’t play games at the same time one is downloading. That’s a deliberate storage-speed and network-use optimization rather than a software limitation. You can also tell this by the notifications about online players from the system, even as you are playing an online game.

      (Edit for posting too fast: The Switch does have a web browser, full WebKit even, which is used for the eShop and for logging in to captive portal Wi-Fi. Exploits are found occasionally, but the sandboxing has so far rendered these exploits mostly useless. Personally, I support this, as then Nintendo doesn’t have to worry about website parental controls.)

      • m4rtink 2 days ago

        But AFAIK it still does not have a web browser, because they are scared of all the WebKit exploits people used to enable custom software on the PlayStation Vita. So they released the Switch without a built-in web browser, even though it would be perfectly usable on the hardware and very useful in many cases.

  • yjftsjthsd-h 2 days ago

    > fits into under 400 MB (WebKit and NVIDIA drivers included),

    I don't think that's particularly hard if you only include support for one set of hardware and a single API/ABI for applications. Notably, no general-purpose OS does either of these things and people would probably not be pleased if one tried.

  • Vilian a day ago

    Really? Are you saying that an OS for a console is a good substitute for a desktop OS? At that point I'm gonna argue that the best OS is the Xbox 360's and we all should be using it.

  • anthk 2 days ago

    Alpine Linux?

    • gjsman-1000 2 days ago

      Close; but the security still isn’t anywhere close.

      On Alpine, if there’s a zero day in WebKit, you’d better check how your security is set up, and hope there’s not an escalation chain.

      On Horizon, dozens of bugs in WebKit, the Broadcom Bluetooth stack, and the games have been found; they are still found regularly. They are also boring and completely useless, because the sandboxing is so tight.

      You also can’t update Alpine in 5 seconds flat, even between a dozen major versions. That alone is amazing.

      • yjftsjthsd-h 2 days ago

        > Close; but the security still isn’t anywhere close. [...]

        I think a lot of the security comes down to what compromises you're willing to make. Horizon doesn't have to support the same breadth of hardware or software as we expect out of normal OSs, so they can afford to reinvent the world on a secure microkernel. If we want to maintain backwards-compatibility (and we do, because otherwise it's dead on arrival) then we have to take smaller steps. Of course, we can take those steps; if you care about security then you should run your browser in a sandbox (firejail, bubblewrap, docker/podman) at which point a zero-day in the browser is lower impact (not zero risk, true, but again I don't see any way to fix that without throwing out performance or compatibility).

        > You also can’t update Alpine in 5 seconds flat, even between a dozen major versions. That alone is amazing.

        I rather assumed that the Switch doesn't actually install OS updates in 5s either? The obvious way to do what they're doing is A/B updates in the background, after which you "apply" by rebooting, which Linux can do in 5s.
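
        To illustrate the A/B idea, here's a generic Python sketch (not Nintendo's actual mechanism; the paths and boot-flag format are invented). The slow image write happens in the background against the inactive slot; the "apply" step is just flipping a flag and rebooting:

            # Generic A/B-slot update sketch. All paths and the state-file
            # format are hypothetical; writing to a raw partition needs root.
            import json, shutil

            SLOTS = {"A": "/dev/disk/by-partlabel/system_a",
                     "B": "/dev/disk/by-partlabel/system_b"}

            def install_update(image_path, state_file="/var/lib/boot-state.json"):
                with open(state_file) as f:
                    state = json.load(f)          # e.g. {"active": "A"}
                inactive = "B" if state["active"] == "A" else "A"
                # Slow part, done in the background: stream the new image
                # into the slot we are NOT currently running from.
                with open(image_path, "rb") as src, open(SLOTS[inactive], "wb") as dst:
                    shutil.copyfileobj(src, dst)
                # Fast part ("apply"): flip the flag; takes effect on reboot.
                state["active"] = inactive
                with open(state_file, "w") as f:
                    json.dump(state, f)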

        • gjsman-1000 2 days ago

          You should pay attention to when a Switch updates. There is no A/B system: the installation literally takes 5 seconds after the download finishes, followed by a reboot. It makes other operating systems look downright shameful.

          As for backwards compatibility, Flatpak mostly solves it. The underlying system doesn’t necessarily need to be Linux, as long as it provides the right environment to run Flatpaks, maybe in a container.

  • jeroenhd 2 days ago

    Linux has supported online replacement for a while now, and can be compiled down to dozens of megabytes. Whatever cruft NVIDIA adds in their binary drivers will push the OS beyond 400 MiB, but building a Linux equivalent isn't exactly impossible.

    The problem is that it's a lot of work (just getting secure boot working is a massive pain in itself), and there are a lot of drivers you need to manually disable and settings to manually toggle to get a Switch-equivalent system. The Switch includes only the code paths necessary for the Switch, so anything that looks like a USB webcam should be completely useless. Bluetooth/Wi-Fi chipset drivers are necessary, but of course you only need the blobs for the specific hardware you're using.

    Part of Nintendo's security strategy is the inability to get binary code onto the system that wasn't signed by Nintendo. You can replicate this relatively easily: basic GPG-style signature checks, marking externally accessible mount points as non-executable, and only allowing execution of copies from those mounts after full signature verification. Also add some kind of fTPM-based encryption mechanism to make sure your device storage can't be altered. You then need to figure out some method of signing all the software any user of your OS could possibly need to execute, but if you're an OEM that shouldn't be impossible.
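
    As a sketch of that verify-then-install flow (the directory layout and key provisioning are hypothetical; only the "gpg --verify" call is real CLI):

        # Verify a detached GnuPG signature, then copy the binary onto the
        # one mount allowed to execute code. Paths are illustrative.
        import os, shutil, subprocess

        UNTRUSTED = "/media/usb/apps"    # mounted noexec: nothing runs here
        TRUSTED = "/opt/verified-apps"   # the only executable location

        def verify_and_install(binary, sig):
            # gpg exits non-zero on a bad signature, so check=True
            # aborts before anything is copied.
            subprocess.run(["gpg", "--verify", sig, binary], check=True)
            dest = os.path.join(TRUSTED, os.path.basename(binary))
            shutil.copy2(binary, dest)
            os.chmod(dest, 0o755)        # executable only after verification
            return dest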

    Once you've locked down the system enough, you can start enforcing whatever sandboxing you need on top of whatever UI you prefer so your games can't be hacked. Flatpak/Snap/Docker all provide APIs for this already.

    The tooling is all there, but there's no incentive for anyone to actually make it work. Some hardware OEMs do a pretty good job (Samsung's Tizen, for instance), but anything with a freely accessible debug or development interface is often quickly hacked. Most of the Linux user base wants to use the same OS on their laptop and desktop, have all of their components work, and keep the ability to run their own programs. To accomplish that, you have to give up a lot of security layers.

    I doubt Nintendo's kernel is that secure, but without access to the source code and without a way to attack it, exploiting it is much harder. Add to that the tendency of Nintendo to sue, harass, and intimidate people trying to get code execution on their devices, and they end up with hardware that looks pretty secure from the outside.

    Android and ChromeOS are also pretty solid operating systems in terms of general security, but their dependence on supporting a range of (vendor) drivers makes them vulnerable. Still, escalating from WebKit to root on Android is quite the challenge; you'll need a few extra exploits for that, and those will probably only work on specific phones running specific software.

    For what it's worth, you can get a pretty solid system by installing an immutable OS (Silverblue style) without root privileges. That still has some security mechanisms disabled for usability purposes, but it's a solid basis for an easily updateable, low-security-risk OS when installed correctly.
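
    The update flow on such systems is itself A/B-like: a new deployment is staged while you keep working and applied on reboot, with rollback to the previous one. For example, with the real rpm-ostree CLI (wrapped in Python here only for illustration):

        # Stage a new immutable deployment, apply it with a reboot,
        # and keep the previous deployment available as a rollback.
        import subprocess

        subprocess.run(["rpm-ostree", "upgrade"], check=True)   # stage
        subprocess.run(["systemctl", "reboot"], check=True)     # apply
        # If the new deployment misbehaves:
        # subprocess.run(["rpm-ostree", "rollback"], check=True)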

    • gjsman-1000 2 days ago

      For what it’s worth, Nintendo’s kernel has been reimplemented entirely as open-source, under the mesosphere/ folder of the Atmosphere project. Modded Switches use it over Nintendo’s original binary.

      The creator of it (SciresM) has publicly stated that, after reverse engineering and re-implementing both it and the 3DS kernel, he firmly believes it has zero security flaws remaining. The last 6 or so years without a single software exploit in Horizon capable of enabling piracy also bear this out.

      https://x.com/SciresM/status/1327631019583836160?ref_src=tws...

      He also gives the Switch 2, assuming it inherits the NVIDIA boot-bug fix and adds new anti-glitching capabilities, possibly over a decade of being crack-free at this rate. Even then, it will almost certainly take a hardware mod.

haunter 2 days ago

What made Flatpaks more popular than AppImage? I thought the latter was “vastly” superior and really portable?

  • kalaksi 2 days ago

    I don't claim to know the answer, but Flatpaks have easier distribution (Flathub), package management, sandboxing, and shared runtimes, and they are also portable in the sense that they work across Linux distros. AppImage is a simple and even more portable format, but not much else, so I guess it's superior if you only want to maximize portability.

  • Vilian a day ago

    It depends. Flatpak is a lot more portable: it bundles everything, yet the libraries are shared (something AppImage can't do). It also updates automatically, is more secure with a stronger sandbox, and integrates better with the system.

    • akimbostrawman 11 hours ago

      >it's more secure, and have a stronger sandbox

      AppImages don't have a sandbox at all, they have full access to everything the user has.
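
      To see the difference concretely: a Flatpak's sandbox can be inspected and tightened per app, while an AppImage just runs as an ordinary unconfined process. A quick Python sketch using real flatpak commands (the app ID is only an example):

          # Inspect, then tighten, one app's sandbox. There is no
          # AppImage equivalent. org.gimp.GIMP is an example app ID.
          import subprocess

          app = "org.gimp.GIMP"
          # Print the permissions granted (filesystem, sockets, devices).
          subprocess.run(["flatpak", "info", "--show-permissions", app], check=True)
          # Revoke home-directory access for this app, for this user only.
          subprocess.run(["flatpak", "override", "--user", "--nofilesystem=home", app], check=True)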

Havoc a day ago

They took the “works on my computer - then we’ll just ship my computer” meme literally

pdimitar 2 days ago

[flagged]

  • dang 2 days ago

    Can you please follow the site guidelines when posting to HN? You broke them badly in this thread, and we've had to ask you this many times before.

    https://news.ycombinator.com/newsguidelines.html

    • pdimitar 2 days ago

      Apparently I did. Seems I underestimated the impact of what I perceived as a small rant.

      [no longer replying non-constructively to anyone in this sub-thread]

  • renewiltord 2 days ago

    But I don’t want to solve actual problems. I want to write the 3689th lisp interpreter in the world.

    • pdimitar 2 days ago

      Your right and prerogative, obviously.

      But out there, a stranger you care nothing about will think less of you.

      Wish I had that free time and freedom though... The things I would do.

      • renewiltord 2 days ago

        You can have that free time. Stop posting on HN and write some code. I can do both but if I couldn’t I’d pick the latter.

  • yjftsjthsd-h 2 days ago

    Okay, fair enough. Which part are you working on and how far have you gotten?

    • pdimitar 2 days ago

      Elixir -> Rust -> SQLite library (FFI bridge). The FFI library is completed (without some of SQLite's advanced features that I don't deem important for a v1) and I am just adding more tests now, though the integration layer with Elixir's de-facto data mapper library (Ecto) has not been started yet. Which means that an announcement would be met with crickets, hence I'll work on that integration layer VerySoon™. Otherwise the whole thing wouldn't ever help anyone.

      I do feel strongly about it, as I believe most apps don't need a full-blown database server. I've seen what SQLite can do, and to me it's still a hidden gem (or a blind spot) for many programmers.

      So I am putting my sweat where my mouth is and will provide that to the Elixir community, for 100% free, no one-time payments and no subscription software.

      And yes, I do get annoyed by privileged people casually working on completely irrelevant stuff that's never going to move anything forward. Obviously everyone has the right to do whatever they like in their free time, but I can't square that with announcing it on HN, and those announcements do annoy me greatly. "Oh look, it's just a hobby project but I want you all to look at it!" -- I don't know, it does not make any sense to me. It seems pretentious and ego-serving, but I could be looking at it wrong. Triggers tend to remove nuance, after all.

  • teddyh 2 days ago

    OK, say, for the sake of argument, that DwarFS solves the disk space problem. What about the RAM problem?

    • pdimitar 2 days ago

      Addressing this would require me to be interested in all the details (which would get me halfway down the road to being a contributor, which I'm not aiming for). I was responding to the central point of the article, plus ranting a bit.

      I'm simply getting annoyed by the low effort that is put into such prominent open-source software.

      And here I am, working in my corner on a soon-to-be-released open-source library that will likely see 100 users in a year at most... agonizing over increasing test coverage, which actually paid off: it uncovered bugs, and I fixed them. And I enlisted LLMs and professional acquaintances to minimize or eliminate memory copying in the FFI part of the code...

      ...and the maintainers of some very prominent open-source software have not even bothered to start looking at the lowest-hanging fruit: reducing disk usage.

      That frustrated me and I expressed it.