I ran a moderately large open-source service, and my chronic back pain was cured the day I stopped maintaining the project.
Working for free is not fun. Having a paid offering with a free community version is not fun. Ultimately, dealing with people who don't pay for your product is not fun. I learnt this the hard way and I guess the MinIO team learnt this as well.
Completely different situations. None of the MinIO team worked for free. MinIO is a COSS company (commercial open source software). They give a basic version of it away for free hoping that some people, usually at companies, will want to pay for the premium features. MinIO going closed source is a business decision and there is nothing wrong with that.
I highly recommend SeaweedFS. I used it in production for a long time before partnering with Wasabi. We still run SeaweedFS for scorching-hot, 1 GiB/s colocated object storage, but Wasabi is our bread-and-butter object storage now.
> > Working for free is not fun. Having a paid offering with a free community version is not fun. Ultimately, dealing with people who don't pay for your product is not fun.
> Completely different situations. None of the MinIO team worked for free. MinIO is a COSS company (commercial open source software).
MinIO is dealing with two out of the three issues, and the company is partially providing work for free. How is that "completely different"?
The MinIO business model was a freemium model (well, Open Source + commercial support, which is slightly different). They used the free OSS version to drive demand for the commercially licensed version. It’s not like they had a free community version with users they needed to support thrust upon them — this was their plan. They weren’t volunteers.
You could argue that they got to the point where the benefit wasn’t worth the cost, but this was their business model. They would not have gotten to the point where they could have a commercial-only operation without the adoption and demand generated by the OSS version.
Running a successful OSS project is often a thankless job. Thanks for doing it. But this isn’t that.
> Running a successful OSS project is often a thankless job. Thanks for doing it. But this isn’t that.
No, even if you are being paid, it's a thankless, painful job to deal with demanding, entitled free users. It's worse if you are not being paid, but I'm not sure why you are asserting dealing with bullshit is just peachy if you are being paid.
They wanted adoption and a funnel into their paid offering. They were looking out for their own self-interest, which is perfectly fine; however, it’s very different from the framing many are giving in this thread of a saintly company doing thankless charity work for evil homelab users.
Where did I say there were only downsides? There are definitely upsides to this business model, I'm just refuting the idea that because there are for profit motives the downsides go away.
I hate when people mistreat the people who provide services to them: it doesn't matter if it's a volunteer, an underpaid waitress, or a well-paid computer programmer. The mistreatment doesn't become "ok" because the person being mistreated is paid.
“I don’t want to support free users” is completely different than “we’re going all-in on AI, so we’re killing our previous product for both open source and commercial users and replacing it with a new one”
I can also highly recommend SeaweedFS for development purposes, where you want to test general behaviour against S3-compatible storage. That's what I mainly used MinIO for before, and SeaweedFS, especially with its new `weed mini` command that runs all the services together in one process, is a great replacement for local development and CI purposes.
Ironically rustfs.com is currently failing to load on Firefox, with 'Uncaught TypeError: can't access property "enable", s is null'. They shoulda used a statically checked language for their website...
It seems like the issue may be that I have WebGL disabled. The console includes messages like "Failed to create WebGL context: WebGL creation failed:
* AllowWebgl2:false restricts context creation on this system."
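For what it's worth, the crash described above ('can't access property "enable", s is null') is the classic pattern of calling a method on a WebGL context that was never created: `canvas.getContext("webgl")` returns `null` when WebGL is disabled or unavailable. A minimal sketch of the defensive check (the function name and return shape are invented for illustration, not taken from rustfs.com's actual code):

```typescript
// Sketch: getContext("webgl") returns null when WebGL is disabled,
// so guard before calling methods like enable() on the context.
type GlLike = { enable(cap: number): void; DEPTH_TEST: number };

function initRenderer(gl: GlLike | null): { mode: "webgl" | "fallback" } {
  if (gl === null) {
    // WebGL unavailable (e.g. AllowWebgl2:false): degrade gracefully
    // instead of crashing later with "can't access property ... is null".
    return { mode: "fallback" };
  }
  gl.enable(gl.DEPTH_TEST); // safe: gl is known non-null on this path
  return { mode: "webgl" };
}

// In a browser you would call something like:
//   initRenderer(canvas.getContext("webgl") as GlLike | null);
```

The point is only that the null path has to exist somewhere; minified bundles that skip it produce exactly the `s is null` error quoted above.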
I’ve never heard of SeaweedFS, but the Ceph cluster storage system has an S3-compatible layer (Object Gateway).
It’s used by CERN to make Petabyte-scale storage capable of ingesting data from particle collider experiments and they're now up to 17 clusters and 74PB which speaks to its production stability. Apparently people use it down to 3-host Proxmox virtualisation clusters, in a similar place as VMware VSAN.
Ceph has been pretty good to us for ~1PB of scalable backup storage for many years, except that it’s a non-trivial system administration effort and needs good hardware and networking investment, and my employer wasn't fully backing that commitment. (We’re moving off it to Wasabi for S3 storage.) It also leans more towards data integrity than performance: it's great at massively parallel workloads and not so rapid at single-thread, high-IOPS ones.
Ceph is a non-starter for me because you cannot have an existing filesystem on the disk. Previously I used GlusterFS on top of ZFS and made heavy use of gluster's async geo-replication feature to keep two storage arrays in sync that were far away over a slow link. This was done after getting fed up with rsync being so slow and always thrashing the disks having to scan many TBs every day.
While there is a geo-replication feature for Ceph, I cannot keep using ZFS at the same time, and gluster is no longer developed, so I'm currently looking for an alternative that would work for my use case if anyone knows of a solution.
> "Ceph is a non-starter for me because you cannot have an existing filesystem on the disk. Previously I used GlusterFS on top of ZFS"
I became a Ceph admin by accident so I wasn't involved in choosing it and I'm not familiar with other things in that space. It's a much larger project than a clustered filesystem; you give it disks and it distributes storage over them, and on top of that you can layer things like the S3 storage layer, its own filesystem (CephFS) or block devices which can be mounted on a Linux server and formatted with a filesystem (including ZFS I guess, but that sounds like a lot of layers).
> "While there is a geo-replication feature for Ceph"
Several; the data cluster layer can do it in two ways (stretch clusters and stretch pools), the block device layer can do it in two ways (journal based and snapshot based), the CephFS filesystem layer can do it with snapshot mirroring, and the S3 object layer can do it with multi-site sync.
I've not used any of them; they all have their trade-offs, and this is the kind of thing I was thinking of when saying it requires more skills and effort. For simple storage requirements, use a traditional SAN or a server with a bunch of disks, or pay a cheap S3 service to deal with it. Only if you have a strong need for scalable clusters, a team with storage/Linux skills, a pressing need to do it yourself, or a use for many of its features would I go in that direction.
> the software is under AGPL. Go forth and forkify.
No, what was minio is now aistor, a closed-source proprietary software. Tell me how to fork it and I will.
> they wanted to be the only commercial source of the software
The choice of AGPL tells me nothing more than what is stated in the license. And I definitely don't intend to close the source of any of my AGPL-licensed projects.
The fact that new versions aren't available does nothing to stop you from forking versions that are. Or were - they'll be available somewhere, especially if it got packaged for OS distribution.
There's nothing wrong at all with charging for your product. What I do take issue with, however, is convincing everyone that your product is FOSS, waiting until people undertake a lot of work to integrate your product into their infrastructure, and then doing a bait-and-switch.
Just be honest from the start that your product will eventually abandon its FOSS licence. Then people can make an informed decision. Or, if you haven't done that, do the right thing and continue to stand by what you originally promised.
> What I do take issue with, however, is convincing everyone that your product is FOSS, waiting until people undertake a lot of work to integrate your product into their infrastructure, and then doing a bait-and-switch.
But FOSS means “this particular set of source files is free to use and modify”. It doesn’t include “and we will keep developing and maintaining it forever for free”.
It’s only different if people, in addition to the FOSS license, promise any further updates will be under the same license and then change course.
And yes, there is a gray area where such a promise is sort-of implied, but even then, what do you prefer, the developers abandoning the project, or at least having the option of a paid-for version?
> what do you prefer, the developers abandoning the project, or at least having the option of a paid-for version?
It's not a binary choice. I prefer the developers releasing the software under a permissive license. I agree that relying on freemium maintenance is naive. The community source lives on; perhaps the community should fork it and run with it for the common good, absorbing the real costs of maintenance.
While I agree with the sentiment, keep in mind that circumstances change over the years. What made sense (and what you've believed in) a few years ago may be different now. This is especially true when it comes to business models.
What typically happens is that your product enters the mainstream, with integrations deep enough that users are virtually obliged to get a license, at which point a license change would yield millions.
When backed by a company, there is an ethical obligation to keep up at least maintenance. Of course, legally they can do what they wish. It isn't unfair to call it bad practice.
If you offer a tie-in supposedly free of charge, without warning that it will end once it no longer serves the party's profit purpose, then yes.
Ethics are not obligations; they are moral principles. Not having principles doesn't send you to prison, which is why this isn't law. It makes you lose moral credit, though.
That is ridiculous. If you buy a sandwich for a homeless person, you do not need to warn them that you won't give them another one tomorrow. If you think generosity creates an obligation of servitude, you have your morals backwards.
However, almost every open source license actually DOES warn that support may end. See the warranty clause.
> If offering a tie in thing supposedly free of charge without warning that would end once it serves a party less profit purpose then yes
Claiming that you’re entitled to free R&D forever because someone once gave you something of value seems like a great way to ensure that nobody does that again. You got over a decade of development by a skilled team, it’s not exactly beyond the pale that the business climate has changed since then.
Those might be your moral principles, but others reject this nonsense of an obligation to perpetual free labor you think you're entitled to, and don't grant you this moral high ground you assume you have.
> When backed by a company there is an ethical obligation to keep, at least maintenance.
You're saying that a commercial company has an ethical obligation to do work for you in future, for free? That doesn't follow from any workable ethical system.
Everyone is quick to assert rights granted by the license terms, and quick to say the authors should have chosen a better license from the start when the license doesn't fit the current situation.
License terms don't end there. There is a no-warranty clause in almost every open source license, too, and it is as important as the other parts of the license. There are no promises or guarantees of updates or future versions.
They're not saying they violated the license, they're saying they're assholes. It may not be illegal to say you'll do something for free and then not do it, but it's assholish, especially if you said it to gain customers.
They gave code for free, under open source, but you call them assholes if they do not release more code for free. So who is the asshole here? You or them?
Continued updates is not and never has been a part of FOSS, either implicitly or explicitly, you simply have a misconception. FOSS allows you to change the software. That's what it has always meant.
There's no broken promise though. It's the users who decide+assume, on their own going in, that X project is good for their needs and they'll have access to future versions in a way they're comfortable with. The developers just go along with the decision+assumption, and may choose to break it at any point. They'd only be assholes if they'd explicitly promised the project would unconditionally remain Y for perpetuity, which is a bs promise nobody should listen to, cuz life.
I think this is where the problem/misunderstanding is. There's no "I will do/release" in OSS unless promised explicitly. Every single release/version is "I released this version. You are free to use it". There is no implied promise for future versions.
Released software is not clawed back. Everyone is free to modify (per license) and/or use the released versions as long as they please.
I'm noticing this argument a lot these days, and I think it stems from something I can't define - "soft" vs. "hard" or maybe "high-trust" vs "low-trust".
I always warned people that if they "buy" digital things (music, movies) it's only a license, and can be taken away. And people intellectually understand that, but don't think it'll really happen. And then years go by, and it does, and then there's outrage when Amazon changes Roald Dahl's books, or they snatch 1984 right off your kindle after you bought it.
So there's a gap between what is "allowed" and what is "expected". I find this everywhere in polite society.
Was just talking to a new engineer on my team, and he had merged some PRs, but ignored comments from reviewers. And I asked him about that, and he said "Well, they didn't block the PR with Request Changes, so I'm free to merge." So I explained that folks won't necessarily block the PR, even though they expect a response to their questions. Yes, you are allowed to merge the PR, but you'll still want to engage with the review comments.
I view open source the same way. When a company offers open source code to the community, releasing updates regularly, they are indeed allowed to just stop doing that. It's not illegal, and no one is entitled to more effort from them. But at the same time, they would be expected to engage responsibly with the community, knowing that other companies and individuals have integrated their offering, and would be left stranded. I think that's the sentiment here: you're stranding your users, and you know it. Good companies provide a nice offramp when this happens.
Customers are the ones that continue to pay. If they continue to pay they will likely receive maintenance from the devs. If they don't, they are no longer or never have been customers.
It would be interesting to see if there could be a sustainable OSS model where customers are required to pay for the product, and that was the only way to get support for it as well.
Even if the source was always provided (and even if it were GPL), any bug reports/support requests etc. would be limited to paying customers.
I realize there is already a similar model where the product/source itself is always free and a company behind it charges for support... but in those cases they are almost always providing support/accepting bug reports for free as well. And maybe having the customer pay to receive the product itself in the first place might motivate the developers to help more than if they were just paying for a support plan.
Well, I think this is what SchedMD do with Slurm? GPL code. You can sign up to the bug tracker & open an issue, but if you don't have a support contract they close the issue. And only those customers get advanced notice of CVEs etc. I'd expect nearly everyone who uses it in production has a support contract.
The only meaningful informed decision, but sadly a much less known one (and I think we should talk about and insist on it more), is to be wary if you see a CLA. Not all CLAs do this, but most perform copyright assignment, and that's detrimental to the long-term robustness of open source.
Having a FOSS license is NOT enough. Ideally the copyright should be distributed across all contributors. That's the only way to make overall consensus a required step before relicensing (except for reimplementation).
Pick FOSS projects without CLAs that assign copyright to an untrusted entity (a few exceptions apply, e.g. the FSF in the past).
You should be wary always. CLA or not, nothing guarantees that the project you depend on will receive updates, not even if you pay for them and the project is 100% closed source.
What you’re suggesting is perpetuating the myth that open source means updates available forever for free. This is not and never has been the case.
Was I, really? Maybe, if you feel so... but I'd have to say that I had no idea.
What I'm suggesting is that a FOSS project without CLAs and with a healthy variety of contributors belongs to the broad open source community that forms around it, while a FOSS project with such a CLA is open to a bait-and-switch scheme, because the ownership stays in a single hand that can change course at a moment's notice.
Whether the project stops receiving updates or not, is an orthogonal matter.
>Just be honest since the start that your product will eventually abandon its FOSS licence. Then people can make an informed decision.
"An informed decision" is not a black or white category, and it definitely isn't when we're talking about risk pricing for B2B services and goods, like what MinIO largely was for those who paid.
Any business with financial modelling worth their salt knows that very few things which are good and free today will stay that way tomorrow. The leadership of a firm you transact with may or may not state this in words, but there are many other ways to infer the likelihood of this covertly by paying close attention.
And, if you're not paying close attention, it's probably just not that important to your own product. What risks you consider worth tailing are a direct extension of how you view the world. The primary selling point of MinIO for many businesses was, "it's cheaper than AWS for our needs". That's probably still true for many businesses and so there's money to be made at least in the short term.
"Informed decisions" mean you need to have the information.
Like with software development, we often lack the information on which we have to decide architectural, technical or business decisions.
The common solution for that is to embrace this. Defer decisions. Make changing easy once you do receive the information. And build "getting information" into the fabric. We call this "Agile", "Lean", "data driven" and so on.
I think this applies here too.
Very big chance that MinIO team honestly thought that they'd keep it open source but only now gathered enough "information" to make this "informed decision".
Isn't this normal sales practice for many products anyhow? One attracts a customer with unreasonable promises and features, gets them to sign a deal to integrate, and then issues appear in production that make you realize you will need to invest more.
When you start something (a startup, a FOSS project, damn, even a marriage) you might start with the best intentions and then learn, change, or lose interest. I find it unreasonable to "demand" clarity "at the start" because there is no such thing.
Turning it around: any company that adopts a FOSS project should be honest and pay for something if it does not accept the idea that at some point the project will change course (which, obviously, does not guarantee much, because even if you pay for something they can decide to shut it down).
> I find it unreasonable to "demand" clarity "at the start" because there is no such thing.
Obviously you cannot "demand" stuff, but you can do your due diligence as the person who chooses a technical solution. Some projects have more clarity than others; for example, the Linux Foundation or CNCF are basically companies sharing costs for stuff they all benefit from, like Linux or Prometheus monitoring, and it is highly unlikely they'd do a rug pull.
On the other end of the spectrum there are companies with a "free" version of a paid product and the incentive to make the free product crappier so that people pay for the paid version. These should be avoided.
At this point I don’t trust any company that offers a core free tool with an upsell. Trials or limited access are one thing, but be skeptical of a free-forever product that needs active maintenance.
It’s been tough for us at https://pico.sh trying to figure out the right balance between free and paid. Our north star is: how much does it cost us to maintain and support? If the answer scales with the number of users we have, then we charge for it. We also have a litmus test for abuse: if someone can abuse the service, we put it behind a paywall.
I hear this perspective a lot in relation to open source projects.
What it fails to recognize is the reality that life changes. Shit happens. There's no way to predict the future when you start out building an open source project.
(Coming from having contributed to and run several open source projects myself)
FOSS is not a moral contract. People working for free owe nothing to no one. You got what's on the tin - the code is as open source once they stop as when they started.
The underlying assumption of your message is that you are somehow entitled to their continued labour which is absolutely not the case.
That was not the expectation they set when they started, and they did a lot to lure many into the ecosystem. When you release it free, wait for the momentum to build, and then cut it off, that is something else. And worse, they did it in a very short time. Check out Elasticsearch: the same route, but they did not abandon the 7.x release like this.
I know all about ElasticSearch, MongoDB, Redis, etc. Yes, what they did sucks. No, it doesn't make the maintainers bad or anything. It's still on the user to know that anything can happen to that spiffy project they've been using for a while, and so be prepared to migrate at any time.
Maybe this is the case, but why is your presumption of entitlement to free labor of others the assumed social contract, the assumed "moral" position, rather than the immoral one?
Why is the assumed social contract that is unwritten not that you can have the free labor we've released to you so far, but we owe you nothing in the future?
There's too much assumption of the premise that "moral" and "social contract" are terms that make the entitled demands of free-loaders the good guys in this debate. Maybe the better "morality" is the selfless workers giving away the product of their labor for free are the actual good guys.
Where is this mythical social contract found? I stand by my point: it's a software license, not a marriage.
Free users certainly would like it to be a social contract like I would like to be gifted a million dollars. Sadly, I still have to work and can't infinitely rely on the generosity of others.
I always preferred people who didn’t, when I worked in retail. It generates a nice chill task (wandering around the parking lot looking for carts). But if you want to do a favor for the faceless retailer, go for it. Mostly I chuck my cart in the corral to get it out of my way, but this seems more like a morally neutral action to me.
Your analogy doesn't make sense. You get a benefit from using the shopping cart, and bringing it back is expected as part of the exchange. You bring the cart back to where you took it, which is a low-effort commitment entirely proportional to what you got from it.
Free software developers are gifting you something. Expecting indefinite free work is not mutual respect. That's entitlement.
The common is still there. You have the code. Open source is not a perpetual service agreement. It is not indentured servitude to the community.
Stop trying to guilt trip people into giving you free work.
MinIO accepted contributions from people outside the company who did work on it for free, usually because they expected that minio would keep the software open source.
In this context the social contract would be an expectation that specifically software developers must return the shopping cart for you, but you would never expect the same from cashiers, construction workers, etc.
If the software developer doesn't return your cart, he betrayed the social contract.
Expectations are maybe fine maybe not, but it's funny that people can slap the word moral onto their expectation of others being obligated to do free work for them, and it's supposed to make them be the good guys here.
Why do you presume to think your definition of morals is shared by everyone? Why is entitlement to others labor the moral position, instead of the immoral position?
Everyone is keying on forced free labor, but that's not really the proposed solution when an open-source project ends. The fact that it ends is a given, the question then is what to do about all the users. Providing an offramp (migration tools that move to another solution that's similar, or even just suggested other solutions, even including your own commercial offering) before closing up shop seems like a decent thing to do.
Nobody sensible is upset when a true FOSS “working for free” person hangs up their boots and calls it quits.
The issue here is that these are commercial products that abuse the FOSS ideals to run a bait and switch.
They look like they are open source in their growth phase then they rug pull when people start to depend on their underlying technology.
The company still exists and still makes money, but they stopped supporting their open source variant to try and push more people to pay, or they changed licenses to be more restrictive.
It has happened over and over, just look at Progress Chef, MongoDB, ElasticSearch, Redis, Terraform, etc.
In this particular case, it's the fault of the "abused" for even seeing themselves as such in the first place. Many times it's not even a "bait-and-switch", but reality hitting. But even if it was, just deal with it and move on.
This is definitely the case because the accusations and supposed social contract seem extremely one-sided towards free riding.
Nobody here is saying they should donate the last version of MinIO to the Apache software foundation under the Apache license. Nobody is arguing for a formalized "end of life" exit strategy for company oriented open source software or implying that such a strategy was promised and then betrayed.
The demand is always "keep doing work for me for free".
I’m not even claiming that people who feel that a social contract has been violated are correct.
I’m saying that the open source rug pull is at this point a known business tactic that is essentially a psychological dark pattern used to exploit.
These companies know they’ll get more traction and sales if they have “open source” on their marketing material. They don’t/never actually intend to be open source long term. They expect to go closed source/source available business lines as soon as they’ve locked enough people into the ecosystem.
Open source maintainers/organizations like the GNU project are happy and enthusiastic about delivering their projects to “freeloaders.” They have a sincere belief that having source code freedom is beneficial to all involved. Even corporate project sponsors share this belief: Meta is happy to give away React because they know that ultimately makes their own products better and more competitive.
I’m not even claiming that the “abused” are correct to be upset.
The core of my claim is that it’s a shady business tactic because the purpose of it is to gain all the marketing benefits of open source on the front-end (fast user growth, unpaid contributions from users, “street cred” and positive goodwill), then change to source available/business license after the end of the growth phase when users are locked in.
This is not much different than Southwest Airlines spending decades bragging about “bags fly free” and no fees only to pull the rug and dump their customer goodwill in the toilet.
Totally legal to do so, but it’s also totally legal for me to think that they’re dishonest scumbags.
Except in this case, software companies, in my opinion, have this rug pull plan in place from day 1.
It's part of the due diligence process for users to decide if they can trust a project.
I use a few simple heuristics:
- Evaluate who contributes regularly to a project. The more diverse this group is, the better. If it's a handful of individuals from one company, see the other points. This doesn't have to be a showstopper. If it's a bit niche and only a handful of people contribute, you might want to think about what happens when those people stop doing that (like is happening here).
- Look at required contributor agreements and license. A serious red flag here is if a single company can effectively decide to change the license at any point they want to. Major projects like Terraform, Redis, Elasticsearch (repeatedly), etc. have exercised that option. It can be very disruptive when that happens.
- Evaluate whether the license allows you to do what you need to do. Licenses like the AGPLv3 (which min.io used here) can be problematic on that front and come with restrictions that corporate legal departments generally don't like. In the end, choosing to use software is a business decision. Just make sure you understand what you are getting into and that it is OK with your company and compatible with business goals.
- Permissive licenses (MIT, BSD, Apache, etc.) are popular with larger companies and widely used on GitHub. They provide neutral ground for competitors to collaborate. One aspect you should be aware of is that the very feature that makes them popular also means that contributors can take the software and create modifications under a different license. They generally can't relicense existing software retroactively. But companies like Elasticsearch have switched from Apache 2.0 to closed source, and recently to AGPLv3. OpenSearch remains Apache 2.0 and has a thriving community at this point.
- Look at the wider community behind a project. Who runs it; how professional are they (e.g. a foundation), etc. How likely would it be to survive something happening to the main company behind a thing? Companies tend to be less resilient than the open source projects they create over time. They fail, are subject to mergers and acquisitions, can end up in the hands of hedge funds, or big consulting companies like IBM. Many decades old OSS projects have survived multiple such events. Which makes them very safe bets.
None of these points have to be decisive. If you really like a company, you might be willing to overlook their less than ideal licensing or other potential red flags. And some things are not that critical if you have to replace them. This is about assessing risk and balancing the tradeoff of value against that.
Forks are always an option when bad things happen to projects. But that only works if there's a strong community capable of supporting such a fork and a license that makes that practical. The devil is in the details. When Redis announced their license change, the creation of Valkey was a foregone conclusion. There was just no way that wasn't going to happen. I think it only took a few months for the community to get organized around that. That's a good example of a good community.
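The heuristics above could even be sketched as a toy scoring function. Every field name, threshold, and rule below is invented for illustration; real due diligence is a judgment call, not a formula:

```typescript
// Toy sketch of the due-diligence heuristics as a rug-pull risk score.
// All thresholds are invented; treat this as a checklist, not a formula.
interface ProjectSignals {
  contributorCompanies: number; // distinct employers among regular contributors
  claAssignsCopyright: boolean; // can one company unilaterally relicense?
  foundationGoverned: boolean;  // neutral governance (Apache, CNCF, ...)
  licensePermissive: boolean;   // MIT/BSD/Apache-style: forking stays practical
}

function rugPullRisk(p: ProjectSignals): "low" | "medium" | "high" {
  if (p.claAssignsCopyright && p.contributorCompanies <= 1) {
    return "high"; // single copyright holder: relicensing is one decision away
  }
  if (p.foundationGoverned || (p.contributorCompanies >= 3 && p.licensePermissive)) {
    return "low"; // diverse ownership makes a community fork (Valkey, OpenSearch) viable
  }
  return "medium";
}
```

A single-vendor AGPL project with a copyright-assigning CLA (the MinIO shape) scores "high" here, while a foundation-governed project with many corporate contributors scores "low", which matches how the Redis/Valkey and Elasticsearch/OpenSearch forks played out.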
The other heuristic I would add is one that works for commercial/closed source too: evaluate the project exactly as it is now. Do you still want to use it even if 0% of the roadmap ever materializes?
With open source, the good news is that the version you currently have will always be available to you in perpetuity - including all the bugs, missing features, and security flaws. If you're OK with that, then the community around the thing doesn't even matter.
Easy. If you see open source software maintained by a company, assume they will make it closed source or enshittify the free version. If it's maintained by an individual, assume they will get bored with it. Plan accordingly. It may not happen, and then you'll be pleasantly surprised.
I don’t feel that way at all. I’ve been maintaining open source storage systems for a few years. I love it. Absolutely love it. I maintain TidesDB, a storage engine. I also have back pain, but that doesn’t mean you can’t do what you love.
Isn't most (presumably the overwhelming majority) of open source development funded by for-profit companies? That has been the case for quite a while, too...
A little side project might grow and become a chore, or untenable, especially with some in the community expecting handouts without respect.
Case in point: Reticulum. Also, Nolan Lawson has a very good blog post on this.
I don't think your position is reasonable, even if I believe you just want to say that writing open source shouldn't be a main source of income. I think it's perfectly okay to be rewarded for time, skill, effort, and the software itself.
It's not about the money. For large open source projects you need to allocate time to deal with the community. For someone that just wants to put code out there, that is very draining and unpleasant.
> for someone that just wants to put code out there that is very draining and unpleasant.
I never understood this. Then why publish the code in the first place? If the goal is to help others, then the decent thing would be to add documentation and support the people who care enough to use your project. This doesn't mean bending to all their wishes and doing work you don't enjoy, but a certain level of communication and collaboration is core to the idea of open source. Throwing some code over the fence and forgetting about it is only marginally better than releasing proprietary software. I can only interpret this behavior as self-serving for some reason (self-promotion, branding, etc.).
Most open source projects start small. The author writes code that solves some issue they have. Likely, someone else has the same problem and they would find the code useful. So it's published. For a while it's quiet, but one day a second user shows up and they like it. Maybe something isn't clear or they have a suggestion. That's reasonable and supporting one person doesn't take much.
Then the third user shows up. They have an odd edge case and the code isn't working. Fixing it will take some back and forth but it still can be done in a respectable amount of time. All is good. A few more users might show up, but most open source projects will maintain a small audience. Everyone is happy.
Sometimes, projects keep gaining popularity. Slowly at first, but the growth in interest is there. More bug reports, more discussions, more pull requests. The author didn't expect it. What was doable before takes more effort now. Even if the author adds contributors, they are now a project and a community manager. It requires different skills and a certain mindset. Not everyone is cut out for this. They might even handle a small community pretty well, but at a certain size it gets difficult.
The level of communication and collaboration required can only grow. Not everyone can deal with this and that's ok.
All of that sounds reasonable. But it also doesn't need to be a reason to find maintaining OSS very draining or unpleasant, as GP put it.
First of all, when a project grows, its core team of maintainers can also grow, so that the maintenance burden can be shared. This is up to the original author(s) to address if they think their workload is a problem.
Secondly, and coming back to the post that started this thread, the comment was "working for free is not fun", implying that if people paid for their work, then it would be "fun". They didn't complain about the amount of work, but about the fact that they weren't financially compensated for it. These are just skewed incentives to have when working on an open source project. It means that they would prioritize support of paying customers over non-paying users, which indirectly also guides the direction of the project, and eventually leads to enshittification and rugpulls, as in MinIO's case.
The approach that actually makes open source projects thrive is to see it as an opportunity to build a community of people who are passionate about a common topic, and deal with the good and the bad aspects as they come. This does mean that you will have annoying and entitled users, which is the case for any project regardless of its license, but it also means that your project will be improved by the community itself, and that the maintenance burden doesn't have to be entirely on your shoulders. Any successful OSS project in history has been managed this way, while those that aren't remain a footnote in some person's GitHub profile, or are forked by people who actually understand open source.
Honestly, I don't see how you're adding anything here other than inflated expectations and a strange anti-individual pro-mega-corporation ideology.
Fundamentally your post boils down to this: All contributions should be self funded by the person making them.
This might seem acceptable at first glance, but it has some really perverse implications that are far worse than making a product customers are willing to pay for.
To be granted the right to work on an open source project, you must have a day job that isn't affiliated with the project. You must first work eight hours a day to ensure your existence; only after those eight hours are up are you allowed to work on the open source project.
Every other form of labor is allowed to charge money: even the street cleaner, or the elderly janitor topping up his pension. Everyone except the open source developer. And that "everyone" includes people who work on behalf of a company that directly earns money off the open source project, including software developers hired by said company, even if those developers work full time on the open source project. This means you can run into absurd scenarios like SF salaries being paid to contributors while the maintainer, who might be happy with an average Polish developer salary, doesn't even get the little amount he would need to live a hermit's life doing nothing but working on the project. No, that maintainer is expected, obligated even, to keep working his day job to then be granted the privilege of working for free.
Somehow the maintainer is the selfish one for wanting his desire to exist to be treated as equally important as other people's desire for the project to exist. The idea that people value the project but not the process that brings it about sounds deeply suspect.
Your complaint that prioritizing paid features is bad is disturbing, because of the above paragraph. The maintainer is expected to donate his resources for the greater good, but in instances where he could acquire resources to donate to the public at large through the project itself, he must not do so, because he must acquire those resources through his day job. To be allowed to prioritize the project, he must deprioritize the project.
The strangest part by far, though, is that if you are a company that produces and sells proprietary software, you're the good guy, as I said in the beginning. This feels like a very anti-OSS stance, since open source software is only allowed to exist in the shadow of proprietary software that makes money. The argument is always that certain types of software should not exist, and that the things supposedly being withheld are more important than the things being created.
I personally think this type of subtractive thinking is very insidious. You can have the best intentions in the world and still be branded the devil. Meanwhile the devil can do whatever he wants. There is always this implicit demand that you ought to be an actual devil for the good of everyone.
> I never understood this. Then why publish the code in the first place? If the goal is to help others, then the decent thing would be to add documentation and support the people who care enough to use your project.
Because these things take entirely different skill sets and the latter might be a huge burden for someone who is good at the former.
The person "throwing" the software has 0 obligation to any potential or actual users of said software. Just the act of making it available, even without any kind of license, is already benevolent. Anything further just continues to add to that benevolence, and nothing can take away from it, not even if they decide to push a malware-ridden update.
There is obligation to a given user only if it's explicitly specified in a license or some other communication to which the user is privy.
I've been involved with free software for coming up on 30 years, have maintained several reasonably popular free software projects, and have 100% enjoyed it every time. Developing relationships with community members and working with them toward a common goal is very rewarding. Not much more to say about this, as these are subjective interpretations of our experiences, and the experiences can be very different. But it definitely can be fun.
> Ultimately, dealing with people who don't pay for your product is not fun.
I find it the other way around. I feel a bit embarrassed and stressed out working with people who have paid for a copy of software I've made (which admittedly is rather rare). When they haven't paid, every exchange is about what's best for humanity and the public in general, i.e. they're not supposed to get some special treatment at the expense of anyone else, and nobody has a right to lord over the other party.
People who paid for your software don't really have a right to lord it over you. You can choose to be accommodating because they are your customers, but you hold approximately as much weight in the relationship, if not more. They need your work. It's not so much special treatment as it is commissioned work.
People who don't pay are often not really invested. The relationship between more work means more costs doesn't exist for them. That can make them quite a pain in my experience.
I'm probably projecting the idea I have of myself here but if someone says
> every exchange is about what's best for humanity and the public in general
it means that they are the kind of individual who deeply cares about things working and relationships being good and fruitful, and thus, if they made someone pay for something, they feel they must listen to them and comply with their requests, because, well, they are a paying customer, the customer is always right, they gave me their money, etc.
You can care about the work and your customer while still setting healthy boundaries, and accept that wanting to do good work for them doesn't mean you are beneath them.
Business is fundamentally about partnership: transactional, moneyed partnership, but partnership still. It's best when both suppliers and customers are aware of that. And like any partnership, it is structured and can be ended by either party. You don't technically owe them more than what's in the contract, and that puts a hard stop which is easy to identify if needed.
Legally speaking, accepting payment makes it very clear that there is a contract under which you have obligations, both explicitly spelled out and implied.
> People who paid for your software don't really have a right to lord you around.
Of course I realize that, rationally, but:
* They might feel highly entitled because they paid.
* I feel more anxious to satisfy than I should probably be feeling. Perhaps even guilty for having taken money. I realize that is not a rational frame of mind to be in; it would probably change if that happened frequently. I am used to two things: There is my voluntary work, which I share freely and without expecting money; and there is my 'job' where I have to bow my head to management and do not get to pursue the work as I see fit, and I devote most of my time to - but I get paid (which also kind of happens in the background, i.e. I never see the person who actually pays me). Selling a product or a service is a weird third kind of experience which I'm not used to.
You can achieve something like this with a pricing strategy.
As DHH and Jason Fried discuss in both the books REWORK, It Doesn’t Have to Be Crazy at Work, and their blog:
> The worst customer is the one you can’t afford to lose. The big whale that can crush your spirit and fray your nerves with just a hint of their dissatisfaction.
(It Doesn’t Have to Be Crazy at Work)
> First, since no one customer could pay us an outsized amount, no one customer’s demands for features or fixes or exceptions would automatically rise to the top. This left us free to make software for ourselves and on behalf of a broad base of customers, not at the behest of any single one. It’s a lot easier to do the right thing for the many when you don’t fear displeasing a few super customers could spell trouble.
But this mechanism proposed by DHH and Fried only removes differences among paying customers, not between "paying" and "non-paying".
I'd think, however, there are some good ideas in there to manage that difference as well. For example, letting all customers, paying or not, go through the exact same flow for support, features, bugs, etc., so these are not the distinctive "drivers" of why people pay (e.g. "you must be a paying customer to get support"). Obviously it depends on the service, but if you have other distinctive features people would pay for (e.g. a hosted version), that could work out.
However, I understood GP's mention of "embarrassment" to speak more to their own feelings of responsibility. Which would be more or less decoupled from the pressure that a particular client exerts.
Maybe open source developers should stop imagining the things they choose to give away for free as "products". I maintain a small open source library. It doesn't make any money, it will never make any money, people are free to use or not as they choose. If someone doesn't like the way I maintain the repository they are free to fork it.
Agreed, but that's only half of it. The second half is that open source users should stop imagining the things they choose to use for free as "products".
Users of open source often feel entitled, open issues like they would open a support ticket for a product they actually paid for, and don't hesitate to show their frustration.
Of course that's not all the users, but the maintainers only see those (the happy users are usually quiet).
I have open sourced a few libraries under a weak copyleft licence, and every single time, some "people from the community" have been putting a lot of pressure on me, e.g. claiming everywhere that the project was unmaintained/dead (it wasn't, I just was working on it in my free time on a best-effort basis) or that anything not permissive had "strings attached" and was therefore "not viable", etc.
The only times I'm not getting those is when nobody uses my project or when I don't open source it. I have been open sourcing less of my stuff, and it's a net positive: I get less stress, and anyway I wasn't getting anything from the happy, quiet users.
It used to be that annoying noobs were aggressively told to RTFM, their feelings got hurt and they would go away. That probably was too harsh. But then came corporate OSS and with it corporate HR in OSS. Being the BOFH was now bad, gatekeeping was bad. Now everyone feels entitled to the maintainer time and maintainers burn out.
I think this gets complicated when you have larger open source projects where contributors change over time. By taking over stewardship of something that people depend on, you should have some obligation not to intentionally fuck those people over, even if you are not paid for it.
This is also true to some extent when it's a project you started. I don't think you should, e.g., be able to point to the typical liability disclaimer in free software licenses when you add features that intentionally harm your users.
> By taking over stewardship of something that people depend on you should have some obligation
No. If it's free and open source, all it says is what you can do with the code. There is no obligation towards the users whatsoever.
If you choose to depend on something, it's your problem. The professional way to do it is either to contractually make sure that the project doesn't "fuck you over" (using your words), or to make sure that you are able to fork the project if necessary.
If you base your business on the fact that someone will be working for you, for free, forever, then it's your problem.
It's remarkable how many people wrongly assume that open source projects can't be monetized. Business models and open source are orthogonal but compatible concepts. However, if your primary goal while maintaining an open source project is profiting financially from it, your incentives are skewed. If you feel this way, you should also stop using any open source projects, unless you financially support them as well.
I'm the author of another option (https://github.com/mickael-kerjean/filestash), which has an S3 gateway that exposes itself as an S3 server but is just a stateless proxy, translating your S3 calls onto whatever you have connected on the other end: SFTP, local FS, FTP, NFS, SMB, IPFS, SharePoint, Azure, a git repo, Dropbox, Google Drive, another S3, ...
I clicked settings, this appeared, and clicking away hid it, but now I can't see any setting for it.
The nasty way of reading that popup, my first way of reading it, was that filestash sends crash reports and usage data, and I have the option to have it not be shared with third parties, but that it is always sent, and it defaults to sharing with third parties. The OK is always consenting to share crash reports and usage.
I'm not sure if it's actually operating that way, but if it's not, the language should probably be:
Help make this software better by sending crash reports and anonymous usage statistics.
Your data is never shared with a third party.
[ ] Send crash reports & anonymous usage data.
[ OK ]
I was looking at running [versitygw](https://github.com/versity/versitygw) but filestash looks pretty sweet! Any chance you're familiar with Versity and how the S3 proxy may differ?
I did a project with Monash University, who were using Versity to handle multi-tier storage on their 12PB cluster: Glacier-like capabilities on tape storage, with a robot picking up data from their tape backup, a hot storage tier for better access performance, lifecycle rules to move data from hot storage to cold, and so on. The underlying storage was all Versity, and they had Filestash working on top. Effectively we did some custom plugins so you could recall data from their self-hosted Glacier-alike while using it through the frontend, so their users had a Dropbox-like experience. Depending on what you want to do, they can be very much complementary.
RustFS shows promise and supports a lot of features; it even lets you bring your own secret/access keys (if you want to migrate without changing creds on clients). But it's very much still in development, and they have already prepared for a bait-and-switch in the code ( https://github.com/rustfs/rustfs/blob/main/rustfs/src/licens... )
Ceph is the closest to the actual S3 feature set, but it's a lot to set up. It pretty much wants a few local servers; you can replicate to another site, but each site on its own is pretty latency-sensitive between storage servers. It also offers many other features besides, as its S3 layer is just built on top of an object store that can also be used for VM storage or even a FUSE-compatible FS.
Garage is great, but it is very much "just to store stuff". It lacks features on both the S3 side (S3 has a bunch of advanced ACLs many of the alternatives don't support, and stuff for HTTP headers too) and the management side (e.g. allowing an access key to access only a certain path in a bucket is impossible). Also, the clustering feature is very WAN-aware, unlike Ceph, where you pretty much have to have all your storage servers in the same rack if you want a single site to have replication.
Not sure what you mean about Ceph wanting to be in a single rack.
I run Ceph at work. We have some clusters spanning 20 racks in a network fabric that has over 100 racks.
In a typical Leaf-Spine network architecture, you can easily have sub 100 microsecond network latency which would translate to sub millisecond Ceph latencies.
We have one site that is Leaf-Spine-SuperSpine, and the difference in network latency is barely measurable between machines in the same network pod and between different network pods.
I don't think this is a problem. The CLA is there to avoid future legal disputes. It prevents contributors from initiating IP lawsuits later on, which could cause significantly more trouble for the project.
Hypothetically, isn't one of the "legal disputes" being avoided that, if the project relicenses to a closed-source model without compensating contributors, the contributors can't sue because the copyright of their contributions no longer belongs to them?
Apart from Minio, we tried Garage and Ceph. I think there's definitely a need for something that interfaces using S3 API but is just a simple file system underneath, for local, testing and small scale deployments. Not sure that exists? Of course a lot of stuff is being bolted onto S3 and it's not as simple as it initially claimed to be.
SeaweedFS's new `weed mini` command[0] does a great job at that. Previously our most flakey tests in CI were due to MinIO sometimes not starting up properly, but with `weed mini` that was completely resolved.
MinIO started like that, but they migrated away from it. It's just hard to keep that up once you start implementing advanced S3 features (versioning/legal hold, metadata, etc.) and storage features (replication/erasure coding).
Yes I'm looking for exactly that and unfortunately haven't found a solution.
Tried Garage, but it requires running a proxy for CORS, which makes signed browser uploads a practical impossibility on a development machine. I had no idea that such a simple, popular scenario is in fact too exotic.
From what I can gather, S3Proxy[1] can do this, but relies on a Java library that's no longer maintained[2], so not really much better.
I too think it would be great to have a simple project that can serve S3 from the filesystem, for local deployments that don't need balls-to-the-wall performance.
The problem with that approach is that S3 object names are not compatible with POSIX file names. They can contain characters that are not valid on a filesystem, or have special meaning (like "/")
A simple litmus test I like to run against S3 storages is to create two objects, one called "foo" and one called "foo/bar". If the implementation uses a filesystem as its backend, only the first of those can be created.
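To illustrate why that test discriminates, here's a toy sketch (not any real server's code) of the naive key-to-path mapping a filesystem-backed S3 would use:

```python
import os
import tempfile

def can_store(keys):
    """Naively map each S3 object key to a filesystem path under a temp
    root and try to create it; report whether every key could be stored."""
    with tempfile.TemporaryDirectory() as root:
        for key in keys:
            path = os.path.join(root, *key.split("/"))
            try:
                os.makedirs(os.path.dirname(path), exist_ok=True)
                with open(path, "w") as f:
                    f.write("data")
            except OSError:  # e.g. "foo" already exists as a regular file
                return False
        return True

print(can_store(["foo"]))             # True
print(can_store(["foo", "foo/bar"]))  # False: "foo" can't be both file and dir
```

A real S3 namespace is flat, so both keys are valid objects; the POSIX mapping fails because "foo" would have to be a regular file and a directory at the same time.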
Would be cool to understand the tradeoffs of the various object storage implementations.
I'm using seaweedfs for a single-machine S3 compatible storage, and it works great. Though I'm missing out on a lot of administrative nice-to-haves (like, easy access controls and a good understanding of capacity vs usage, error rates and so on... this could be a pebcak issue though).
Ceph I have also used, and it seems to care a lot more about being distributed. If you have fewer than 4 hosts for storage, it feels like it scoffs at you during setup. I was also unable to get it to perform amazingly, though to be fair I was doing it via K8S/Rook atop the Flannel CNI, which is an easy-to-use CNI for toy deployments, not performance-critical systems, so that could be my bad. I would trust a Ceph deployment with data integrity though; it just gives me that feeling of "whoever worked on this really understood distributed systems"... but I can't back that feeling up with any concrete data.
I believe the MinIO developers are aware of the alternatives; listing only their own commercial solution as an alternative might be a deliberate decision. But you can still try to get the PR merged; there's nothing wrong with it.
Had a great experience with Garage for an easy-to-set-up distributed S3 cluster for home lab use (connecting a bunch of labs run by friends into a shared cluster via tailscale/headscale). They offer an "eventual consistency" mode (`consistency_mode = dangerous` is the setting, so perhaps don't use it for your 7-nines SaaS offering) where your local S3 node will happily accept (and quickly process) requests and then replicate them to the other servers later.
Overall great philosophy (target at self-hosting / independence) and clear and easy maintenance, not doing anything fancy, easy to understand architecture and design / operation instructions.
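For anyone curious what that looks like concretely: a minimal garage.toml sketch (key names from my recollection of the Garage v1 config reference, and the paths/secret are placeholders; verify against the official docs before copying):

```toml
# garage.toml - minimal sketch, placeholder paths and secret
metadata_dir = "/var/lib/garage/meta"
data_dir     = "/var/lib/garage/data"

replication_factor = 3
# "dangerous" = accept writes locally, replicate to peers later
consistency_mode   = "dangerous"

rpc_bind_addr = "[::]:3901"
rpc_secret    = "<32-byte hex secret shared by all nodes>"

[s3_api]
s3_region     = "garage"
api_bind_addr = "[::]:3900"
root_domain   = ".s3.garage.localhost"
```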
From my experience, Garage is the best replacement for MinIO *in a dev environment*. It provides a pretty good CLI that makes automated setup easier than with MinIO. For a production environment, however, I guess Ceph is still the best because of how prominent it is.
Yep, I know; I had to build a proxy for S3 which supports custom pre-signed URLs.
In my case it was worth it because my team needs to verify uploaded content for security reasons. But in most cases I guess it's not worth deploying a proxy just for CORS.
First off, I don't think there is anything wrong with MinIO closing down its open source. There are simply too many people globally who use open source without being willing to pay for it.
I started testing various alternatives a few months ago, and I still believe RustFS will emerge as the winner after MinIO's exit. I evaluated Garage, SeaweedFS, Ceph, and RustFS. Here are my conclusions:
1. RustFS and SeaweedFS are the fastest in the object storage field.
2. The installation for Garage and SeaweedFS is more complex compared to RustFS.
3. The RustFS console is the most convenient and user-friendly.
4. Ceph is too difficult to use; I wouldn't dare deploy it without a deep understanding of the source code.
Although many people criticize RustFS, suggesting its CLA might be "bait," I don't think such a requirement is excessive for open source software, as it helps mitigate their own legal risks.
Furthermore, Milvus gave RustFS a very high official evaluation. Based on technical benchmarks and other aspects, I believe RustFS will ultimately win.
Maintainer of Milvus here. A few thoughts from someone who lives this every day:
1. The free user problem is real, and AI makes it worse. We serve a massive community of free Milvus users, and we're grateful for them; they make the project what it is. But we also feel the tension MinIO is describing. You invest serious engineering effort into stability and bug fixes, and most users will never become paying customers. In the AI era this ratio only gets harder: copying with AI becomes easier than ever.
2. We need better object storage options. As a heavy consumer of object storage, Milvus needs a reliable, performant, and truly open foundation. RustFS is a solid candidate — we've been evaluating it seriously. But we'd love to see more good options emerge. If the ecosystem can't meet our needs long-term, we may have to invest in building our own.
3. Open source licensing deserves a serious conversation. The Apache 2.0 / Hadoop-era model served us well, but cracks are showing. Cloud vendors and AI companies consume enormous amounts of open-source infrastructure, and the incentives to contribute back are weaker than ever. I don't think the answer is closing the source — but I also don't think "hope enterprises pay for support" scales forever. We need the community to have an honest conversation about what sustainable open source looks like in the AI era. MinIO's move is a symptom worth paying attention to.
AGPL doesn't help when you want to kill your free offering to move people onto the paid tier. But quite frankly, that isn't a problem GPL is meant to solve.
Huge thanks for your contributions to the open-source world! Milvus is an incredibly cool product and a staple in my daily stack.
It’s been amazing to watch Milvus grow from its roots in China to gaining global trust and major VC backing. You've really nailed the commercialization, open-source governance, and international credibility aspects.
Regarding RustFS, I think that—much like Milvus in the early days—it just needs time to earn global trust. With storage and databases, trust is built over years; users are naturally hesitant to do large-scale replacements without that long track record.
Haha, maybe Milvus should just acquire RustFS? That would certainly make us feel a lot safer using it!
3. Start it with `garage server`, or just have an AI write an init script or unit file for you. (You can `pkill -f /usr/local/sbin/garage` to shut it down.)
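For reference, a minimal systemd unit for that could look like this (the binary path matches the pkill command above; the `-c` flag, config path, and user are assumptions to check against the Garage docs):

```ini
# /etc/systemd/system/garage.service - hypothetical paths
[Unit]
Description=Garage S3-compatible object store
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/sbin/garage -c /etc/garage.toml server
Restart=on-failure
# Run as an unprivileged user created beforehand (useradd -r garage)
User=garage

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now garage` replaces the manual start/pkill dance.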
Also, NVIDIA has a phenomenal S3-compatible system that nobody seems to know about, named AIStore: https://aistore.nvidia.com/ It's a bit more complex, but very powerful and fast (faster than MinIO, though slightly less space-efficient, because it maintains a complete copy of an object on a single node so the object doesn't have to be reconstituted as it would on MinIO). It can also act as a proxy in front of other S3 systems, including AWS S3 or GCS, and offer a single unified namespace to your clients.
IMO, SeaweedFS is still too much of a personal project. It's fast for small files, but keep good and frequent backups in a different system if you choose it.
I personally will avoid RustFS. Even if it was totally amazing, the Contributor License Agreement makes me feel like we're getting into the whole Minio rug-pull situation all over again, and you know what they say about doing the same thing and expecting a different result..
Garage is indeed an excellent project, but I think it has a few drawbacks compared to the alternatives:
Metadata Backend: It relies on SQLite. I have concerns about how well this scales or handles high concurrency with massive datasets.
Admin UI: The console is still not very user-friendly/polished.
Deployment Complexity: You are required to configure a "layout" (regions/zones) to get started, whereas MinIO doesn't force this concept on you for simple setups.
Design Philosophy: While Garage is fantastic for edge/geo-distributed use cases, I feel its overall design still lags behind MinIO and RustFS. There is a higher barrier to entry because you have to learn specific Garage concepts just to get it running.
As someone about to learn the basics of Terraform, with an interest in geo-distributed storage, and with some Hetzner credit sitting idle... I came across the perfect comment this morning.
> RustFS and SeaweedFS are the fastest in the object storage field.
I'm not sure SeaweedFS is comparable. It's based on Facebook's Haystack design, which addresses a very specific use case: minimizing IO, in particular metadata lookups, when accessing individual objects. This leads to many trade-offs. For instance, its main unit of operation is the volume: data is appended to a volume, erasure coding is done per volume, updates happen at the volume level, and so on.
On the other hand, a general object store goes beyond needle-in-a-haystack type of operations. In particular, people use an object store as the backend for analytics, which requires high-throughput scans.
> Although many people criticize RustFS, suggesting its CLA might be "bait," I don't think such a requirement is excessive for open source software, as it helps mitigate their own legal risks.
MinIO was more for the "mini" use case (or more like "anything not large scale", with a very broad definition of large scale). Here "works out of the box" is paramount.
And Ceph is more for the maxi use case. Here in-depth fine-tuning, highly complex setups, distributed setups and similar are the norm, so the out-of-the-box small-scale experience is barely relevant.
So they really don't fill the same space, even though their functionality overlaps.
> too many people globally who use open source without being willing to pay for it.
That's an odd take... open source is a software licensing model, not a business model.
Unless you have some knowledge that I don't, MinIO never asked for nor accepted donations from users of their open source offerings. All of their funding came from sales and support of their enterprise products, not their open source one. They are shutting down their own contributions to the open source code in order to focus on their closed enterprise products, not due to lack of community engagement or (as already mentioned) community funding.
I respectfully disagree with the notion that open source is strictly a licensing model and not a business model. For an open-source project to achieve long-term reliability and growth, it must be backed by a sustainable commercial engine.
History has shown that simply donating a project to a foundation (like Apache or CNCF) isn't a silver bullet; many projects under those umbrellas still struggle to find the resources they need to thrive. The ideal path—and the best outcome for users globally—is a "middle way" where:
The software remains open and maintained.
The core team has a viable way to survive and fund development.
Open code ensures security, transparency, and a trustworthy software supply chain.
However, the way MinIO has handled this transition is, in my view, the most disappointing approach possible. It creates a significant trust gap. When a company pivots this way, users are left wondering about the integrity of the code—whether it’s the potential for "backdoors" or undisclosed data transmission.
I hope to see other open-source object storage projects mature quickly to provide a truly transparent and reliable alternative.
Actually, Linux reinforces my point. It isn't powered solely by volunteers; it thrives because the world's largest corporations (Intel, Google, Red Hat, etc.) foot the bill. The Linux Foundation is massively funded by corporate members, and most kernel contributors are paid engineers. Without that commercial engine, Linux would not have the dominance it does today. Even OpenAI had to pivot away from its original non-profit, open principles to survive and scale.
There is nothing wrong with making money while sustaining open source. The problem is MinIO's specific approach. Instead of a symbiotic relationship, they treated the community as free QA testers and marketing pawns, only to pull up the ladder later. That’s a "bait-and-switch," not a sustainable business model.
> Even OpenAI had to pivot away from its original non-profit, open principles to survive and scale.
Uh, no, OpenAI didn't pivot from being open in order to survive.
They survived for 7 years before ChatGPT was released. When it was, they pivoted the _instant_ it became obvious that AI was about to be a trillion-dollar industry and they weren't going to miss the boat of commercialization. Yachts don't buy themselves, you know!
Not many open source projects are Linux-sized. Linux is worth billions of dollars and enabled Google and Red Hat to exist, so they can give back millions, without compulsion, and in a self-interested way.
Random library maintainer dude should not expect their (very replaceable) library to print money. The cool open source tool/utility could be a 10-person company, maybe 100 tops, but people see dollar-signs in their eyes based on number of installs/GitHub stars, and get VC funding to take a swing for billions in ARR.
I remember when (small scale) open source was about scratching your own itch without making it a startup via user coercion. It feels like "open source as a growth hack" has metastasized into "now that they are hooked, the entire user base is morally obligated to give me money". I would have no issue if a project included this before it got popular - but that might prevent popular adoption. So it rubs me the wrong way when folk want to have their cake and eat it too.
I want to like RustFS, but it feels like there's so much marketing attached to the software it turns me off a little. Even a little rocket emoji and benchmark on the GitHub about page. Sometimes less is more. Look at the ty GitHub home page - one benchmark on the main page, and the description is just "An extremely fast Python type checker and language server, written in Rust.".
Haha, +1. I really like RustFS as a product, but the marketing fluff and documentation put me off too. It reads like non-native speakers relying heavily on AI, which explains a lot.
Honestly, they really need to bring in some native English speakers to overhaul the docs. The current vibe just doesn't land well with a US audience.
I successfully migrated from MinIO to Ceph, which I highly recommend. Along the way, I tested SeaweedFS, which looked promising. However, I ran into a strange bug, and after diagnosing it with the help of Claude, I realized the codebase was vibe-coded and riddled with a staggering number of structural errors. In my opinion, SeaweedFS should absolutely not be used for anything beyond testing — otherwise you're almost certain to lose data.
Laughed reading this. We pretend Claude can't code because we don't like to acknowledge what code always turns out looking like, which is exactly what it's trained on
Ceph is the OG. Every now and then different attempts to replace it pop up, work well for some use cases, and then realise how hard the actual problem they are trying to solve is. Ceph always wins in the end.
Ceph solves the distributed consistent block storage problem very well. But I hardly ever need that problem solved, it's way more often that I need a distributed highly available blob storage, and Ceph makes the wrong tradeoffs for this task.
I just bit the bullet last week and figured we are going to migrate our self-hosted MinIO servers to Ceph instead. So far a 3-server Ceph cluster has been set up with cephadm, and the last MinIO server is currently mirroring its ~120TB of buckets to the new cluster at a whopping 420MB/s - should finish any day now. The complexity of Ceph and its cluster nature is of course a bit scary at first compared to MinIO - a single Go binary with minimal configuration - but after learning the basics it should be smooth sailing. What's neat is that Ceph allows expanding clusters: just throw more storage servers at it, in theory at least; not sure where the ceiling is for that yet. Shame MinIO went that way; it had a really neat console before they cut it out. I also contemplated Garage, but it seems Elasticsearch is not happy with that S3 solution for snapshots, so Ceph it is.
It's complex, but Ceph's storage and consensus layer is battle-tested and a much more solid foundation for serious use. Just make sure that your nodes don't run full!
Make sure you have solid Linux system monitoring in general. About 50% of running Ceph successfully at scale is just basic, solid system monitoring and alerting.
This line of advice basically comes down to: have a competent infrastructure team. Sometimes you gotta move fast, but this is where having someone on infrastructure that knows what they are doing comes in and pays dividends. No competent infra guy is going to NOT set up linux monitoring. But you see some companies hit 100 people and get revenue then this type of thing blows up in their face.
The thing that strikes me about this thread is how many people are scrambling to evaluate alternatives they've never tested in production. That's the real risk with infrastructure dependencies — it's not that they might go closed-source, it's that the switching cost is so high that you don't maintain a credible plan B.
With application dependencies you can swap a library in a day. With object storage that's holding your data, you're looking at a migration measured in weeks or months. The S3 API compatibility helps, but anyone who's actually migrated between S3-compatible stores knows there are subtle behavioral differences that only surface under load.
I wonder how many MinIO deployments had a documented migration runbook before today.
Yes, the difference is the latter means "it is no longer maintained", and the former is "they claim to be maintaining it but everyone knows it's not really being maintained".
Given the context is a for-profit company who is moving away from FOSS, I'm not sure the distinction matters so much, everyone understands what the first one means already.
We all saw that coming. For quite some time they have been anything but transparent or open, vigorously removing even mild criticism of any decisions they were making from GitHub with no further explanation, locking comments, etc. No one that's been following the development and has been somewhat reliant on min.io is surprised. Personally, the moment I saw the "maintenance" mode, I rushed to switch to Garage. I have a few features ready that I need to pack into a PR, but I haven't had time to get to that. I should probably prioritize that.
Why should these guys bother with people who won't pay for their offering? The community is not skilled enough to contribute to this type of project. Honestly, most serious open source is industry-backed and solves very challenging distributed systems problems. A run-of-the-mill web dev doesn't know these things, I am sorry to say.
If you are struggling with observability solutions which require object storage for production setups after such news (i.e. Thanos, Loki, Mimir, Tempo), then try alternatives without this requirement, such as VictoriaMetrics, VictoriaLogs and VictoriaTraces. They scale to petabytes of data on regular block storage, and they provide higher performance and availability than systems that depend on manually managed object storage such as MinIO.
In French the adjective follows the noun, so AI is actually IA.
On AWS S3, you have a storage level called "Infrequent Access", shortened IA everywhere.
A few weeks ago I had to spend way too much time explaining to a customer that, no, we weren't planning on feeding their data to an AI when, on my reports, I was talking about relying on S3 IA to reduce costs...
COSS companies want it both ways. Free community contributions and bug reports during the growth phase. Then closed source once they've captured enough users. The code you run today belongs to you. The roadmap belongs to their investors.
This is timely news for me - I was just standing up some Loki infrastructure yesterday & following Grafana's own guides on object storage (they recommend minio for non-cloud setups). I wasn't previously experienced with minio & would have completely missed the maintenance status if it wasn't for Checkov nagging me about using latest tags for images & having to go searching for release versions.
So far I've switched to RustFS, which seems like a very nice project, though <24hrs is hardly an evaluation period.
Why do you need a non-trivial dependency on object storage for a logs database in the first place?
Object storage has advantages over regular block storage if it is managed by cloud, and if it has a proven record on durability, availability and "infinite" storage space at low costs, such as S3 at Amazon or GCS at Google.
Object storage has zero advantages over regular block storage if you run it on yourself:
- It doesn't provide "infinite" storage space - you need to regularly monitor and manually add new physical storage to the object storage.
- It doesn't provide high durability and availability. It has lower availability compared to regular locally attached block storage because of the complicated coordination of object storage state between storage nodes over the network. It usually has lower durability than cloud-provided object storage. If some data is corrupted or lost on the underlying hardware, there is little chance it is properly and automatically recovered by a DIY object storage.
- It is more expensive because of higher overhead (and, probably, half-baked replication) compared to locally attached block storage.
- It is slower than locally attached block storage because of much higher network latency compared to the latency when accessing local storage. The latency difference is 1000x - 100ms at object storage vs 0.1ms at local block storage.
- It is much harder to configure, operate and troubleshoot than block storage.
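A back-of-the-envelope sketch of the latency point above (the 0.1 ms and 100 ms figures are the parent comment's round numbers, not measurements): for workloads that issue dependent lookups one after another, per-request latency multiplies directly into total time.

```python
# Rough cost of N sequential small reads at local-disk vs object-store latency.
local_latency_s = 0.0001   # ~0.1 ms, locally attached block storage
object_latency_s = 0.1     # ~100 ms, remote object storage

n_reads = 1000             # e.g. 1000 dependent point lookups

local_total = n_reads * local_latency_s    # ~0.1 s total
object_total = n_reads * object_latency_s  # ~100 s total
print(f"local: {local_total:.1f}s, object: {object_total:.1f}s, "
      f"ratio: {object_total / local_total:.0f}x")
```

This is also why systems built on object storage lean so heavily on batching and parallel requests rather than sequential point reads.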
So I'd recommend taking a look at other databases for logs, which do not require object storage for large-scale production setups. For example, VictoriaLogs. It scales to hundreds of terabytes of logs on a single node, and it can scale to petabytes of logs in cluster mode. Both modes are open source and free to use.
Disclaimer: I'm the core developer of VictoriaLogs.
> Object storage has zero advantages over regular block storage if you run it on yourself
Worth adding, this depends on what's using your block storage / object storage. For Loki specifically, there are known edge-cases with large object counts on block storage (this isn't related to object size or disk space) - this obviously isn't something I've encountered & I probably never will, but they are documented.
For an application I had written myself, I can see clearly that block storage is going to trump object storage for all self-hosted use cases, but for 3P software I'm merely administering, I have less control over its quirks & those pros vs. cons are much less clear cut.
Initially I was just following recommendations blindly - I've never run Loki off-cloud before, so my typical approach to learning a system is to start with defaults & tweak/add/remove components as I learn it. Grafana's docs use object storage everywhere, so it's a lot easier when you're aligned: you can rely more heavily on config parity.
While I try to avoid complexity, idiomatic approaches have their advantages; it's always a trade-off.
That said, my first instinct when I saw MinIO's status was to use file storage, but the RustFS setup has been pretty painless so far. I might still remove it, we'll see.
What is the best alternative that can run as a Docker image that mimics AWS S3 to enable local only testing without any external cloud connections?
For me, my only use for MinIO was to simulate AWS S3 in docker compose so that my applications were fully testable locally. I never used it in production or as middleware. It has not sat well with me to use alternative strategies like Ruby on Rails' local file storage for testing, as it behaves differently than when the app is deployed. And using actual cloud services creates its own hurdles of credential sharing among developers and gets rid of the "docker magic" of being able to run a single setup script and be up and running to change code and run the full test suite.
My use case is any developer on the team can do a Git clone and run the set up script and then be fully up and running within minutes locally without any special configuration on their part.
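For that workflow, any S3-compatible server with a published Docker image slots in the same way. As one hedged sketch (image name and flags taken from SeaweedFS's documentation; verify them against whatever tag you pin), SeaweedFS can run its whole stack plus an S3 gateway in a single container:

```yaml
services:
  s3:
    # assumption: chrislusf/seaweedfs is the upstream image; pin a real tag
    image: chrislusf/seaweedfs
    command: server -s3 -s3.port=8333
    ports:
      - "8333:8333"
```

Then the app's S3 endpoint points at http://localhost:8333 (with path-style addressing) in the local environment only, and the setup script stays a plain `docker compose up`.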
S3 is evolving rapidly. While sticking with the old MinIO image might work for the immediate short term, I believe it is not a viable strategy for the long haul.
New standards and features are emerging constantly—such as S3 over RDMA, S3 Append, cold storage tiers, and S3 vector buckets.
In at most two or three years, relying on an unmaintained version of MinIO will likely become a liability that drags down your project as your production environment evolves. Finding an actively maintained open-source alternative is a must.
Anyone interested in keeping access should fork this open source repository now and make a local archived copy. That way when this organization deletes this repository there can still be access to this open source code.
In the Ruby on Rails space, we had this happen recently with the prawn_plus Gem where the original author yanked all published copies and deleted the GitHub repository.
> Free
> For developers, researchers, enthusiasts, small organizations, and anyone comfortable with a standalone deployment.
> Full-featured, single-node deployment architecture
> Self-service community Slack and documentation support
> Free of charge
I could use that if it didn't have hidden costs or obligations.
This is becoming a predictable pattern in infrastructure tooling: build community on open source, get adoption, then pivot to closed source once you need revenue. Elastic, Redis, Terraform, now MinIO.
The frustrating part isn't the business decision itself. It's that every pivot creates a massive migration burden on teams who bet on the "open" part. When your object storage layer suddenly needs replacing, that's not a weekend project. You're looking at weeks of testing, data migration, updating every service that touches S3-compatible APIs, and hoping nothing breaks in production.
For anyone evaluating infrastructure dependencies right now: the license matters, but the funding model matters more. Single-vendor open source projects backed by VC are essentially on a countdown timer. Either they find a sustainable model that doesn't require closing the source, or they eventually pull the rug.
Community-governed projects under foundations (Ceph under Linux Foundation, for example) tend to be more durable even if they're harder to set up initially. The operational complexity of Ceph vs MinIO was always the tradeoff - but at least you're not going to wake up one morning to a "THIS REPOSITORY IS NO LONGER MAINTAINED" commit.
I guess we need a new type of open source license. One that is very permissive, except that if you are a company with much larger revenue than the company funding the open source project, you have to pay.
While I loathe the moves to closed source, you also can't fault them; the hyperscalers just outcompete them with their own software.
Various projects have invented licenses like that. Those licenses aren't free, so the FOSS crowd won't like them. Rather than inventing a new one, you're probably better off grabbing whatever the other not-free-but-close-enough projects are doing. Legal teams don't like bespoke licenses very much, which hurts adoption.
An alternative I've seen is "the code is proprietary for 1 year after it was written, after that it's MIT/GPL/etc.", which keeps the code entirely free(ish) but still prevents many businesses from getting rich off your product and leaving you in the dust.
You could also go for AGPL, which is to companies like Google what garlic is to vampires. That would hurt any open-core-style business you might want to build out of your project though, unless you don't accept external contributions.
That would be interesting to figure out. Say you are a single guy in some cheaper cost-of-living region, and some SV startup gets, say, a million in funding. Surely that startup should give at least a couple thousand to your sole proprietorship if they use your stuff? But figuring out these thresholds gets complex.
Server Side Public License? Since it demands that any company offering the project as a paid product/service also open-source the related infrastructure, the bigger companies end up creating a maintained fork with a more permissive license. See Elasticsearch -> OpenSearch, Redis -> Valkey.
Inflicting pain is most likely worth it in the long run. Those internal projects now have to fight for budget and visibility and some won't make it past 2-5 years.
2. You're forgetting bureaucracy and general big-company overhead. Hyperscalers have tried to kill a lot of smaller external stuff, and frequently they end up killing their own chat apps instead.
I would say what we need is more of a push for software to become GPLed or AGPLed, so that it (mostly) can't be closed up in a 'betrayal' of the FOSS community around a project.
Redis is the odd one out here[1]: Garantia Data, later known as Redis Labs, now known as Redis, did not create Redis, nor did it maintain Redis for most of its rise to popularity (2009–2015) nor did it employ Redis’s creator and then-maintainer 'antirez at that time. (He objected; they hired him; some years later he left; then he returned. He is apparently OK with how things ended up.) What the company did do is develop OSS Redis addons, then pull the rug on them while saying that Redis proper would “always remain BSD”[2], then prove that that was a lie too[3]. As well as do various other shady (if legal) stuff with the trademarks[4] and credits[5] too.
Things are a bit more complicated. Redis the company (Redis Labs, and previously Garantia Data) actually offered to hire me from the start, but I was at VMware, later at Pivotal, and just didn't care; I wanted to stay "super partes" out of idealism. But Pivotal and Redis Labs shared the same VC, so it made a lot more sense to move to Redis Labs and work there with the same level of independence, and this happened.

Once I moved to Redis Labs, a lot of good things happened that made Redis mature much faster: we had a core team all working on the core, and I was no longer alone when there were serious bugs, improvements to make, and so forth. During those years many good things happened, including Streams, ACLs, memory reduction work, modules, and in general things that made Redis more solid.

To be maintained at scale, open source software needs money, so we tried hard in the past to avoid moving away from BSD. But eventually, in the new hyperscaler situation, it was impossible to avoid, I guess. I was no longer with the company then; I believe the bad call was going SSPL, a license very similar to AGPL but not accepted by the community. Now we are back to AGPL, and I believe that in the current situation this is a good call.

Nobody ever stopped: 1. Providing the source on GitHub and continuing development. 2. Releasing it under a source-available license (not OSI approved but practically very similar to AGPL). 3. Finding a different way to do it... and indeed Redis returned to AGPL a few months after I was back, maybe because I helped a bit, but inside the company there was from the start a big slice that didn't accept the change. So Redis is still open source software and maintained. I can't see a parallel here.
> For anyone evaluating infrastructure dependencies right now: the license matters, but the funding model matters more. Single-vendor open source projects backed by VC are essentially on a countdown timer. Either they find a sustainable model that doesn't require closing the source, or they eventually pull the rug.
I struggle to even find examples of VC-backed OSS that didn't eventually go "OK, closing-down time". The only ones I remember (like GitLab) started with an open core model, not fully OSS.
This is the newer generations re-discovering why various flavours of Shareware and trial demos existed since the 1980's, even though sharing code under various licenses is almost as old as computing.
I think the landscape has changed with those hyperscalers outcompeting open-source projects with alternative profit avenues for the money available in the market.
From my experience, Ceph works well, but it requires a lot more hardware and dedicated cluster monitoring versus something simpler like MinIO; in my eyes, they have somewhat different target audiences. I can throw MinIO into some customer environments as a convenient add-on, which I don't think I could do with Ceph.
Hopefully one of the open-source alternatives to Minio will step in and fill that "lighter" object storage gap.
What is especially annoying is the AIStor/MinIO business model: either get the "free" version or pay about 100k… How about accepting some small dollars and keeping the core concept?
However, this seems to be the business type of enshittification. Instead of slapping everything with ads, you either pay ridiculous dollars or move on.
Well, anyone using the product of an open source project is free to fork it and then take on the maintenance. Or organize multiple users to handle the maintenance.
AGPL is dead as a copy-left measure. LLMs do not understand, and would not care anyway, about regurgitating code that you have published to the internet.
I wonder how many of the 504 contributors listed on GitHub would still have contributed their (presumably) free labor if they had known the company would eventually abandon the open source version like this while continuing to offer their paid upgraded versions.
Tangentially related, since we are on the subject of MinIO. MinIO has, or rather had, an option to work as an FTP server! That is kind of neat, because CCTV cameras have an option to upload a picture to an FTP server when motion is detected, and having that be a distributed MinIO cluster really was a neat option, since you could then generate an event on file upload, kick off a pipeline job, or whatever. Currently I use vsftpd and inotify to detect file uploads instead, but that is such a major pain in the ass to operate; it would be really great to find another FTP-to-S3 gateway.
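As a crude stand-in for the inotify half of that pipeline (a hedged sketch, not production code: a file counts as "complete" once its size stops changing between polls, whereas real camera uploads may need atomic renames or the FTP server's upload-done hooks to be reliable), a size-settling poller looks roughly like this:

```python
import os
import time

def poll_new_files(directory, seen, settle_polls=2, interval=0.0):
    """One polling pass over `directory`.

    `seen` maps path -> (last_size, stable_count). Files whose size has
    been unchanged for `settle_polls` consecutive passes are returned as
    complete and dropped from `seen`; the caller should then upload them
    (e.g. to S3) and move them away so they are not detected again.
    """
    done = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if not os.path.isfile(path):
            continue
        size = os.path.getsize(path)
        last_size, stable = seen.get(path, (None, 0))
        stable = stable + 1 if size == last_size else 0  # reset if still growing
        if stable >= settle_polls:
            done.append(path)
            del seen[path]
        else:
            seen[path] = (size, stable)
    if interval:
        time.sleep(interval)  # pacing between passes in a real loop
    return done
```

A driver would call this in a loop and hand each returned path to an S3 upload job, which is essentially what the MinIO FTP listener gave you for free.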
This is pretty predictable at this point. VC-backed open source with a single vendor always ends up here eventually. The operational tradeoff was always MinIO being dead simple versus Ceph being complex but foundation-governed. Turns out "easy to set up" doesn't matter much when you wake up to a repo going dark. The real lesson is funding model matters more than license. If there's no sustainable path that doesn't involve closing the source, you're just on a longer timeline to the same outcome.
I've moved my SaaS I'm developing to SeaweedFS, it was rather painless to do it. I should also move away from minio-go SDK to just use the generic AWS one, one day. No hard feelings from my side to MinIO team though.
This has been on the cards for at least a year, with the increasingly doomy commits noted by HN.
Unfortunately I don't know of any other open projects that can obviously scale to the same degree. I built up around 100PiB of storage under MinIO with a former employer. It's very robust in the face of drive & server failure, and is simple to manage on bare hardware with Ansible. We got 180Gbps sustained writes out of it, with some part-time hardware maintenance.
Don't know if there's an opportunity here for larger users of minio to band together and fund some continued maintenance?
I definitely had a wishlist and some hardware management scripts around it that could be integrated into it.
Ceph can scale to pretty large numbers for both storage, writes and reads. I was running 60PB+ cluster few years back and it was still growing when I left the company.
"Long Island Iced Tea Corp [...] In 2017, the corporation rebranded as Long Blockchain Corp [...] Its stock price spiked as much as 380% after the announcement."
As long as there's at least one gullible party in the pack, all the other ones will behave the same, because they now know there's one idiot that will happily hold the bag when it comes crashing down. They're all banking on passing the bag to someone else before the crash.
A Ponzi can be a good investment too (for a certain definition of "good") as long as you get out before it collapses. The whole tech market right now is a big Ponzi with everyone hoping to get out before it crashes. Worse, dissent risks crashing it early so no talks of AI limitations or the lack of actual, sustainable productivity improvements are allowed, even if those concerns do absolutely happen behind closed doors.
On my side, I feel disappointed on two different counts.
- Obviously, when your selling point against competitors and alternative services was that you were open source, and you do a rug pull once you get enough traction, that is not great.
- But they also switched targets. The big added value of MinIO initially was that it was totally easy to run, targeting the ability to have an S3 server running in a minute, on single instances and so on. That made it the perfect solution for quick tests, local setups and automated testing. Then, once they started to get enough traction, they didn't just add more "scaling" options to MinIO; they twisted it completely into a complex, scalable deployment solution like any other on the market, without that much added value on that count, to be honest.
I didn't find an alternative that I liked as much as MinIO and I, unfortunately, ended up creating my own. It includes just the most basic features and cannot be compared to the larger projects, but it is simple and efficient.
Yes, indeed. The list operation is expensive. The S3 spec says that the list output needs to be sorted.
1. All filenames are read.
2. All filenames are sorted.
3. Pagination applied.
It obviously doesn't scale, but it works OK-ish for smaller data sets. It is difficult to do this efficiently without introducing complexity. My applications don't use listing, so I prioritised simplicity over performance for the list operation.
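The three steps above can be sketched as follows (names are made up for illustration; real S3 ListObjectsV2 additionally handles prefixes, delimiters, and opaque continuation tokens):

```python
def list_objects(keys, start_after="", max_keys=1000):
    """Naive S3-style listing: read everything, sort, then paginate.

    Returns (page, is_truncated, next_start_after). The whole key set is
    materialised and re-sorted on every call, which is exactly the
    scaling problem described above.
    """
    all_keys = sorted(keys)                          # steps 1 + 2
    remaining = [k for k in all_keys if k > start_after]
    page = remaining[:max_keys]                      # step 3: paginate
    truncated = len(remaining) > max_keys
    return page, truncated, page[-1] if page else ""

page, more, token = list_objects({"b/2", "a/1", "c/3"}, max_keys=2)
# page == ['a/1', 'b/2'], more is True; pass token back to get the rest
```

Larger implementations avoid the per-call sort by keeping keys in an ordered index (e.g. a B-tree or LSM tree), which is precisely the complexity this design trades away.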
Go for Garage. You can check the docker-compose and the "setup" crate of this project: https://github.com/beep-industries/content. There are a few tricks to make it work locally so it generates an API key and bucket declaratively, but in the end it does the job.
The OS's file system? Implementation cost has decreased significantly these days. We can just prompt 'use S3 instead of the local file system' if we need an S3-like service.
We just integrated MinIO with our Strapi CMS and our internal tool's admin dashboard, and we have about two months' worth of pictures and faxes hosted there. Shit. It's a ticking time bomb now.
I will have to migrate, the cost of "self hosting" what a pain!
Running a successful OSS project is often a thankless job. Thanks for doing it. But this isn’t that.
> Running a successful OSS project is often a thankless job. Thanks for doing it. But this isn’t that.
No, even if you are being paid, it's a thankless, painful job to deal with demanding, entitled free users. It's worse if you are not being paid, but I'm not sure why you are asserting dealing with bullshit is just peachy if you are being paid.
If that is the case, why did MinIO start with the open source version if there were only downsides? Sounds like a stupid business plan.
They wanted adoption and a funnel into their paid offering. They were looking out for their own self-interest, which is perfectly fine; however, it’s very different from the framing many are giving in this thread of a saintly company doing thankless charity work for evil homelab users.
Where did I say there were only downsides? There are definitely upsides to this business model, I'm just refuting the idea that because there are for profit motives the downsides go away.
I hate when people mistreat the people that provide services to them: doesn't matter if it's a volunteer, underpaid waitress or well paid computer programmer. The mistreatment doesn't become "ok" because the person being mistreated is paid.
> No, even if you are being paid, it's a thankless, painful job to deal with demanding, entitled free users.
So… aren't they providing (paid) support? same thing…
Absurd comparison.
“I don’t want to support free users” is completely different than “we’re going all-in on AI, so we’re killing our previous product for both open source and commercial users and replacing it with a new one”
I can also highly recommend SeaweedFS for development purposes, where you want to test general behaviour when using S3-compatible storage. That's what I mainly used MinIO for before. SeaweedFS, especially with their new `weed mini` command that runs all the services together in one process, is a great replacement for local development and CI purposes.
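For CI specifically, something like the following docker-compose fragment is one way to wire it up. This is a sketch, not a verified config: the image tag, subcommand, and flags are assumptions to check against the SeaweedFS docs.

```yaml
# Hypothetical CI service definition; verify the image tag and flags
# against the current SeaweedFS documentation before relying on it.
services:
  seaweedfs:
    image: chrislusf/seaweedfs:latest
    # "server -s3" runs master, volume, filer and the S3 gateway in a
    # single container; newer releases reportedly offer "mini" for the
    # same all-in-one purpose.
    command: 'server -s3 -s3.port=8333'
    ports:
      - "8333:8333"
```

Tests can then point any S3-compatible client at `http://localhost:8333` via a custom endpoint URL, the same way they would have against a local MinIO.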
I've been using rustfs for some very light local development and it looks... fine :)
Ironically rustfs.com is currently failing to load on Firefox, with 'Uncaught TypeError: can't access property "enable", s is null'. They shoulda used a statically checked language for their website...
Firefox is working fine for me. Version 147.0.3 (aarch64).
I'm running Firefox 145.0.2 on amd64.
It seems like the issue may be that I have WebGL disabled. The console includes messages like "Failed to create WebGL context: WebGL creation failed: * AllowWebgl2:false restricts context creation on this system."
Oh well, guess I can't use rustfs :}
Can vouch for SeaweedFS. Been using it since the time it was called WeedFS, when my managers were like, are you sure you really want to use that?
Wasabi looks like a service.
Any recommendation for an in-cluster alternative in production?
Is that SeaweedFS?
I’ve never heard of SeaweedFS, but Ceph cluster storage system has an S3-compatible layer (Object Gateway).
It’s used by CERN to make Petabyte-scale storage capable of ingesting data from particle collider experiments and they're now up to 17 clusters and 74PB which speaks to its production stability. Apparently people use it down to 3-host Proxmox virtualisation clusters, in a similar place as VMware VSAN.
Ceph has been pretty good to us for ~1PB scalable backup storage for many years, except that it’s a non-trivial system administration effort and needs good hardware and networking investment, and my employer wasn't fully backing that commitment. (We’re moving off it to Wasabi for S3 storage). It also leans more towards data integrity than performance, it's great at being massively-parallel and not so rapid at being single thread high-IOPs.
https://ceph.io/en/users/documentation/
https://docs.ceph.com/en/latest/
https://indico.cern.ch/event/1337241/contributions/5629430/a...
Ceph is a non-starter for me because you cannot have an existing filesystem on the disk. Previously I used GlusterFS on top of ZFS and made heavy use of Gluster's async geo-replication feature to keep two storage arrays, far apart over a slow link, in sync. This was done after getting fed up with rsync being so slow and always thrashing the disks, having to scan many TBs every day.
While there is a geo-replication feature for Ceph, I cannot keep using ZFS at the same time, and gluster is no longer developed, so I'm currently looking for an alternative that would work for my use case if anyone knows of a solution.
> "Ceph is a non-starter for me because you cannot have an existing filesystem on the disk. Previously I used GlusterFS on top of ZFS"
I became a Ceph admin by accident so I wasn't involved in choosing it and I'm not familiar with other things in that space. It's a much larger project than a clustered filesystem; you give it disks and it distributes storage over them, and on top of that you can layer things like the S3 storage layer, its own filesystem (CephFS) or block devices which can be mounted on a Linux server and formatted with a filesystem (including ZFS I guess, but that sounds like a lot of layers).
> "While there is a geo-replication feature for Ceph"
Several; the data cluster layer can do it in two ways (stretch clusters and stretch pools), the block device layer can do it in two ways (journal based and snapshot based), the CephFS filesystem layer can do it with snapshot mirroring, and the S3 object layer can do it with multi-site sync.
I've not used any of them; they all have their trade-offs, and this is the kind of thing I was thinking of when saying it requires more skills and effort. For simple storage requirements, use a traditional SAN, a server with a bunch of disks, or pay a cheap S3 service to deal with it. Only if you have a strong need for scalable clusters, a team with storage/Linux skills, a pressing need to do it yourself, or a use for many of its features would I go in that direction.
https://docs.ceph.com/en/latest/rados/operations/stretch-mod...
https://docs.ceph.com/en/latest/rbd/rbd-mirroring/
https://docs.ceph.com/en/latest/cephfs/cephfs-mirroring/
https://docs.ceph.com/en/latest/radosgw/multisite/
Ceph is a non-starter because you need a team of people managing it constantly
Yeah sure. I manage a ceph cluster (4PB) and have a few other responsibilities at the same time.
I can tell you that Ceph is something I don't need to touch every month. Other things I have to baby more regularly.
Nothing wrong? Does minio grant the basic freedoms of being able to run the software, study it, change it, and distribute it?
Did MinIO give its contributors the impression that it would continue being FLOSS?
Yes the software is under AGPL. Go forth and forkify.
The choice of AGPL tells you that they wanted to be the only commercial source of the software from the beginning.
> the software is under AGPL. Go forth and forkify.
No, what was minio is now aistor, a closed-source proprietary software. Tell me how to fork it and I will.
> they wanted to be the only commercial source of the software
The choice of AGPL tells me nothing more than what is stated in the license. And I definitely don't intend to close the source of any of my AGPL-licensed projects.
> Tell me how to fork it and I will.
https://github.com/minio/minio/fork
The fact that new versions aren't available does nothing to stop you from forking versions that are. Or were - they'll be available somewhere, especially if it got packaged for OS distribution.
So fork the last minio, and work from there... nobody is stopping you.
There's nothing wrong at all with charging for your product. What I do take issue with, however, is convincing everyone that your product is FOSS, waiting until people undertake a lot of work to integrate your product into their infrastructure, and then doing a bait-and-switch.
Just be honest from the start that your product will eventually abandon its FOSS licence. Then people can make an informed decision. Or, if you haven't done that, do the right thing and continue to stand by what you originally promised.
> What I do take issue with, however, is convincing everyone that your product is FOSS, waiting until people undertake a lot of work to integrate your product into their infrastructure, and then doing a bait-and-switch.
But FOSS means “this particular set of source files is free to use and modify”. It doesn’t include “and we will keep developing and maintaining it forever for free”.
It’s only different if people, in addition to the FOSS license, promise any further updates will be under the same license and then change course.
And yes, there is a gray area where such a promise is sort-of implied, but even then, what do you prefer, the developers abandoning the project, or at least having the option of a paid-for version?
> what do you prefer, the developers abandoning the project, or at least having the option of a paid-for version?
It's not a binary choice. I prefer the developers releasing the software under a permissive license. I agree that relying on freemium maintenance is naive. The community source lives on; perhaps the community should fork it and run with it for the common good, absorbing the real costs of maintenance.
> Just be honest from the start
While I agree with the sentiment, keep in mind that circumstances change over the years. What made sense (and what you've believed in) a few years ago may be different now. This is especially true when it comes to business models.
What typically happens is the switch comes once your product has entered the mainstream, with integrations deep enough that users are virtually obliged to get a license, which would yield millions.
When backed by a company, there is an ethical obligation to keep up at least maintenance. Of course, legally they can do what they wish. It isn't unfair to call it bad practice.
There's no way that maintaining something is an ethical obligation, regardless of popularity. There is only legal obligation, for commercial products.
If you offer a tie-in supposedly free of charge, without warning that it will end once it serves the party's profit purposes less, then yes.
Ethics are not obligations; they are moral principles. Not having principles doesn't send you to prison; that is why it isn't law. It makes you lose moral credit, though.
That is ridiculous. If you buy a sandwich for a homeless person, you do not need to warn them that you won't give them another one tomorrow. If you think generosity creates an obligation, a kind of slavery, you have your morals backwards.
However, almost every open source license actually DOES warn that support may end. See the warranty clause.
https://github.com/minio/minio/blob/master/LICENSE#L587
If you give them a free sandwich every day for 500 days.....yeah, you should probably tell them you're not coming tomorrow.
> If you offer a tie-in supposedly free of charge, without warning that it will end once it serves the party's profit purposes less, then yes
Claiming that you’re entitled to free R&D forever because someone once gave you something of value seems like a great way to ensure that nobody does that again. You got over a decade of development by a skilled team, it’s not exactly beyond the pale that the business climate has changed since then.
Those might be your moral principles, but others reject this nonsense of an obligation to perpetual free labor you think you're entitled to, and don't grant you this moral high ground you assume you have.
There is no ethical obligation. You just want them to release new work under open source licence.
They already had. And for what purpose, do you think?
That's your first mistake. Thinking any company truly gives a shit about ethics when it negatively impacts what it is they actually want to do.
> When backed by a company, there is an ethical obligation to keep up at least maintenance.
You're saying that a commercial company has an ethical obligation to do work for you in future, for free? That doesn't follow from any workable ethical system.
Everyone is quick to assert the rights granted by the license terms, and fast to say the authors should have chosen a better license from the start when the license doesn't fit the current situation.
License terms don't end there. There is a no-warranty clause in almost every open source license, and it is as important as the other parts of the license. There is no promise or guarantee of updates or future versions.
They're not saying they violated the license, they're saying they're assholes. It may not be illegal to say you'll do something for free and then not do it, but it's assholish, especially if you said it to gain customers.
They gave code for free, under open source, but you call them assholes if they do not release more code for free. So who is the asshole here? You or them?
Continued updates is not and never has been a part of FOSS, either implicitly or explicitly, you simply have a misconception. FOSS allows you to change the software. That's what it has always meant.
There's no broken promise though. It's the users who decide+assume, on their own going in, that X project is good for their needs and they'll have access to future versions in a way they're comfortable with. The developers just go along with the decision+assumption, and may choose to break it at any point. They'd only be assholes if they'd explicitly promised the project would unconditionally remain Y for perpetuity, which is a bs promise nobody should listen to, cuz life.
> say you'll do something for free
I think this is where the problem/misunderstanding is. There's no "I will do/release" in OSS unless promised explicitly. Every single release/version is "I released this version. You are free to use it". There is no implied promise for future versions.
Released software is not clawed back. Everyone is free to modify (per the license) and/or use the released versions as long as they please.
I'm noticing this argument a lot these days, and I think it stems from something I can't define - "soft" vs. "hard" or maybe "high-trust" vs "low-trust".
I always warned people that if they "buy" digital things (music, movies) it's only a license, and can be taken away. And people intellectually understand that, but don't think it'll really happen. And then years go by, and it does, and then there's outrage when Amazon changes Roald Dahl's books, or they snatch 1984 right off your kindle after you bought it.
So there's a gap between what is "allowed" and what is "expected". I find this everywhere in polite society.
Was just talking to a new engineer on my team, and he had merged some PRs, but ignored comments from reviewers. And I asked him about that, and he said "Well, they didn't block the PR with Request Changes, so I'm free to merge." So I explained that folks won't necessarily block the PR, even though they expect a response to their questions. Yes, you are allowed to merge the PR, but you'll still want to engage with the review comments.
I view open source the same way. When a company offers open source code to the community, releasing updates regularly, they are indeed allowed to just stop doing that. It's not illegal, and no one is entitled to more effort from them. But at the same time, they would be expected to engage responsibly with the community, knowing that other companies and individuals have integrated their offering, and would be left stranded. I think that's the sentiment here: you're stranding your users, and you know it. Good companies provide a nice offramp when this happens.
Customers are the ones that continue to pay. If they continue to pay they will likely receive maintenance from the devs. If they don't, they are no longer or never have been customers.
It would be interesting to see if there could be a sustainable OSS model where customers are required to pay for the product, and that was the only way to get support for it as well.
Even if the source was always provided (and even if it were GPL), any bug reports/support requests etc. would be limited to paying customers.
I realize there is already a similar model where the product/source itself is always free and then they have a company behind it that charges for support... but in those cases they are almost always providing support/accepting bug reports for free as well. And maybe having the customer pay to receive the product itself in the first place, might motivate the developers to help more than if they were just paying for a support plan or something.
Well, I think this is what SchedMD do with Slurm? GPL code. You can sign up to the bug tracker & open an issue, but if you don't have a support contract they close the issue. And only those customers get advanced notice of CVEs etc. I'd expect nearly everyone who uses it in production has a support contract.
The only meaningful informed decision, but sadly a much less known one (and I think we should talk and insist more on it), is to be wary if you see a CLA. Not all CLAs do this, but most perform copyright assignment, and that's detrimental to the long-term robustness of Open Source.
Having a FOSS license is NOT enough. Ideally the copyright should be distributed across all contributors. That's the only way to make overall consensus a required step before relicensing (except via reimplementation).
Pick FOSS projects without CLAs that perform Copyright Assignment to an untrusted entity (few exceptions apply, e.g. the FSF in the past)
You are correct. Signing a CLA is in effect saying you approve this project doing a rug-pull and becoming closed-source in the future.
Bad advice.
You should be wary always. CLA or not, nothing guarantees that the project you depend on will receive updates, not even if you pay for them and the project is 100% closed source.
What you’re suggesting is perpetuating the myth that open source means updates available forever for free. This is not and never has been the case.
Was I, really? Maybe, if you feel so... but I'd have to say that I had no idea.
What I'm suggesting is that a FOSS project without CLAs and with a healthy variety of contributors does belong to the broad open source community that forms around it, while a FOSS project with such a CLA is just open to a bait-and-switch scheme, because ownership stays in a single hand that can change course at a moment's notice.
Whether the project stops receiving updates or not, is an orthogonal matter.
> Just be honest from the start that your product will eventually abandon its FOSS licence. Then people can make an informed decision.
"An informed decision" is not a black or white category, and it definitely isn't when we're talking about risk pricing for B2B services and goods, like what MinIO largely was for those who paid.
Any business with financial modelling worth their salt knows that very few things which are good and free today will stay that way tomorrow. The leadership of a firm you transact with may or may not state this in words, but there are many other ways to infer the likelihood of this covertly by paying close attention.
And, if you're not paying close attention, it's probably just not that important to your own product. What risks you consider worth taking is a direct extension of how you view the world. The primary selling point of MinIO for many businesses was, "it's cheaper than AWS for our needs". That's probably still true for many businesses, and so there's money to be made at least in the short term.
"Informed decisions" mean you need to have the information.
Like with software development, we often lack the information on which we have to decide architectural, technical or business decisions.
The common solution for that is to embrace this. Defer decisions. Make changing easy once you do receive the information. And build "getting information" into the fabric. We call this "Agile", "Lean", "data driven" and so on.
I think this applies here too.
Very big chance that MinIO team honestly thought that they'd keep it open source but only now gathered enough "information" to make this "informed decision".
Isn't this normal sales anyhow for many products? You attract a customer with unreasonable promises and features, make them sign a deal to integrate, and then issues appear in production that make them realize they will need to invest more.
When you start something (a startup, a FOSS project, damn, even a marriage) you might start with the best intentions and then you can learn/change/lose interest. I find it unreasonable to "demand" clarity "at the start" because there is no such thing.
Turning it around, any company that adopts a FOSS project should be honest and pay for something if it does not accept the idea that at some point the project will change course (which obviously, does not guarantee much, because even if you pay for something they can decide to shut it down).
> I find it unreasonable to "demand" clarity "at the start" because there is no such thing.
Obviously you cannot "demand" stuff, but you can do your due diligence as the person who chooses a technical solution. Some projects have more clarity than others; for example, the Linux Foundation or CNCF are basically companies sharing costs for stuff they all benefit from, like Linux or Prometheus monitoring, and it is highly unlikely they'd do a rug pull.
On the other end of the spectrum there are companies with a "free" version of a paid product and the incentive to make the free product crappier so that people pay for the paid version. These should be avoided.
It's not only less likely to have rug pulls in open source foundations, it's not really possible. Some foundations like CNCF have stood up when companies even tried this: https://www.cncf.io/blog/2025/05/01/protecting-nats-and-the-...
> Just be honest from the start that your product will eventually abandon its FOSS licence.
How does this look? How does one "just" do this? What if the whole thing was an evolution over time?
Almost every FOSS license has a warranty disclaimer. You should have always been taking them seriously. They are there for a reason.
Commercial licenses ditto.
Gratis commercial licenses, yes. Paid commercial software typically have limited warranties.
At this point I don’t trust any company that offers a core free tool with an upsell. Trials or limited access is one thing, but a free forever product that needs active maintaining, be skeptical.
It’s been tough for us at https://pico.sh trying to figure out the right balance between free and paid, and our north star is: how much does it cost us to maintain and support? If the answer scales with the number of users we have, then we charge for it. We also have a litmus test for abuse: if someone can abuse the service, we put it behind a paywall.
I hear this perspective a lot in relation to open source projects.
What it fails to recognize is the reality that life changes. Shit happens. There's no way to predict the future when you start out building an open source project.
(Coming from having contributed to and run several open source projects myself)
> then doing a bait-and-switch
FOSS is not a moral contract. People working for free owe nothing to anyone. You got what's on the tin: the code is as open source once they stop as when they started.
The underlying assumption of your message is that you are somehow entitled to their continued labour which is absolutely not the case.
It's a social contract, which for many people is a moral contract.
Show me a FOSS license where a commitment to indefinite maintenance is promised.
Social contracts are typically unwritten so the license would be the wrong place to look for it.
If it's neither written nor explicitly spoken, then it's not a contract of any kind. It's just an - usually naive - expectation.
A social contract isn't a legal contract to begin with, but even for those "written or explicitly spoken" is not a hard requirement.
A social contract still has to be explicit in some way to be considered such. Otherwise it's just an accepted convention.
It was not just an expectation; when they started, they did a lot to lure many into the ecosystem. When you release something for free, wait for the momentum to build, and then cut it off, that is something else. And worse, they did it in a very short time. Look at Elasticsearch: the same route, but they did not abandon the 7.x release like this.
I know all about ElasticSearch, MongoDB, Redis, etc. Yes, what they did sucks. No, it doesn't make the maintainers bad or anything. It's still on the user to know that anything can happen to that spiffy project they've been using for a while, and so be prepared to migrate at any time.
> Social contracts are typically unwritten
Maybe this is the case, but why is your presumption of entitlement to free labor of others the assumed social contract, the assumed "moral" position, rather than the immoral one?
Why is the assumed social contract that is unwritten not that you can have the free labor we've released to you so far, but we owe you nothing in the future?
There's too much assumption of the premise that "moral" and "social contract" are terms that make the entitled demands of free-loaders the good guys in this debate. Maybe the better "morality" is the selfless workers giving away the product of their labor for free are the actual good guys.
No, social contracts require some sort of mutual benefit.
Where is this mythical social contract found? I stand by my point: it's a software license, not a marriage.
Free users certainly would like it to be a social contract like I would like to be gifted a million dollars. Sadly, I still have to work and can't infinitely rely on the generosity of others.
The social contract is found (and implicitly negotiated) in the interactions between humans, ie: society.
Seems like the interaction that happened here was that they stopped supporting it
Sounds like you've misunderstood this particular social contract. Luckily several people in this thread have corrected you.
Where is the contract to return the shopping cart to the corral?
I always preferred people who didn’t, when I worked in retail. It generates a nice chill task (wandering around the parking lot looking for carts). But if you want to do a favor for the faceless retailer, go for it. Mostly I chuck my cart in the corral to get it out of my way, but this seems more like a morally neutral action to me.
Your analogy doesn't make sense. You are getting a benefit from using the shopping cart, and you bring it back as expected as part of the exchange. You return the cart to where you took it, which is a low-effort commitment entirely proportional to what you got from it.
Free software developers are gifting you something. Expecting indefinite free work is not mutual respect. That's entitlement.
The common is still there. You have the code. Open source is not a perpetual service agreement. It is not indentured servitude to the community.
Stop trying to guilt trip people into giving you free work.
MinIO accepted contributions from people outside the company who did work on it for free, usually because they expected that minio would keep the software open source.
The last open source version of MinIO didn't disappear when they stopped maintaining it.
In this context the social contract would be an expectation that specifically software developers must return the shopping cart for you, but you would never expect the same from cashiers, construction workers, etc.
If the software developer doesn't return your cart, he betrayed the social contract.
This sounds very manipulative and narcissistic.
Expectations are maybe fine, maybe not, but it's funny that people can slap the word "moral" onto their expectation that others are obligated to do free work for them, and it's supposed to make them the good guys here.
Why do you presume to think your definition of morals is shared by everyone? Why is entitlement to others labor the moral position, instead of the immoral position?
> Why is entitlement to others labor the moral position, instead of the immoral position?
You seem to be mistaking me for someone arguing that anyone is entitled to others' labour?
Everyone is keying on forced free labor, but that's not really the proposed solution when an open-source project ends. The fact that it ends is a given, the question then is what to do about all the users. Providing an offramp (migration tools that move to another solution that's similar, or even just suggested other solutions, even including your own commercial offering) before closing up shop seems like a decent thing to do.
it's still a bait and switch, considering they started removing features before the abandonment.
Users can fork it from the point they started removing features. Fully inside the social, moral and spiritual contract of open source.
This isn’t about people working for free.
Nobody sensible is upset when a true FOSS “working for free” person hangs up their boots and calls it quits.
The issue here is that these are commercial products that abuse the FOSS ideals to run a bait and switch.
They look like they are open source in their growth phase then they rug pull when people start to depend on their underlying technology.
The company still exists and still makes money, but they stopped supporting their open source variant to try and push more people to pay, or they changed licenses to be more restrictive.
It has happened over and over, just look at Progress Chef, MongoDB, ElasticSearch, Redis, Terraform, etc.
In this particular case, it's the fault of the "abused" for even seeing themselves as such in the first place. Many times it's not even a "bait-and-switch", but reality hitting. But even if it was, just deal with it and move on.
This is definitely the case because the accusations and supposed social contract seem extremely one-sided towards free riding.
Nobody here is saying they should donate the last version of MinIO to the Apache software foundation under the Apache license. Nobody is arguing for a formalized "end of life" exit strategy for company oriented open source software or implying that such a strategy was promised and then betrayed.
The demand is always "keep doing work for me for free".
I’m not even claiming that people who feel that a social contract has been violated are correct.
I’m saying that the open source rug pull is at this point a known business tactic that is essentially a psychological dark pattern used to exploit.
These companies know they’ll get more traction and sales if they have “open source” on their marketing material. They never actually intend to be open source long term. They expect to move to closed source/source-available business lines as soon as they’ve locked enough people into the ecosystem.
Open source maintainers/organizations like the GNU project are happy and enthusiastic about delivering their projects to “freeloaders.” They have a sincere belief that having source code freedom is beneficial to all involved. Even corporate project sponsors share this belief: Meta is happy to give away React because they know that ultimately makes their own products better and more competitive.
Make hay while the sun shines. Be glad that the project happened.
I’m not even claiming that the “abused” are correct to be upset.
The core of my claim is that it’s a shady business tactic because the purpose of it is to gain all the marketing benefits of open source on the front-end (fast user growth, unpaid contributions from users, “street cred” and positive goodwill), then change to source available/business license after the end of the growth phase when users are locked in.
This is not much different than Southwest Airlines spending decades bragging about “bags fly free” and no fees only to pull the rug and dump their customer goodwill in the toilet.
Totally legal to do so, but it’s also totally legal for me to think that they’re dishonest scumbags.
Except in this case, software companies, in my opinion, have this rug pull plan in place from day 1.
> bait and switch
Is it really though? They're replacing one product with another, and the replacement comes with a free version.
Free without commercial restrictions? With a real open source license?
It's part of the due diligence process for users to decide if they can trust a project.
I use a few simple heuristics:
- Evaluate who contributes regularly to a project. The more diverse this group is, the better. If it's a handful of individuals from 1 company, see other points. This doesn't have to be a show stopper. If it's a bit niche and only a handful of people contribute, you might want to think about what happens when these people stop doing that (like is happening here).
- Look at required contributor agreements and license. A serious red flag here is if a single company can effectively decide to change the license at any point they want to. Major projects like Terraform, Redis, Elasticsearch (repeatedly), etc. have exercised that option. It can be very disruptive when that happens.
- Evaluate whether the license allows you to do what you need to do. Licenses like the AGPLv3 (which min.io used here) can be problematic on that front and come with restrictions that corporate legal departments generally don't like. In the end, choosing to use software is a business decision. Just make sure you understand what you are getting into and that it is OK with your company and compatible with your business goals.
- Permissive licenses (MIT, BSD, Apache, etc.) are popular with larger companies and widely used on GitHub. They facilitate a neutral ground for competitors to collaborate. One aspect you should be aware of is that the very feature that makes them popular also means that contributors can take the software and create modifications under a different license. They generally can't re-license existing releases retroactively. But companies like Elastic have switched Elasticsearch from Apache 2.0 to closed source, and recently to AGPLv3. OpenSearch remains Apache 2.0 and has a thriving community at this point.
- Look at the wider community behind a project. Who runs it; how professional are they (e.g. a foundation), etc. How likely would it be to survive something happening to the main company behind a thing? Companies tend to be less resilient than the open source projects they create over time. They fail, are subject to mergers and acquisitions, can end up in the hands of hedge funds, or big consulting companies like IBM. Many decades old OSS projects have survived multiple such events. Which makes them very safe bets.
None of these points have to be decisive. If you really like a company, you might be willing to overlook their less than ideal licensing or other potential red flags. And some things are not that critical if you have to replace them. This is about assessing risk and balancing the tradeoff of value against that.
Forks are always an option when bad things happen to projects. But that only works if there's a strong community capable of supporting such a fork and a license that makes that practical. The devil is in the details. When Redis announced their license change, the creation of Valkey was a foregone conclusion. There was just no way that wasn't going to happen. I think it only took a few months for the community to get organized around that. That's a good example of a good community.
The other heuristic I would add is one that works for commercial/closed source too: evaluate the project exactly as it is now. Do you still want to use it even if 0% of the roadmap ever materializes?
With open source, the good news is that the version you currently have will always be available to you in perpetuity - including all the bugs, missing features, and security flaws. If you're OK with that, then the community around the thing doesn't even matter.
Easy. If you see open source software maintained by a company, assume they will make it closed source or enshittify the free version. If it's maintained by an individual assume he will get bored with it. Plan accordingly. It may not happen and then you'll be pleasantly surprised
exactly
I don’t feel that way at all. I’ve been maintaining open source storage systems for a few years. I love it. Absolutely love it. I maintain TidesDB, a storage engine. I also have back pain, but that doesn’t mean you can’t do what you love.
If your main motivation for creating/maintaining a popular open source project was to make money, then you don't really understand the open source ethos.
Isn't most (presumably the overwhelming majority) of open source development funded by for-profit companies? That has been the case for quite a while too...
"eating is for the greedy", noted.
A little side project might grow and become a chore / untenable, especially with some from the community expecting handouts without respect.
Case in point: Reticulum. Also, Nolan Lawson has a very good blog post on it.
I don't think your position is reasonable, even if I believe you just want to say that writing open source shouldn't be a main source of income. I think it's perfectly okay to be rewarded for time, skill, effort, and the software itself.
Even if motivation isn't about making money, people still need to eat, and deal with online toxicity.
it's not about the money. for large open source projects you need to allocate time to deal with the community. for someone that just wants to put code out there that is very draining and unpleasant.
most projects won't ever reach that level though.
> it's not about the money
OP sure makes it sound like it's about the money.
> for someone that just wants to put code out there that is very draining and unpleasant.
I never understood this. Then why publish the code in the first place? If the goal is to help others, then the decent thing would be to add documentation and support the people who care enough to use your project. This doesn't mean bending to all their wishes and doing work you don't enjoy, but a certain level of communication and collaboration is core to the idea of open source. Throwing some code over the fence and forgetting about it is only marginally better than releasing proprietary software. I can only interpret this behavior as self-serving for some reason (self-promotion, branding, etc.).
Most open source projects start small. The author writes code that solves some issue they have. Likely, someone else has the same problem and they would find the code useful. So it's published. For a while it's quiet, but one day a second user shows up and they like it. Maybe something isn't clear or they have a suggestion. That's reasonable and supporting one person doesn't take much.
Then the third user shows up. They have an odd edge case and the code isn't working. Fixing it will take some back and forth but it still can be done in a respectable amount of time. All is good. A few more users might show up, but most open source projects will maintain a small audience. Everyone is happy.
Sometimes, projects keep gaining popularity. Slowly at first, but the growth in interest is there. More bug reports, more discussions, more pull requests. The author didn't expect it. What was doable before takes more effort now. Even if the author adds contributors, they are now a project and a community manager. It requires different skills and a certain mindset. Not everyone is cut out for this. They might even handle a small community pretty well, but at a certain size it gets difficult.
The level of communication and collaboration required can only grow. Not everyone can deal with this and that's ok.
All of that sounds reasonable. But it also doesn't need to be a reason to find maintaining OSS very draining or unpleasant, as GP put it.
First of all, when a project grows, its core team of maintainers can also grow, so that the maintenance burden can be shared. This is up to the original author(s) to address if they think their workload is a problem.
Secondly, and coming back to the post that started this thread, the comment was "working for free is not fun", implying that if people paid for their work, then it would be "fun". They didn't complain about the amount of work, but about the fact that they weren't financially compensated for it. These are just skewed incentives to have when working on an open source project. It means that they would prioritize support of paying customers over non-paying users, which indirectly also guides the direction of the project, and eventually leads to enshittification and rugpulls, as in MinIO's case.
The approach that actually makes open source projects thrive is to see it as an opportunity to build a community of people who are passionate about a common topic, and deal with the good and the bad aspects as they come. This does mean that you will have annoying and entitled users, which is the case for any project regardless of its license, but it also means that your project will be improved by the community itself, and that the maintenance burden doesn't have to be entirely on your shoulders. Any successful OSS project in history has been managed this way, while those that aren't remain a footnote in some person's GitHub profile, or are forked by people who actually understand open source.
Honestly, I don't see how you're adding anything here other than inflated expectations and a strange anti-individual pro-mega-corporation ideology.
Fundamentally your post boils down to this: All contributions should be self funded by the person making them.
This might seem acceptable at first glance, but it has some really perverse implications that are far worse than making a product customers are willing to pay for.
To be granted the right to work on an open source project, you must have a day job that isn't affiliated with the project. You must first work eight hours a day to ensure your existence, only after those eight hours are up, are you allowed to work on the open source project.
Every other form of labor is allowed to charge money, even the street cleaner or the elderly janitor topping up his pension. Everyone except the open source developer, and that "everyone" includes people who work on behalf of a company that directly earns money off the open source project, including software developers hired by said company, even if those developers work full time on the project. This means you can run into absurd scenarios like SF salaries being paid to contributors while the maintainer, who might be happy with an average Polish developer salary, doesn't even get the little amount he would need to live a hermit life doing nothing but working on the project. No, that maintainer is expected, I mean obligated, to keep working his day job to then be granted the privilege of working for free.
Somehow the maintainer is the selfish one for wanting his desire to exist be equally as important as other people's desire for the project to exist. The idea that people value the project but not the process that brings about the project sounds deeply suspect.
Your complaint that prioritizing paid features is bad is disturbing, because of the above paragraph. The maintainer is expected to donate his resources for the greater good, but in instances where the maintainer could acquire resources to donate to the public at large through the project itself, he must not do so, because he must acquire those resources through his day job. To be allowed to prioritize the project, he must deprioritize the project.
The strangest part by far though is that if you are a company that produces and sells proprietary software, you're the good guy. As I said in the beginning. This feels like a very anti OSS stance since open source software is only allowed to exist in the shadow of proprietary software that makes money. The argument is always that certain types of software should not exist and that the things that are supposedly being withheld are more important than the things being created.
I personally think this type of subtractive thinking is very insidious. You can have the best intentions in the world and still be branded the devil. Meanwhile the devil can do whatever he wants. There is always this implicit demand that you ought to be an actual devil for the good of everyone.
> I never understood this. Then why publish the code in the first place? If the goal is to help others, then the decent thing would be to add documentation and support the people who care enough to use your project.
Because these things take entirely different skill sets and the latter might be a huge burden for someone who is good at the former.
> support the people who care enough to use your project.
You make that sound like they are helping the developer. The help is going the other way, it seems to me.
The person "throwing" the software has 0 obligation to any potential or actual users of said software. Just the act of making it available, even without any kind of license, is already benevolent. Anything further just continues to add to that benevolence, and nothing can take away from it, not even if they decide to push a malware-ridden update.
There is obligation to a given user only if it's explicitly specified in a license or some other communication to which the user is privy.
Who gave you the right to "decent" things anyway ? Yeah it would be cool, but do you have any lega/social/moral right to it ? Absolutely not.
That collaboration goes both ways, or not as is often the case.
I've been involved with free software for coming on 30 years, have maintained several reasonably popular free software projects and have 100% enjoyed it every time. Developing relationships with the community members and working with them toward a common goal is very rewarding. Not much more to say about this as these are subjective interpretations of our experiences and the experiences could be very different. But it definitely can be fun.
> Working for free is not fun
Open source can be very fun if you genuinely enjoy it.
The problem is dealing with people that have wrong expectations, those need to be ignored.
> Ultimately, dealing with people who don't pay for your product is not fun.
I find it the other way around. I feel a bit embarrassed and stressed out working with people who have paid for a copy of software I've made (which admittedly is rather rare). When they haven't paid, every exchange is about what's best for humanity and the public in general, i.e. they're not supposed to get some special treatment at the expense of anyone else, and nobody has a right to lord over the other party.
People who paid for your software don't really have a right to lord you around. You can choose to be accommodating because they are your customers, but you hold approximately as much weight in the relationship, if not more. They need your work. It's not so much special treatment as it is commissioned work.
People who don't pay are often not really invested. The relationship between more work means more costs doesn't exist for them. That can make them quite a pain in my experience.
I'm probably projecting the idea I have of myself here but if someone says
> every exchange is about what's best for humanity and the public in general
it means they are the kind of individual who deeply cares about things working and about relationships being good and fruitful, and thus, if they made someone pay for something, they think they must listen to them and comply with their requests, because, well, they are a paying customer, the customer is always right, they gave me their money, etc.
There is no tension there.
You can care about the work and your customer while still setting healthy boundaries and accepting that wanting to do good work for them doesn't mean you are beneath them.
Business is fundamentally about partnership: transactional, moneyed partnerships, but partnerships still. It's best when both suppliers and customers are aware of that, and like any partnership, it is structured and can be stopped by either partner. You don't technically owe them more than what's in the contract, and that puts a hard stop which is easy to identify if needed.
Legally speaking, accepting payment makes it very clear that there is a contract under which you have obligations, both explicitly spelled out and implied.
> People who paid for your software don't really have a right to lord you around.
Of course I realize that, rationally, but:
* They might feel highly entitled because they paid.
* I feel more anxious to satisfy than I should probably be feeling. Perhaps even guilty for having taken money. I realize that is not a rational frame of mind to be in; it would probably change if that happened frequently. I am used to two things: There is my voluntary work, which I share freely and without expecting money; and there is my 'job' where I have to bow my head to management and do not get to pursue the work as I see fit, and I devote most of my time to - but I get paid (which also kind of happens in the background, i.e. I never see the person who actually pays me). Selling a product or a service is a weird third kind of experience which I'm not used to.
You can achieve something like this with a pricing strategy.
As DHH and Jason Fried discuss in both the books REWORK, It Doesn’t Have to Be Crazy at Work, and their blog:
> The worst customer is the one you can’t afford to lose. The big whale that can crush your spirit and fray your nerves with just a hint of their dissatisfaction.
(It Doesn’t Have to Be Crazy at Work)
> First, since no one customer could pay us an outsized amount, no one customer’s demands for features or fixes or exceptions would automatically rise to the top. This left us free to make software for ourselves and on behalf of a broad base of customers, not at the behest of any single one. It’s a lot easier to do the right thing for the many when you don’t fear displeasing a few super customers could spell trouble.
(https://signalvnoise.com/svn3/why-we-never-sold-basecamp-by-...)
But this mechanism proposed by DHH and Fried only removes differences amongst paying customers, not between "paying" and "non-paying".
I'd think, however, there are some good ideas in there to manage that difference as well. For example, let all the customers, paying or non-paying, go through the exact same flow for support, features, bugs, etc., so these don't become the distinctive "drivers" of why people would pay (e.g. "you must be a paying customer to get support"). Obviously it depends on the service, but if you have other distinctive features that people would pay for (e.g. a hosted version), that could work out.
I think this is a good point and a true point.
However, I understood GP's mention of "embarrassment" to speak more to their own feelings of responsibility. Which would be more or less decoupled from the pressure that a particular client exerts.
Thanks, you finally settled my dilemma of whether I should have a free version of UXWizz...
Maybe open source developers should stop imagining the things they choose to give away for free as "products". I maintain a small open source library. It doesn't make any money, it will never make any money, people are free to use or not as they choose. If someone doesn't like the way I maintain the repository they are free to fork it.
Agreed, but that's only half of it. The second half is that open source users should stop imagining the things they choose to use for free as "products".
Users of open source often feel entitled, open issues like they would open a support ticket for a product they actually paid for, and don't hesitate to show their frustration.
Of course that's not all the users, but the maintainers only see those (the happy users are usually quiet).
I have open sourced a few libraries under a weak copyleft licence, and every single time, some "people from the community" have been putting a lot of pressure on me, e.g. claiming everywhere that the project was unmaintained/dead (it wasn't, I just was working on it in my free time on a best-effort basis) or that anything not permissive had "strings attached" and was therefore "not viable", etc.
The only times I'm not getting those is when nobody uses my project or when I don't open source it. I have been open sourcing less of my stuff, and it's a net positive: I get less stress, and anyway I wasn't getting anything from the happy, quiet users.
It used to be that annoying noobs were aggressively told to RTFM, their feelings got hurt and they would go away. That probably was too harsh. But then came corporate OSS and with it corporate HR in OSS. Being the BOFH was now bad, gatekeeping was bad. Now everyone feels entitled to the maintainer time and maintainers burn out.
It's a trade off, we made it collectively.
I think this gets complicated when you have larger open source projects where contributors change over time. By taking over stewardship of something that people depend on you should have some obligation to not intentionally fuck those people over even if you are not paid for it.
This is also true to some extent when it's a project you started. I don't think you should, e.g., be able to point to the typical liability disclaimer in free software licenses when you add features that intentionally harm your users.
> By taking over stewardship of something that people depend on you should have some obligation
No. If it's free and open source, all it says is what you can do with the code. There is no obligation towards the users whatsoever.
If you choose to depend on something, it's your problem. The professional way to do it is either to contractually make sure that the project doesn't "fuck you over" (using your words), or to make sure that you are able to fork the project if necessary.
If you base your business on the fact that someone will be working for you, for free, forever, then it's your problem.
> Working for free is not fun.
Why were you doing it then?
Ex mailcow owner here. Can confirm. Hard times.
I loved everyone in the community though. By heart. You were the best.
It's remarkable how many people wrongly assume that open source projects can't be monetized. Business models and open source are orthogonal but compatible concepts. However, if your primary goal while maintaining an open source project is profiting financially from it, your incentives are skewed. If you feel this way, you should also stop using any open source projects, unless you financially support them as well.
Good luck with the back pain.
[dead]
They point to AIStor as alternative.
Other alternatives:
https://github.com/deuxfleurs-org/garage
https://github.com/rustfs/rustfs
https://github.com/seaweedfs/seaweedfs
https://github.com/supabase/storage
https://github.com/scality/cloudserver
https://github.com/ceph/ceph
Among others
I'm the author of another option (https://github.com/mickael-kerjean/filestash), which has an S3 gateway that exposes itself as an S3 server but is just a proxy forwarding your S3 calls onto anything else like SFTP, local FS, FTP, NFS, SMB, IPFS, SharePoint, Azure, a git repo, Dropbox, Google Drive, another S3, ... It's entirely stateless and acts as a translation layer between S3 calls and whatever you have connected at the other end.
Is this some dark pattern or what?
https://imgur.com/a/WN2Mr1z (UK: https://files.catbox.moe/m0lxbr.png)
I clicked settings and this appeared; clicking away hid it, but now I can't see any setting for it.
The nasty way of reading that popup, which was my first way of reading it, was that Filestash sends crash reports and usage data no matter what, that I only get the option of it not being shared with third parties, and that it defaults to sharing. The OK is always consenting to send crash reports and usage data.
I'm not sure if it's actually operating that way, but if it's not, the language should probably be reworded.
There is no telemetry unless you opt in. It was just a very poorly worded screen, which will definitely get updated with your suggestions
update: done => https://github.com/mickael-kerjean/filestash/commit/d3380713...
You're very conscientious, thank you (providing a separate link for UK readers). I hate that it's come to this.
Another alternative that follows this paradigm is rclone
https://rclone.org/commands/rclone_serve/
I was looking at running [versitygw](https://github.com/versity/versitygw) but filestash looks pretty sweet! Any chance you're familiar with Versity and how the S3 proxy may differ?
I did a project with Monash University, who were using Versity to handle multi-tier storage on their 12PB cluster: glacier-like capabilities on tape storage with a robot picking up data from their tape backup, a hot storage tier for better access performance, lifecycle rules to move data from hot storage to cold, etc. The underlying storage was all Versity, and they had Filestash working on top. Effectively, we did some custom plugins so data could be recalled from their self-hosted glacier while using it through the frontend, so their users had a Dropbox-like experience. Depending on what you want to do, they can be very much complementary.
Monash University is also a Ceph Foundation member.
They've been active in the Ceph community for a long time.
I don't know any specifics, but I'm pretty sure their Ceph installation is pretty big and used to support critical data.
Didn't know about filestash yet. Kudos, this framework seems to be really well implemented, I really like the plugin and interface based architecture.
from my experiences
rustfs have promise, supports a lot of features, even allows to bring your own secret/access keys (if you want to migrate without changing creds on clients) but it's very much still in-development; and they have already prepared for bait-and-switch in code ( https://github.com/rustfs/rustfs/blob/main/rustfs/src/licens... )
Ceph is closest to the actual S3 feature set, but it's a lot to set up. It pretty much wants a few local servers; you can replicate to another site, but each site on its own is pretty latency-sensitive between storage servers. It also offers many other features besides, as S3 is just built on top of their object store, which can also be used for VM storage or even a FUSE-compatible FS.
Garage is great, but it is very much "just to store stuff". It lacks features on both the S3 side (S3 has a bunch of advanced ACLs many of the alternatives don't support, and stuff for HTTP headers too) and the management side (things like allowing an access key to reach only a certain path in a bucket are impossible, for example). Also, the clustering feature is very WAN-aware, unlike Ceph, where you pretty much have to have all your storage servers in the same rack if you want a single site to have replication.
> [rustfs] have already prepared for bait-and-switch in code
There's also a CLA with full copyright assignment, so yeah, I'd steer clear of that one: https://github.com/rustfs/rustfs/blob/main/CLA.md
Not sure what you mean about Ceph wanting to be in a single rack.
I run Ceph at work. We have some clusters spanning 20 racks in a network fabric that has over 100 racks.
In a typical Leaf-Spine network architecture, you can easily have sub 100 microsecond network latency which would translate to sub millisecond Ceph latencies.
We have one site that is Leaf-Spine-SuperSpine, and the difference in network latency is barely measurable between machines in the same network pod and between different network pods.
I don't think this is a problem. The CLA is there to avoid future legal disputes. It prevents contributors from initiating IP lawsuits later on, which could cause significantly more trouble for the project.
Hypothetically, isn't one of the "legal disputes" that's avoided the case where the project relicenses to a closed-source model without compensating contributors, and the contributors can't sue because the copyright of their contributions no longer belongs to them?
Apart from Minio, we tried Garage and Ceph. I think there's definitely a need for something that interfaces using S3 API but is just a simple file system underneath, for local, testing and small scale deployments. Not sure that exists? Of course a lot of stuff is being bolted onto S3 and it's not as simple as it initially claimed to be.
SeaweedFS's new `weed mini` command[0] does a great job at that. Previously our most flakey tests in CI were due to MinIO sometimes not starting up properly, but with `weed mini` that was completely resolved.
[0]: https://github.com/seaweedfs/seaweedfs/wiki/Quick-Start-with...
Minio started like that but they migrated away from it. It's just hard to keep it up once you start implementing advanced S3 features (versioning/legal hold, metadata etc.) and storage features (replication/erasure coding)
> for local, testing and small scale deployments
Yes I'm looking for exactly that and unfortunately haven't found a solution.
Tried garage, but they require running a proxy for CORS, which makes signed browser uploads a practical impossibility for the development machine. I had no idea that such a simple popular scenario is in fact too exotic.
What about s3 stored in SQLite? https://github.com/seddonm1/s3ite
This was written to store many thousands of images for machine learning
From what I can gather, S3Proxy[1] can do this, but relies on a Java library that's no longer maintained[2], so not really much better.
I too think it would be great with a simple project that can serve S3 from filesystem, for local deployments that doesn't need balls to the walls performance.
[1]: https://github.com/gaul/s3proxy
[2]: https://jclouds.apache.org/
I've been looking into rclone which can serve s3 in a basic way https://rclone.org/commands/rclone_serve_s3/
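For reference, a minimal invocation looks roughly like this (the directory path and keys are placeholders; rclone's S3 server is marked experimental, so check the docs for your rclone version):

```shell
# Serve the local directory /srv/s3data over the S3 protocol on port 8080.
# --auth-key takes "ACCESS_KEY,SECRET_KEY"; omit it for anonymous access.
rclone serve s3 /srv/s3data \
  --addr :8080 \
  --auth-key myaccesskey,mysecretkey
```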
Try versitygw.
I’m considering it for a production deployment, too. There’s much to be said for a system that doesn’t lock your data in to its custom storage format.
The problem with that approach is that S3 object names are not compatible with POSIX file names. They can contain characters that are not valid on a filesystem, or have special meaning (like "/")
A simple litmus test I like to do with S3 storages is to create two objects, one called "foo" and one called "foo/bar". If the S3 server uses a filesystem as its backend, only the first of those can be created
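You can see the collision without any S3 server involved. A minimal sketch (the helper and its name are made up for illustration) of a naive key-to-path mapping:

```python
import os
import tempfile

def put_object_fs(root, key, data):
    """Hypothetical naive backend: map an S3 key directly onto a filesystem path."""
    path = os.path.join(root, key)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as f:
        f.write(data)

root = tempfile.mkdtemp()
put_object_fs(root, "foo", b"first")  # creates a regular file named "foo"
try:
    # "foo/bar" needs "foo" to be a directory, but it's already a file
    put_object_fs(root, "foo/bar", b"second")
    ok = True
except (FileExistsError, NotADirectoryError):
    ok = False
print(ok)  # False: the second PUT cannot succeed on a plain filesystem
```

Real S3 has a flat keyspace where "/" is just another byte in the key, so both objects coexist fine there.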
For testing, consider https://github.com/localstack/localstack
WAY too much. I just need a tiny service that translates common S3 ops into filesystem ops and back.
Would be cool to understand the tradeoffs of the various block storage implementations.
I'm using seaweedfs for a single-machine S3 compatible storage, and it works great. Though I'm missing out on a lot of administrative nice-to-haves (like, easy access controls and a good understanding of capacity vs usage, error rates and so on... this could be a pebcak issue though).
Ceph I have also used, and it seems to care a lot more about being distributed. If you have fewer than 4 hosts for storage, it feels like it scoffs at you when setting up. I was also unable to get it to perform amazingly, though to be fair I was doing it via K8S/Rook atop the Flannel CNI, which is an easy-to-use CNI for toy deployments, not performance-critical systems - so that could be my bad. I would trust a Ceph deployment with data integrity though; it just gives me that feel of "whoever worked on this really understood distributed systems"... but I can't put that feeling into any concrete data.
That's a great list. I've just opened a pull request on the minio repository to add these to the list of alternatives.
https://github.com/minio/minio/pull/21746
I believe the MinIO developers are aware of the alternatives; listing only their own commercial solution as an alternative might be a deliberate decision. But you can try to get the PR merged, there's nothing wrong with it
The mentioned AIStor "alternative" is on the min.io website. It seems like a re-brand. I doubt they will link to competing products.
While I do approve of that MR, doing it is ironic considering the topic was "MinIO repository is no longer maintained"
Let's hope the editor has second thoughts on some parts
I'm well aware of the irony surrounding minio, adding a little bit more doesn't hurt :P
Wrote a bit about differences between rustfs and garage here https://buttondown.com/justincormack/archive/ignore-previous... - since then rustfs fixed the issue I found. They are for very different use cases. Rustfs really is close to a minio rewrite.
there is one thing that worries me about rustfs: https://github.com/rustfs/rustfs/blob/main/rustfs/src/licens...
I expect rugpull in the future
hah, good on them.
nice catch.
I don't think this is a problem. The CLA is there to avoid future legal disputes. It prevents contributors from initiating IP lawsuits later on, which could cause significantly more trouble for the project.
The only possible "trouble" would be because the project does a rug-pull to closed source. Be honest about what you're saying.
Had a great experience with Garage for an easy-to-set-up distributed S3 cluster for home lab use (connecting a bunch of labs run by friends in a shared cluster via tailscale/headscale). They offer an "eventual consistency" mode (the setting is consistency_mode = dangerous, so perhaps don't use it for your 7-nines SaaS offering) where your local S3 node will happily accept (and quickly process) requests and then replicate them to other servers later.
Overall great philosophy (targeted at self-hosting / independence) and clear, easy maintenance: not doing anything fancy, an easy-to-understand architecture, and good design / operation instructions.
From my experience, Garage is the best replacement for MinIO *in a dev environment*. It provides a pretty good CLI that makes automated setup easier than MinIO. However, in a production environment, I guess Ceph is still the best because of how prominent it is.
Garage doesn't support CORS which makes it impossible to use for development for scenarios where web site visitors PUT files to pre-signed URLs.
Yep I know, I had to build a proxy for s3 which supports custom pre-signed URLs. In my case it was worth it because my team needs to verify uploaded content for security reasons. But for most cases I guess that you can't really bother deploying a proxy just for CORS.
https://github.com/beep-industries/content
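For anyone unfamiliar with the flow being discussed: a pre-signed PUT URL is just a SigV4 query-string signature that your server computes and hands to the browser, which then PUTs directly to the object store — and that cross-origin PUT is exactly where the store's CORS support becomes mandatory. Below is a stdlib-only sketch of the signing step; the endpoint, region, bucket, and credentials are made-up placeholders, and in practice you would let an SDK such as boto3 generate this for you.

```python
import hashlib
import hmac
import urllib.parse
from datetime import datetime, timezone

def presign_put(bucket: str, key: str, access_key: str, secret_key: str,
                endpoint: str = "http://localhost:3900",
                region: str = "garage", expires: int = 900) -> str:
    """Illustrative AWS SigV4 query-string pre-signed PUT URL (not a drop-in client)."""
    now = datetime.now(timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/s3/aws4_request"
    host = urllib.parse.urlparse(endpoint).netloc
    path = f"/{bucket}/{key}"

    query = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    canonical_query = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
        for k, v in sorted(query.items())
    )
    # method, path, query string, canonical headers (host only), signed headers, payload hash
    canonical_request = "\n".join([
        "PUT", path, canonical_query,
        f"host:{host}\n", "host", "UNSIGNED-PAYLOAD",
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])

    def _hmac(k: bytes, msg: str) -> bytes:
        return hmac.new(k, msg.encode(), hashlib.sha256).digest()

    # standard SigV4 key-derivation chain: date -> region -> service -> request
    signing_key = _hmac(_hmac(_hmac(_hmac(
        ("AWS4" + secret_key).encode(), datestamp), region), "s3"), "aws4_request")
    signature = hmac.new(signing_key, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()
    return f"{endpoint}{path}?{canonical_query}&X-Amz-Signature={signature}"

print(presign_put("photos", "cat.jpg", "EXAMPLEKEY", "examplesecret"))
```

The browser then does `fetch(url, {method: "PUT", body: file})`; that request (plus its preflight OPTIONS) is what the store's CORS configuration must allow.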
Both RustFS and SeaweedFS are pretty good based on my light testing.
We are using RustFS for our simple use cases as a replacement for MinIO. Very slim footprint and very fast.
To be clear AIStor is built by the MinIO team so this is just an upsell.
First off, I don't think there is anything wrong with MinIO closing down its open source. There are simply too many people globally who use open source without being willing to pay for it. I started testing various alternatives a few months ago, and I still believe RustFS will emerge as the winner after MinIO's exit. I evaluated Garage, SeaweedFS, Ceph, and RustFS. Here are my conclusions:
1. RustFS and SeaweedFS are the fastest in the object storage field.
2. The installation for Garage and SeaweedFS is more complex compared to RustFS.
3. The RustFS console is the most convenient and user-friendly.
4. Ceph is too difficult to use; I wouldn't dare deploy it without a deep understanding of the source code.
Although many people criticize RustFS, suggesting its CLA might be "bait," I don't think such a requirement is excessive for open source software, as it helps mitigate their own legal risks.
Furthermore, Milvus gave RustFS a very high official evaluation. Based on technical benchmarks and other aspects, I believe RustFS will ultimately win.
https://milvus.io/blog/evaluating-rustfs-as-a-viable-s3-comp...
GPL for open source and commercial license for the enterprise lawyers.
Unfortunately, a majority seems to hate GPL these days even though it prevents most of the worst corporate behaviors.
Minio was AGPL, which was a perfectly fine tradeoff IMO. But apparently that wasn't good enough.
AGPL doesn't help when you want to kill your free offering to move people onto the paid tier. But quite frankly, that isn't a problem GPL is meant to solve.
Huge thanks for your contributions to the open-source world! Milvus is an incredibly cool product and a staple in my daily stack.
It’s been amazing to watch Milvus grow from its roots in China to gaining global trust and major VC backing. You've really nailed the commercialization, open-source governance, and international credibility aspects.
Regarding RustFS, I think that—much like Milvus in the early days—it just needs time to earn global trust. With storage and databases, trust is built over years; users are naturally hesitant to do large-scale replacements without that long track record.
Haha, maybe Milvus should just acquire RustFS? That would certainly make us feel a lot safer using it!
Garage installation is easy.
1. Download or build the single binary into your system (install like `/usr/local/sbin/garage`)
2. Create a file `/etc/garage.toml`:
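For step 2, a single-node sketch. Field names follow Garage's documented configuration; the paths, ports, and secret are placeholders (generate a real secret with `openssl rand -hex 32`), and option names can differ between Garage versions:

```toml
# /etc/garage.toml -- illustrative single-node setup
metadata_dir = "/var/lib/garage/meta"
data_dir = "/var/lib/garage/data"
db_engine = "sqlite"

replication_factor = 1

rpc_bind_addr = "[::]:3901"
rpc_public_addr = "127.0.0.1:3901"
rpc_secret = "0000000000000000000000000000000000000000000000000000000000000000"

[s3_api]
s3_region = "garage"
api_bind_addr = "[::]:3900"
root_domain = ".s3.garage.localhost"
```

After the server starts, you still assign a layout once (per Garage's quick-start; exact flags may vary by version): `garage status` to get the node id, then `garage layout assign -z dc1 -c 1G <node id>` and `garage layout apply --version 1`.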
3. Start it with `garage server`, or just have an AI write an init script or unit file for you. (You can `pkill -f /usr/local/sbin/garage` to shut it down.)

Also, NVIDIA has a phenomenal S3-compatible system that nobody seems to know about, named AIStore: https://aistore.nvidia.com/ It's a bit more complex, but very powerful and fast (faster than MinIO; slightly less space efficient than MinIO because it maintains a complete copy of each object on a single node, so the object doesn't have to be reconstituted as it would on MinIO). It can also act as a proxy in front of other S3 systems, including AWS S3 or GCS, and offer a single unified namespace to your clients.
IMO, Seaweedfs is still too much of a personal project, it's fast for small files, but keep good and frequent backups in a different system if you choose it.
I personally will avoid RustFS. Even if it was totally amazing, the Contributor License Agreement makes me feel like we're getting into the whole Minio rug-pull situation all over again, and you know what they say about doing the same thing and expecting a different result..
Garage is indeed an excellent project, but I think it has a few drawbacks compared to the alternatives:

- Metadata backend: It relies on SQLite. I have concerns about how well this scales or handles high concurrency with massive datasets.
- Admin UI: The console is still not very user-friendly/polished.
- Deployment complexity: You are required to configure a "layout" (regions/zones) to get started, whereas MinIO doesn't force this concept on you for simple setups.
- Design philosophy: While Garage is fantastic for edge/geo-distributed use cases, I feel its overall design still lags behind MinIO and RustFS. There is a higher barrier to entry because you have to learn specific Garage concepts just to get it running.
If you are on Hetzner, I created a ready to use Terraform module that spins up a single node GarageFs server https://pellepelster.github.io/solidblocks/hetzner/web-s3-do...
As someone about to learn the basics of Terraform, with an interest in geo-distributed storage, and with some Hetzner credit sitting idle... I came across the perfect comment this morning.
I might extend this with ZeroFS too.
> RustFS and SeaweedFS are the fastest in the object storage field.
I'm not sure SeaweedFS is comparable. It's based on Facebook's Haystack design, which addresses a very specific use case: minimizing IO, in particular the metadata lookup, to access individual objects. This leads to many trade-offs. For instance, its main unit of operation is the volume: data is appended to a volume, erasure coding is done per volume, updates happen at the volume level, etc.
On the other hand, a general object store goes beyond needle-in-a-haystack type of operations. In particular, people use an object store as the backend for analytics, which requires high-throughput scans.
> Although many people criticize RustFS, suggesting its CLA might be "bait," I don't think such a requirement is excessive for open source software, as it helps mitigate their own legal risks.
What legal risks does it help mitigate?
RustFS has rug-pull written all over it. You can bookmark this comment for the future. 100% guaranteed it will happen. Only question is when.
> 4. Ceph [...]
MinIO was more for the "mini" use case (or more like "anything not large scale", with a very broad definition of large scale). Here "works out of the box" is paramount.
And Ceph is more for the maxi use case. Here in-depth fine-tuning, highly complex setups, distributed setups and the like are the norm. Hence the out-of-the-box small-scale setup experience is barely relevant.
So they really don't fill the same space, even though their functionality overlaps.
> too many people globally who use open source without being willing to pay for it.
That's an odd take... open source is a software licensing model, not a business model.
Unless you have some knowledge that I don't, MinIO never asked for nor accepted donations from users of their open source offerings. All of their funding came from sales and support of their enterprise products, not their open source one. They are shutting down their own contributions to the open source code in order to focus on their closed enterprise products, not due to lack of community engagement or (as already mentioned) community funding.
> That's an odd take... open source is a software licensing model, not a business model.
Yes, open-source is a software license model, not a business model. It is also not a software support model.
This change is them essentially declaring that MinIO is EOL and will not have any further updates.
For comparison, Windows 10, paid software released in 2015 (the same year as the first MinIO release), is already EOL.
I respectfully disagree with the notion that open source is strictly a licensing model and not a business model. For an open-source project to achieve long-term reliability and growth, it must be backed by a sustainable commercial engine. History has shown that simply donating a project to a foundation (like Apache or CNCF) isn't a silver bullet; many projects under those umbrellas still struggle to find the resources they need to thrive.

The ideal path—and the best outcome for users globally—is a "middle way" where:

- The software remains open and maintained.
- The core team has a viable way to survive and fund development.
- Open code ensures security, transparency, and a trustworthy software supply chain.

However, the way MinIO has handled this transition is, in my view, the most disappointing approach possible. It creates a significant trust gap. When a company pivots this way, users are left wondering about the integrity of the code—whether it's the potential for "backdoors" or undisclosed data transmission. I hope to see other open-source object storage projects mature quickly to provide a truly transparent and reliable alternative.
> For an open-source project to achieve long-term reliability and growth, it must be backed by a sustainable commercial engine
You mean like Linux, Python, PostgreSQL, Apache HTTP Server, Node.js, MariaDB, GNU Bash, GNU Coreutils, SQLite, VLC, LibreOffice, OpenSSH?
Actually, Linux reinforces my point. It isn't powered solely by volunteers; it thrives because the world's largest corporations (Intel, Google, Red Hat, etc.) foot the bill. The Linux Foundation is massively funded by corporate members, and most kernel contributors are paid engineers. Without that commercial engine, Linux would not have the dominance it does today. Even OpenAI had to pivot away from its original non-profit, open principles to survive and scale.

There is nothing wrong with making money while sustaining open source. The problem is MinIO's specific approach. Instead of a symbiotic relationship, they treated the community as free QA testers and marketing pawns, only to pull up the ladder later. That's a "bait-and-switch," not a sustainable business model.
> Even OpenAI had to pivot away from its original non-profit, open principles to survive and scale.
Uh, no, OpenAI didn't pivot from being open in order to survive.
They survived for 7 years before ChatGPT was released. When it was, they pivoted the _instant_ it became obvious that AI was about to be a trillion-dollar industry and they weren't going to miss the boat of commercialization. Yachts don't buy themselves, you know!
> Actually, Linux reinforces my point.
Not many open source projects are Linux-sized. Linux is worth billions of dollars and enabled Google and Redhat to exist, so they can give back millions, without compulsion, and in a self-interested way.
Random library maintainer dude should not expect their (very replaceable) library to print money. The cool open source tool/utility could be a 10-person company, maybe 100 tops, but people see dollar-signs in their eyes based on number of installs/GitHub stars, and get VC funding to take a swing for billions in ARR.
I remember when (small scale) open source was about scratching your own itch without making it a startup via user-coercion. It feels like the 'Open source as a growth-hack" has metastasized into "Now that they are hooked, entire user base is morally obligated to give me money". I would have no issue if a project included this before it gets popular - but that may prevent popular adoption. So it rubs me the wrong way when folk want to have their cake and eat it.
I want to like RustFS, but it feels like there's so much marketing attached to the software it turns me off a little. Even a little rocket emoji and benchmark in the Github about page. Sometimes less is more. Look at the ty Github home page - 1 benchmark on the main page, the description is just "An extremely fast Python type checker and language server, written in Rust.".
Haha, +1. I really like RustFS as a product, but the marketing fluff and documentation put me off too. It reads like non-native speakers relying heavily on AI, which explains a lot. Honestly, they really need to bring in some native English speakers to overhaul the docs. The current vibe just doesn't land well with a US audience.
Gosh, Ceph what a pita. Never again LOL. I wouldn't even want an LLM to suffer working on it.
Haha, totally get you! I think if you forced an LLM to manage a large-scale Ceph cluster, it would probably start hallucinating about retirement.
I successfully migrated from MinIO to Ceph, which I highly recommend. Along the way, I tested SeaweedFS, which looked promising. However, I ran into a strange bug, and after diagnosing it with the help of Claude, I realized the codebase was vibe-coded and riddled with a staggering number of structural errors. In my opinion, SeaweedFS should absolutely not be used for anything beyond testing — otherwise you're almost certain to lose data.
Seaweed has been around for a long time. I think you just discovered what legacy codebases look like.
Laughed reading this. We pretend Claude can't code because we don't like to acknowledge what code always turns out looking like, which is exactly what it's trained on
Ceph is the OG. Every now and then different attempts to replace it pop up, work well for some use cases, and then realise how hard the actual problem they are trying to solve is. Ceph always wins in the end.
Ceph solves the distributed consistent block storage problem very well. But I hardly ever need that problem solved, it's way more often that I need a distributed highly available blob storage, and Ceph makes the wrong tradeoffs for this task.
I just bit the bullet last week and figured we are going to migrate our self-hosted MinIO servers to Ceph instead. So far a 3-server Ceph cluster has been set up with cephadm, and the last MinIO server is currently mirroring its ~120TB of buckets to the new cluster at a whopping 420MB/s; it should finish any day now. The complexity of Ceph and its cluster nature is of course a bit scary at first compared to MinIO (a single Go binary with minimal configuration), but after learning the basics it should be smooth sailing. What's neat is that Ceph allows expanding clusters: just throw more storage servers at it, in theory at least; not sure where the ceiling is for that yet. Shame MinIO went that way, it had a really neat console before they cut it out. I also contemplated le garage, but it seems Elasticsearch is not happy with that S3 solution for snapshots, so Ceph it is.
It's complex, but Ceph's storage and consensus layer is battle-tested and a much more solid foundation for serious use. Just make sure that your nodes don't run full!
Make sure you have solid Linux system monitoring in general. About 50% of running Ceph successfully at scale is just basic, solid system monitoring and alerting.
This line of advice basically comes down to: have a competent infrastructure team. Sometimes you gotta move fast, but this is where having someone on infrastructure that knows what they are doing comes in and pays dividends. No competent infra guy is going to NOT set up linux monitoring. But you see some companies hit 100 people and get revenue then this type of thing blows up in their face.
The thing that strikes me about this thread is how many people are scrambling to evaluate alternatives they've never tested in production. That's the real risk with infrastructure dependencies — it's not that they might go closed-source, it's that the switching cost is so high that you don't maintain a credible plan B.
With application dependencies you can swap a library in a day. With object storage that's holding your data, you're looking at a migration measured in weeks or months. The S3 API compatibility helps, but anyone who's actually migrated between S3-compatible stores knows there are subtle behavioral differences that only surface under load.
I wonder how many MinIO deployments had a documented migration runbook before today.
See https://news.ycombinator.com/item?id=46136023 - MinIO is now in maintenance-mode
It was pretty clear they pivoted to their closed source repo back then.
Maintenance-mode is very different from "THIS REPOSITORY IS NO LONGER MAINTAINED".
Yes, the difference is the latter means "it is no longer maintained", and the former is "they claim to be maintaining it but everyone knows it's not really being maintained".
in theory "maintenance mode" should mean that they still deal with security issues and "no longer maintained" that they don't even do that anymore...
unless a security issue is reported it does feel very much the same...
"Critical security fixes may be evaluated on a case-by-case basis" didn't exactly give much confidence that they'd even be doing that.
Given the context is a for-profit company who is moving away from FOSS, I'm not sure the distinction matters so much, everyone understands what the first one means already.
We all saw that coming. For quite some time they have been anything but transparent or open: vigorously removing even mild criticism of their decisions from GitHub with no further explanation, locking comments, etc. No one who's been following the development and has been somewhat reliant on min.io is surprised. Personally, the moment I saw the "maintenance" mode, I rushed to switch to Garage. I have a few features ready to pack into a PR, but I haven't had time to get to that. I should probably prioritize it.
Why should these guys bother with people who won't pay for their offering? The community is not skilled enough to contribute to this type of project. Honestly, most serious open source is industry-backed and solves very challenging distributed systems problems. A run-of-the-mill web dev doesn't know these things, I am sorry to say.
If you are struggling with observability solutions which require object storage for production setups after such news (i.e. Thanos, Loki, Mimir, Tempo), then try alternatives without this requirement, such as VictoriaMetrics, VictoriaLogs and VictoriaTraces. They scale to petabytes of data on regular block storage, and they provide higher performance and availability than systems that depend on manually managed object storage such as MinIO.
[dead]
AIstor. They just slap the word AI anywhere these days.
In French the adjective follows the name so AI is actually IA.
On AWS S3, you have a storage level called "Infrequent Access", shortened IA everywhere.
A few weeks ago I had to spend way too much time explaining to a customer that, no, we weren't planning on feeding their data to an AI when, on my reports, I was talking about relying on S3 IA to reduce costs...
Is that an I (indigo) or l (lama)? I thought it was L, lol
COSS companies want it both ways. Free community contributions and bug reports during the growth phase. Then closed source once they've captured enough users. The code you run today belongs to you. The roadmap belongs to their investors.
Duolingo used unpaid labour to build its resources. Now it charges money for premium
This is timely news for me - I was just standing up some Loki infrastructure yesterday & following Grafana's own guides on object storage (they recommend minio for non-cloud setups). I wasn't previously experienced with minio & would have completely missed the maintenance status if it wasn't for Checkov nagging me about using latest tags for images & having to go searching for release versions.
So far I've switched to RustFS, which seems like a very nice project, though <24hrs is hardly an evaluation period.
Why do you need non-trivial dependency on the object storage for the database for logs in the first place?
Object storage has advantages over regular block storage if it is managed by cloud, and if it has a proven record on durability, availability and "infinite" storage space at low costs, such as S3 at Amazon or GCS at Google.
Object storage has zero advantages over regular block storage if you run it yourself:
- It doesn't provide "infinite" storage space: you need to regularly monitor the object storage and manually add new physical storage.
- It doesn't provide high durability and availability. It has lower availability compared to regular locally attached block storage because of the complicated coordination of the object storage state between storage nodes over the network. It usually has lower durability than object storage provided by a cloud host: if some data is corrupted or lost on the underlying hardware, there are low chances it is properly and automatically recovered by a DIY object storage.
- It is more expensive because of higher overhead (and, probably, half-baked replication) compared to locally attached block storage.
- It is slower than locally attached block storage because of much higher network latency. The latency difference is roughly 1000x: ~100ms for object storage vs ~0.1ms for local block storage.
- It is much harder to configure, operate and troubleshoot than block storage.
So I'd recommend taking a look at other databases for logs, which do not require object storage for large-scale production setups. For example, VictoriaLogs. It scales to hundreds of terabytes of logs on a single node, and it can scale to petabytes of logs in cluster mode. Both modes are open source and free to use.
Disclaimer: I'm the core developer of VictoriaLogs.
> Object storage has zero advantages over regular block storage if you run it on yourself
Worth adding, this depends on what's using your block storage / object storage. For Loki specifically, there are known edge-cases with large object counts on block storage (this isn't related to object size or disk space) - this obviously isn't something I've encountered & I probably never will, but they are documented.
For an application I had written myself, I can see clearly that block storage is going to trump object storage for all self-hosted usecases, but for 3P software I'm merely administering, I have less control over its quirks & those pros -vs- cons are much less clear cut.
Initially I was just following recommendations blindly. I've never run Loki off-cloud before, so my typical approach to learning a system is to start with defaults and tweak/add/remove components as I learn it. Grafana's docs use object storage everywhere, so it's a lot easier when you're aligned: you can rely more heavily on config parity.
While I try to avoid complexity, idiomatic approaches have their advantages; it's always a trade-off.
That said, my first instinct when I saw MinIO's status was to use file storage, but the RustFS setup has been pretty painless so far. I might still remove it, we'll see.
What is the best alternative that can run as a Docker image that mimics AWS S3 to enable local only testing without any external cloud connections?
For me, my only use for MinIO was to simulate AWS S3 in docker compose so that my applications were fully testable locally. I never used it in production or as middleware. It has not sat well with me to use alternative strategies like Ruby on Rails' local file storage for testing, as it behaves differently than when the app is deployed. And using actual cloud services creates its own hurdles (credential sharing among developers) and gets rid of the "docker magic" of being able to run a single setup script and be up and running to change code and run the full test suite.
My use case is any developer on the team can do a Git clone and run the set up script and then be fully up and running within minutes locally without any special configuration on their part.
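That workflow can be sketched with a minimal Compose file. This example assumes LocalStack (mentioned elsewhere in this thread); the image tag and the 4566 edge port follow its documentation, but treat the values as illustrative:

```yaml
# docker-compose.yml -- illustrative sketch; check LocalStack's docs for current options
services:
  s3:
    image: localstack/localstack:latest
    environment:
      - SERVICES=s3        # start only the S3 emulator
    ports:
      - "4566:4566"        # LocalStack's single edge port for all AWS APIs
```

Developers then point their SDK or CLI at the emulator with any dummy credentials, e.g. `aws --endpoint-url http://localhost:4566 s3 mb s3://test-bucket`.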
Have you heard of LocalStack’s AWS emulator? [1] It’s runnable within a docker container and has a high fidelity S3 service.
Disclosure, I’m a SWE at LocalStack.
[1] https://github.com/localstack/localstack
S3 is evolving rapidly. While sticking with the old MinIO image might work for the immediate short term, I believe it is not a viable strategy for the long haul.
New standards and features are emerging constantly—such as S3 over RDMA, S3 Append, cold storage tiers, and S3 vector buckets.
In at most two or three years, relying on an unmaintained version of MinIO will likely become a liability that drags down your project as your production environment evolves. Finding an actively maintained open-source alternative is a must.
Hence my question.
For me it's RustFS: https://rustfs.com/
Anyone interested in keeping access should fork this open source repository now and make a local archived copy. That way when this organization deletes this repository there can still be access to this open source code.
In the Ruby on Rails space, we had this happen recently with the prawn_plus Gem where the original author yanked all published copies and deleted the GitHub repository.
On GitHub, when a private repo is deleted forks are deleted. But for public repos, the policy is different. See https://docs.github.com/en/pull-requests/collaborating-with-....
This is the latest step in a sunset trap set for those of us who use MinIO for local testing but not production use.
Has anybody actually tried AIStor ? Is it possible to migrate/upgrade from a minio installation to AIStor ? It seems to be very simple, just change the binary from minio to aistor: https://docs.min.io/enterprise/aistor-object-store/upgrade-a...
Is AIStor Free really free like they claim here https://www.min.io/pricing, i.e
I could use that if it didn't have hidden costs or obligations. Fool me once ...
This is becoming a predictable pattern in infrastructure tooling: build community on open source, get adoption, then pivot to closed source once you need revenue. Elastic, Redis, Terraform, now MinIO.
The frustrating part isn't the business decision itself. It's that every pivot creates a massive migration burden on teams who bet on the "open" part. When your object storage layer suddenly needs replacing, that's not a weekend project. You're looking at weeks of testing, data migration, updating every service that touches S3-compatible APIs, and hoping nothing breaks in production.
For anyone evaluating infrastructure dependencies right now: the license matters, but the funding model matters more. Single-vendor open source projects backed by VC are essentially on a countdown timer. Either they find a sustainable model that doesn't require closing the source, or they eventually pull the rug.
Community-governed projects under foundations (Ceph under Linux Foundation, for example) tend to be more durable even if they're harder to set up initially. The operational complexity of Ceph vs MinIO was always the tradeoff - but at least you're not going to wake up one morning to a "THIS REPOSITORY IS NO LONGER MAINTAINED" commit.
I guess we need a new type of Open Source license. One that is very permissive except if you are a company with a much larger revenue than the company funding the open source project, then you have to pay.
While I loathe the moves to closed source, you also can't fault them: the hyperscalers just outcompete them with their own software.
Various projects have invented licenses like that. Those licenses aren't free, so the FOSS crowd won't like them. Rather than inventing a new one, you're probably better grabbing whatever the other not-free-but-close-enough projects are doing. Legal teams don't like bespoke licenses very much which hurts adoption.
An alternative I've seen is "the code is proprietary for 1 year after it was written, after that it's MIT/GPL/etc.", which keeps the code entirely free(ish) but still prevents many businesses from getting rich off your product and leaving you in the dust.
You could also go for AGPL, which is to companies like Google like garlic is to vampires. That would hurt any open core style business you might want to build out of your project though, unless you don't accept external contributions.
That would be interesting to figure out. Say you are a single guy in some cheaper cost-of-living region, and then some SV startup gets, say, a million in funding. Surely that startup should give at least a couple thousand to your sole proprietorship if they use your stuff? Now, how you figure out these thresholds gets complex.
Server Side Public License? Since it demands any company offering the project as a paid product/service to also open source the related infrastructure, the bigger companies end up creating a maintained fork with a more permissive license. See ElasticSearch -> OpenSearch, Redis -> Valkey
Inflicting pain is most likely worth it in the long run. Those internal projects now have to fight for budget and visibility and some won't make it past 2-5 years.
The hyperscalers will just rewrite your stuff from scratch if its popular enough, especially now with AI coding.
1. Completely giving up is worse.
2. You're forgetting bureaucracy and general big-company overhead. Hyperscalers have tried to kill a lot of smaller external stuff, and frequently they end up killing their own chat apps instead.
you won't get VC funding with this license which is the whole point of even starting a business in the wider area
I would say what we need is more of a push for software to become GPLed or AGPLed, so that it (mostly) can't be closed up in a 'betrayal' of the FOSS community around a project.
> Elastic, Redis, Terraform, now MinIO.
Redis is the odd one out here[1]: Garantia Data, later known as Redis Labs, now known as Redis, did not create Redis, nor did it maintain Redis for most of its rise to popularity (2009–2015) nor did it employ Redis’s creator and then-maintainer 'antirez at that time. (He objected; they hired him; some years later he left; then he returned. He is apparently OK with how things ended up.) What the company did do is develop OSS Redis addons, then pull the rug on them while saying that Redis proper would “always remain BSD”[2], then prove that that was a lie too[3]. As well as do various other shady (if legal) stuff with the trademarks[4] and credits[5] too.
[1] https://www.gomomento.com/blog/rip-redis-how-garantia-data-p...
[2] https://redis.io/blog/redis-license-bsd-will-remain-bsd/
[3] https://lwn.net/Articles/966133/
[4] https://github.com/redis-rs/redis-rs/issues/1419
[5] https://github.com/valkey-io/valkey/issues/544
Things are a bit more complicated. Redis the company (Redis Labs, and before that Garantia Data) actually offered to hire me from the start, but I was at VMware, later at Pivotal, and just didn't care; I wanted to stay "super partes" out of idealism. But Pivotal and Redis Labs shared the same VC, so it made a lot more sense to move to Redis Labs and work there with the same level of independence, and so this happened.

Once I moved to Redis Labs, a lot of good things happened that made Redis mature much faster: we had a core team all working on the core, and I was no longer alone when there were serious bugs, improvements to make, and so forth. During those years many good things happened, including Streams, ACLs, memory reduction work, modules, and in general things that made Redis more solid.

To be maintained at scale, open source software needs money, so in the past we tried hard to avoid moving away from BSD. But eventually, in the new hyperscaler situation, I guess it was impossible to avoid. I was no longer with the company then; I believe the bad call was going SSPL, a license very similar to AGPL but not accepted by the community. Now we are back to AGPL, and I believe that in the current situation this is a good call.

The company never stopped to: 1. Provide the source on GitHub and continue the development. 2. Release it under a source-available license (not OSI approved but practically very similar to AGPL). 3. Find a different way to do it: indeed, Redis returned to AGPL a few months after I was back, maybe because I helped a bit, but inside the company there was from the start a big slice that didn't accept the change. So Redis is still open source software and maintained. I can't see a parallel here.
> For anyone evaluating infrastructure dependencies right now: the license matters, but the funding model matters more. Single-vendor open source projects backed by VC are essentially on a countdown timer. Either they find a sustainable model that doesn't require closing the source, or they eventually pull the rug.
I struggle to even find an example of VC-backed OSS that didn't eventually go "OK, closing-down time". The only ones I remember (like GitLab) started with an open-core model, not fully OSS.
This is the newer generations re-discovering why various flavours of shareware and trial demos have existed since the 1980s, even though sharing code under various licenses is almost as old as computing.
I think the landscape has changed with those hyperscalers outcompeting open-source projects with alternative profit avenues for the money available in the market.
From my experience, Ceph works well but requires a lot more hardware and dedicated cluster monitoring than something simpler like MinIO; in my eyes, they have somewhat different target audiences. I can throw MinIO into some customer environments as a convenient add-on, which I don't think I could do with Ceph.
Hopefully one of the open-source alternatives to Minio will step in and fill that "lighter" object storage gap.
What is especially annoying is the AIStor/MinIO business model: either take the "free" version or pay about 100k. How about accepting some small dollars and keeping the core concept? This seems to be the business type of enshittification: instead of slapping everything with ads, you either pay ridiculous dollars or move on.
Well, anyone using the product of an open source project is free to fork it and then take on the maintenance. Or organize multiple users to handle the maintenance.
I don't expect free shit forever.
Will https://github.com/chainguard-forks/minio hold the fork?
AGPL is dead as a copy-left measure. LLMs do not understand, and would not care anyway, about regurgitating code that you have published to the internet.
Even having it as a private repo on github is a mistake at this point.
Self-hosting, or just using git itself, is the only solution.
No, it's actually great: that just means all LLM code that included it now needs to be AGPL-licensed.
All the LLMs I've tried are capable of writing plugins for my AGPL work: https://github.com/mickael-kerjean/filestash
I wonder how many of the 504 contributors listed on GitHub would still have contributed their (presumably) free labor if they had known the company would eventually abandon the open source version like this while continuing to offer their paid upgraded versions.
It’s not the first time this happens, and won’t be the last.
If there is a real community around it, forking and maintaining an open edition will be a no-brainer.
Tangentially related, since we are on the subject of MinIO: MinIO has, or rather had, an option to work as an FTP server. That is kind of neat because CCTV cameras have an option to upload a picture of detected motion to an FTP server, and having that be a distributed MinIO cluster was a really neat option, since you could then generate an event on file upload and kick off a pipeline job or whatever. Currently I use vsftpd and inotify to detect file uploads instead, but that is such a major pain in the ass to operate; it would be really great to find another FTP-to-S3 gateway.
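For anyone stuck on the vsftpd route, here is a minimal sketch of the "detect completed uploads, then hand them off" half of that pipeline, using only the standard library. It polls the upload directory and treats a file as done once its size stops changing between passes (a crude stand-in for inotify's CLOSE_WRITE event); the function names and the callback are illustrative, not any real tool's API, and the callback is where you would do the actual S3 upload (e.g. with boto3) and kick off the pipeline job.

```python
import time
from pathlib import Path

def scan_once(root, seen, done):
    """One polling pass over the FTP upload directory.

    A file counts as fully uploaded when its size is unchanged since
    the previous pass. Returns the list of newly completed files.
    """
    completed = []
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file() or path in done:
            continue
        size = path.stat().st_size
        if seen.get(path) == size:
            done.add(path)          # never hand the same file off twice
            completed.append(path)
        else:
            seen[path] = size       # still growing; check again next pass
    return completed

def watch_uploads(root, on_complete, poll_seconds=2.0):
    """Poll forever; hand each completed upload to on_complete,
    e.g. a function that pushes to S3 and triggers a pipeline job."""
    seen, done = {}, set()
    while True:
        for path in scan_once(root, seen, done):
            on_complete(path)
        time.sleep(poll_seconds)
```

Polling is obviously cruder than inotify, but it is portable and has no moving parts to operate, which was the complaint above.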
I was recently migrating a large amount of data off of MinIO and wrote some tools for it in case anybody needs that https://github.com/dialohq/minio-format-rs
I heard that one of the reasons for this was that Minio was not happy people were including it in their commercial products without attribution
This is pretty predictable at this point. VC-backed open source with a single vendor always ends up here eventually. The operational tradeoff was always MinIO being dead simple versus Ceph being complex but foundation-governed. Turns out "easy to set up" doesn't matter much when you wake up to a repo going dark. The real lesson is funding model matters more than license. If there's no sustainable path that doesn't involve closing the source, you're just on a longer timeline to the same outcome.
I use three clustered Garage nodes in a multi cloud setup. No complaints.
I've moved my SaaS I'm developing to SeaweedFS, it was rather painless to do it. I should also move away from minio-go SDK to just use the generic AWS one, one day. No hard feelings from my side to MinIO team though.
This has been on the cards for at least a year, with the increasingly doomy commits noted by HN.
Unfortunately I don't know of any other open projects that can obviously scale to the same degree. I built up around 100PiB of storage under minio with a former employer. It's very robust in the face of drive & server failure, is simple to manage on bare hardware with ansible. We got 180Gbps sustained writes out of it, with some part time hardware maintenance.
Don't know if there's an opportunity here for larger users of minio to band together and fund some continued maintenance?
I definitely had a wishlist and some hardware management scripts around it that could be integrated into it.
Ceph can scale to pretty large numbers for storage, writes, and reads. I was running a 60PB+ cluster a few years back and it was still growing when I left the company.
Ceph, definitely.
Looks like they pivoted to "AI storage", whatever that means.
That means that any project without the letters "AI" in the name is dead in the eyes of investors.
Even plain terminals are now "agentic orchestrators": https://www.warp.dev
Are investors really that gullible? Whenever I see "AI" slapped onto an obviously non-AI product, it's an instant red flag to me.
> Are investors really that gullible?
Yes.
https://www.theguardian.com/technology/2017/dec/21/us-soft-d...
Are you kidding me
"Long Island Iced Tea Corp [...] In 2017, the corporation rebranded as Long Blockchain Corp [...] Its stock price spiked as much as 380% after the announcement."
https://en.wikipedia.org/wiki/Long_Blockchain_Corp.
As long as there's at least one gullible in the pack, all the other ones will behave the same because they now know there's one idiot that will happily hold the bag when it comes crashing down. They're all banking on passing the bag onto someone else before the crash.
A Ponzi can be a good investment too (for a certain definition of "good") as long as you get out before it collapses. The whole tech market right now is a big Ponzi with everyone hoping to get out before it crashes. Worse, dissent risks crashing it early so no talks of AI limitations or the lack of actual, sustainable productivity improvements are allowed, even if those concerns do absolutely happen behind closed doors.
On my side, I feel disappointed on two different counts.
- Obviously, when your selling point against competitors and alternative services was that you were Open Source, and you do a rug pull once you have enough traction, that is not great.
- But they also switched targets. The big added value of MinIO initially was that it was totally easy to run: you could have an S3 server up in a minute, on a single instance and so on. That made it the perfect solution for rapid tests, local setups, and automated testing. Then, again once they got enough traction, they didn't just add more "scaling" options to MinIO; they twisted it completely into a complex, scalable deployment solution like any other one on the market, without that much added value on that count, to be honest.
We moved to garage because minio let us down.
Fair enough.
I've used minio a lot but only as part of my local dev pipeline to emulate s3, and never paid.
So far for me garage seems to work quite well as an alternative although it does lack some of the features of minio.
Any good alternatives for local development?
I didn't find an alternative that I liked as much as MinIO, so I, unfortunately, ended up creating my own. It includes just the most basic features and cannot be compared to the larger projects, but it is simple and efficient.
https://github.com/espebra/stupid-simple-s3
The listing is perhaps in line with the first two "s". It seems it always iterates through all files, reads the "meta.json", then filters?
Yes, indeed. The list operation is expensive. The S3 spec says that the list output needs to be sorted.
1. All filenames are read. 2. All filenames are sorted. 3. Pagination applied.
It doesn't scale obviously, but works ok-ish for a smaller data set. It is difficult to do this efficiently without introducing complexity. My applications don't use listing, so I prioritised simplicity over performance for the list operation.
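The sort-then-paginate approach described above can be sketched in a few lines. This is illustrative, not stupid-simple-s3's actual code or API: every request filters by prefix, sorts the full key set (the O(n log n) cost per listing call), and slices out one page, with the last key of a truncated page serving as the continuation marker, similar to S3's StartAfter parameter.

```python
def list_objects(keys, prefix="", start_after="", max_keys=1000):
    """Naive S3-style listing: filter, sort everything, paginate.

    Returns (page, is_truncated); pass the last key of a truncated
    page back as start_after to fetch the next page.
    """
    matched = sorted(
        k for k in keys if k.startswith(prefix) and k > start_after
    )
    return matched[:max_keys], len(matched) > max_keys
```

Doing better generally means keeping the keys in an already-sorted index (a B-tree, or just the filesystem's own ordering), which is exactly the complexity being traded away here.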
Maybe mention it somewhere as a limitation, so it is not used for use-cases where listing is important and there are many objects?
Listing was IMO a problem with minio as well, but maybe it is not that important because it seems to have succeeded anyway.
Go for Garage; you can check the docker-compose and the "setup" crate of this project: https://github.com/beep-industries/content. There are a few tricks to make it work locally so that it generates an API key and bucket declaratively, but in the end it does the job.
versitygw is the simplest "just expose an S3-compatible API on top of a local folder" option.
S3 Ninja if you really just need something local to try your code with.
The OS's file system? Implementation cost has decreased significantly these days. We can just prompt "use S3 instead of the local file system" if we need an S3-like service.
RustFS is dead simple to setup.
It has unfortunately also had a fair bit of drama already for a pretty young project
seaweedfs: `weed server -s3` is enough to spin up a server locally
Is there not a community fork? Even as is, is it still recommended for use?
I started a fork during the Christmas holidays https://github.com/kypello-io/kypello , but I’ve paused it for now.
My use case for minio was simply to test out uploading things to s3 for work related apps.. I loved that web ui too.
Looks like I'm gonna give SeaweedFS a whirl instead of hunting down the Docker image and SHA of the last pre-enshittified version of MinIO.
Took them how many weeks to go from "maintenance mode" to unmaintained?
They could just archive it there and then, at least it would be honest. What a bunch of clowns.
We just integrated MinIO with our Strapi CMS and our internal-tool admin dashboard, and have about two months' worth of pictures and faxes hosted there. Shit. Ticking time bomb now.
I will have to migrate, the cost of "self hosting" what a pain!
omg move to rustfs
We moved to Garage. Minio let us down.
Lmao, that was fast.