Pre-compaction, the recent data can be in small files, and the delete markers will also be in small files. This will bring down fetch times, while DuckLake may already have many of the larger blocks in memory or disk cache.
Reading block headers for filtering is lots of small ranges, this could speed it up by 10x.
I was thinking of using it with DuckDB as well, but it seems it would be of limited benefit. Parquet objects are in the MBs, so they would be streamed directly from S3. With raw Parquet objects, it might help with S3 listing if you have a lot of them (shaving a couple of seconds off the query). If you are already on DuckLake, DuckDB will use that for getting the list of relevant objects anyway.
Maybe the OP is thinking of reading/writing to DuckDB native format files. Those require filesystem semantics for writing. Unfortunately, even NFS or SMB are not sufficiently FS-like for DuckDB.
Parquet is static append only, so DuckDB has no problems with those living on S3.
not everything should or needs to be some article geared towards the audience's convenience, or selling something to the audience. pretty much all allthingsdistributed articles are long form articles covering highly technical systems and contain a decent whack of detail/context. in my mind, they veer closer to "computer scientist does blog posts" compared to "5 ways React can boost your page visits" listicles.
edited slightly ... i really need to turn 10 minute post delay back on.
This is essentially S3FS using EFS (AWS's managed NFS service) as a cache layer for active data and small random accesses. Unfortunately, this also means that it comes with some of EFS's eye-watering pricing:
— All writes cost $0.06/GB, since everything is first written to the EFS cache. For write-heavy applications, this could be a dealbreaker.
— Reads hitting the cache get billed at $0.03/GB. Large reads (>128kB) get directly streamed from the underlying S3 bucket, which is free.
— Cache is charged at $0.30/GB/month. Even though everything is written to the cache (for consistency purposes), it seems like it's only used for persistent storage of small files (<128kB), so this shouldn't cost too much.
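With the per-GB rates quoted above, a back-of-the-envelope cost model is easy to sketch. The prices come from the pricing breakdown in this comment; the example workload volumes are invented:

```python
# Rough monthly-cost sketch for the EFS-cache side of S3 Files.
# Rates are from the breakdown above; workload numbers are made up.
WRITE_PER_GB = 0.06        # every write lands in the EFS cache first
CACHED_READ_PER_GB = 0.03  # reads served from the cache (below threshold)
CACHE_PER_GB_MONTH = 0.30  # persistent cache storage

def monthly_cost(write_gb: float, cached_read_gb: float, resident_gb: float) -> float:
    """Estimated monthly bill for the cache layer, excluding S3 itself."""
    return (write_gb * WRITE_PER_GB
            + cached_read_gb * CACHED_READ_PER_GB
            + resident_gb * CACHE_PER_GB_MONTH)

# e.g. 1 TB written, 5 TB of small cached reads, 50 GB of small files resident
print(f"${monthly_cost(1000, 5000, 50):,.2f}/month")  # -> $225.00/month
```

For a write-heavy workload, the first term dominates quickly, which is the dealbreaker scenario described above.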
> Large reads (>128kB) get directly streamed from the underlying S3 bucket, which is free.
Always uncached? S3 has pretty bad latency.
The threshold at which the cache gets used is configurable, with 128kB the default. The assumption is that any read larger than the threshold will be a long sustained read, for which latency doesn't matter too much. My question is, do reads <128kB (or whatever the threshold is) from files >128kB get saved to the cache, or is it only used for files whose overall size is under the threshold? Frequent random access to large files is a textbook use case for a caching layer like this, but its cost will be substantial in this system.
NVMe read latency is in the 10-100µs range for 128kB blocks. S3 is about 100ms. That's 3-4 OOMs. The threshold where the total read duration starts to dominate latency would be somewhere in the dozens to hundreds of megabytes, not kilobytes.
I agree, it's an oddly low threshold. The latency differential of NFS vs. S3 is a couple OOMs, so a threshold of ~10MB seems more appropriate to me. Perhaps it's set intentionally low to avoid racking up immense EFS bills? Setting it higher would effectively mean getting billed $0.03/GB for a huge fraction of reads, which is untenable for most people's applications.
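The intuition that the threshold should sit in the megabytes, not kilobytes, can be checked with quick arithmetic. The ~100 ms S3 latency figure is from the comment above; the sustained throughput figures are assumptions:

```python
# Back-of-the-envelope check: at what transfer size does streaming time
# merely equal the fixed first-byte latency? Below this size, latency
# dominates the total read duration.
def parity_size_mb(latency_s: float, throughput_mb_s: float) -> float:
    """Size (MB) where transfer time equals the fixed latency."""
    return latency_s * throughput_mb_s

S3_LATENCY_S = 0.1  # ~100 ms, per the comment above
for tput in (100, 400, 1000):  # MB/s sustained; assumed values
    print(f"{tput} MB/s -> parity at {parity_size_mb(S3_LATENCY_S, tput):.0f} MB")
```

Even at the parity point latency still accounts for half the read time, so a threshold where latency is truly negligible lands well into the tens or hundreds of megabytes.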
< NVMe read latency is in the 10-100µs range for 128kB blocks. S3 is about 100ms. That's 3-4 OOMs.
Aren't you comparing local in-process latency to network latency? That's multiple OOM right there.
No, within the same DC, network latency does not add that much; after all, EFS manages an average latency of around 600µs. It's really just S3 that's slow. I assume a large fraction of S3 data lives on HDDs, not SSDs.
I imagine (hope) that they are doing some kind of intelligent read-ahead in the frontend servers to optimize for sequential reads that would avoid this looking terrible for applications.
This was my concern too. The whole point of using S3 as a file system instead of EBS / EFS (for me at least) is to minimize cost and I don't really see why I would use this instead of s3fs.
Probably some tradeoff at high client count or if you seek into files to read partial data
Thanks for the analysis. Interestingly, when we first released our low-latency S3-compatible storage (1M IOPS, p99 ~5ms) [1], a lot of people asked the same question: why try to bring file system semantics (atomic object/folder rename) to S3? We also got feedback from people who really need FS semantics, so we then added POSIX FS support.
AWS S3FS uses the normal FUSE interface, which is quite heavy due to the inherent overhead of copying data back and forth between user space and kernel space; that was our initial concern when we tried to add POSIX support to our original object storage design. Fortunately, we found and open-sourced a solution [2]: using FUSE_OVER_IO_URING + FUSE_PASSTHROUGH, we can keep the same high-performance architecture of our original object storage. We'd like to put out a new blog post explaining more details and revealing our performance numbers if anyone is interested.
[1] https://fractalbits.com/blog/why-we-built-another-object-sto...
[2] https://crates.io/crates/fractal-fuse
> directly streamed from the underlying S3 bucket, which is free.
No reads from S3 are free. All outgoing traffic from AWS is charged no matter what.
Reads from s3 via an s3 endpoint inside a vpc to an interface inside of that vpc is not billed.
S3 Files was launched today without support for atomic rename. This is not something you can bolt on. Can you imagine running Claude Code on your S3 Files and it just wants to do a little house cleaning, renaming a directory and suddenly a full copy is needed for every file in that directory?
The hardest part of building a distributed filesystem is atomic rename. It's always rename. Scalable metadata file systems, like Colossus/Tectonic/ADLSv2/HopsFS, are either designed around how to make rename work at scale* or around how to work around it at higher levels in the stack.
* https://www.hopsworks.ai/post/scalable-metadata-the-new-bree...
Indeed, this is not an easy problem. Our S3-compatible system does support atomic rename via an extended protocol, in a graceful way; see the demo with our tool [1].
[1] https://github.com/fractalbits-labs/fractalbits-main/tree/ma...
We have advanced to building S3 stores with claude code - impressive:
https://github.com/fractalbits-labs/fractalbits-main/graphs/...
"NFS provides the semantics your applications expect" is one of the funniest things I have ever read.
Do your applications not expect any network hiccup to cause them to block indefinitely in a system call making them effectively unkillable and making the filesystem unmountable?
Don't forget the locking semantics. That was fun and caused Sage to fail.
Compared to roll-your-own with S3 or GCS it does :)
Semanticness as a measurement.
The best way to think of the architecture of this is it's EFS with a bidirectional sync to S3.
You can write into one and read out from the other and vice versa. Consistency guarantees kept within each but not between.
I was prototyping with S3 mounted as a filesystem for Docker volumes and evaluating solutions for that. The GeeseFS CLI was the fastest one; here's a script I made to mount a folder with it from S3-compatible storage:
https://gist.github.com/huksley/44341276d7c269f092e10784959e...
You might want to play with memory params for GeeseFS for better results
Synchronization bits is what I was wondering about: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-fil...
> For example, suppose you edit /mnt/s3files/report.csv through the file system. Before S3 Files synchronizes your changes back to the S3 bucket, another application uploads a new version of report.csv directly to the S3 bucket. When S3 Files detects the conflict, it moves your version of report.csv to the lost and found directory and replaces it with the version from the S3 bucket.
> The lost and found directory is located in your file system's root directory under the name .s3files-lost+found-file-system-id.
Mounting S3 buckets seemed like a great way to make stateless applications stateful for a while, which sounds appealing, especially for agent-like workloads. Handling conflicts like this means you really have to approach the mounted bucket as a separate stateful thing. Seems like a mismatch to me.
Hugging Face Buckets also recently added support for mounting Buckets as a filesystem: https://huggingface.co/changelog/hf-mount
I wish they offered some managed bridging to local NVMe storage. AWS NVMe is super fast compared to EBS, and EBS (node-exclusive access as block device) is faster than EFS (multi-node access). I imagine this can go fast if you put some kind of further-cache-to-NVMe FS on top, but a completely vertically integrated option would be much better.
Since EFS is just an NFS mount, I wonder if you could do this yourself by attaching an NVMe volume to your instance and setting up something like cachefilesd on the NFS mount, pointed to the NVMe.
Would

    mkfs.ext4 /dev/nvme0n1 && \
    mount /dev/nvme0n1 /var/cache/fscache && \
    mount -t s3files -o fsc fs-0aa860d05df9afdfe:/ /home/ec2-user/s3files

work out of the box? It does for EFS. It hardly seems worth it to offer a managed service that's effectively three shell commands, but this is AWS we're talking about.
AWS's [docs on EFS performance](https://docs.aws.amazon.com/efs/latest/ug/performance-tips.h...) say:
> Don't use the following mount options:
> - fsc – This option enables local file caching, but does not change NFS cache coherency, and does not reduce latencies.
If the S3 Files sync logic ran client-side, we could almost entirely avoid file access latency for cached files, and avoid paying for expensive new EFS storage. I already pay for a lot of NVMe disks; let me just use those!
>This option enables local file caching, but does not change NFS cache coherency, and does not reduce latencies.
That's true for any NFS setup, not just EFS. The benefit of local NFS caching is to speed up reads of large, immutable files, where latency is relatively negligible. I'm not sure why AWS specifically dissuades users from enabling caching, since it's not like bandwidth to an EFS volume is even in the ballpark of EBS/NVMe bandwidth.
The problem with using S3 as a filesystem is that it’s immutable, and that hasn’t changed with S3 Files. So if I have a large file and change 1 byte of it, or even just rename it, it needs to upload the entire file all over again. This seems most useful for read-heavy workflows of files that are small enough to fit in the cache.
That's not that different from CoW filesystems: there is no rule that files must map 1:1 to objects; you can (transparently) divide a file into smaller chunks to enable more fine-grained edits.
But this doesn't
The most obvious approach seems to be implementing device blocks as S3 objects and using any existing file system on top of that.
It depends on how you implement the FS layer on top of S3. As a quick example, I've done a couple of implementations of exactly that, where a file is chunked into multiple S3 objects; this allows CoW semantics if required, as well as parallel uploads/downloads. In the end it depends heavily on your use case.
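A minimal sketch of that chunking idea, with a plain dict standing in for the bucket. The chunk size and content-addressed keys are illustrative choices, not any particular system's design:

```python
import hashlib

CHUNK = 8  # tiny chunk size for the demo; real systems would use MBs

def put_file(bucket: dict, data: bytes) -> list:
    """Split data into chunks and PUT each as a content-addressed object.
    Returns the manifest: the ordered list of chunk keys."""
    manifest = []
    for i in range(0, len(data), CHUNK):
        piece = data[i:i + CHUNK]
        key = hashlib.sha256(piece).hexdigest()
        if key not in bucket:  # unchanged chunks need no re-upload
            bucket[key] = piece
        manifest.append(key)
    return manifest

def get_file(bucket: dict, manifest: list) -> bytes:
    return b"".join(bucket[k] for k in manifest)

bucket = {}
v1 = put_file(bucket, b"hello world, hello s3 chunking!")
objects_v1 = len(bucket)

# Flip one byte: only the chunk containing it becomes a new object.
edited = bytearray(get_file(bucket, v1))
edited[0] = ord("H")
v2 = put_file(bucket, bytes(edited))
print(len(bucket) - objects_v1)  # -> 1 (one chunk re-uploaded, not the file)
```

Editing one byte of a large file then costs one chunk-sized PUT plus a manifest update, instead of re-uploading the whole object.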
Files can be immutable if you have mutable metadata - but S3 does not have mutable metadata, so you can't rename a directory without a full copy of all its contents.
Immutable files can be worked around by chunking them, allowing files to be opened and appended to; we do this in HopsFS. However, random writes are typically not supported in scale-out metadata file systems, though thankfully they are rarely used by POSIX clients.
If you thought locking semantics over NFS were wonky, just wait till we throw a remote S3 backend into the mix!
This is very close to its first official release: https://fiberfs.io/
Built-in cache, CDN compatible, JSON metadata, concurrency safe, and it targets all S3-compatible storage systems.
How would you compare this to Amazon's own FUSE implementation? I think it's on its 3rd major reincarnation now
I cannot 100% confirm this, but I believe AWS insisted a lot in NOT using S3 as a file system. Why the change now?
They found a way to make money on it by putting a cache in front of it. Less load for them, better performance for you. Maybe you save money, maybe you dont.
It appears that they put an actual file system in front of S3 (basically AWS EFS) and then perform transparent syncing. The blog post discusses a lot of caveats, for example around consistency and object naming (inconsistencies are emitted as events to customers).
Having been a fan of S3 for such a long time, I'm really a fan of the design. It's a good compromise and kudos to whoever managed to push through the design.
Because people will use it as a filesystem regardless of the original intent, since it is a very convenient abstraction. So they might as well do it in an optimal and supported way, I guess?
Because without significant engineering effort (see the blog post), the mismatch between object store semantics and file semantics means you will probably Have A Bad Time. In much earlier eras of S3, there were also implementation specifics, like throughput limits based on key prefixes (that one vanished circa 2016), that made it even worse for hierarchical directory shapes.
People (and by people I mean architects and lead devs at big-account orgs, $$$) have been using S3 as a filesystem as one of the backbones of their usually wacky, mega-complex projects.
So there has always been pressure on AWS to make it work like that. I suspect the volume of support tickets AWS receives along the lines of "my S3-backed project is slow / fails sometimes / runs into AWS limits (like the max number of buckets per account)", plus the "why don't you..." questions in design phases where AWS people are often in the room, served as enough sustained pressure to overcome the technical limitations of S3.
I'm not a fan of this type of "let's put a fresh coat of paint on it and pretend it's something it fundamentally is not" abstraction. But I suspect this is a case of social pressure turbocharged by $$$.
I think it opens them up to a huge customer base of less technically apt people who would otherwise just download some random "S3asYourFS.exe" program, but it also means needing to support that functionality and field support calls from those same people. I don't know if that business decision makes sense (since AWS already lacks the CS infrastructure to deal even with professional clients), but the idea that you could get everyone and their brother paying monthly fees to AWS is likely too tempting a fruit to pass up.
This is how tech people think, but customers still want this, so it will be built eventually.
How does this compare with ZFS's object storage backend? https://news.ycombinator.com/item?id=46620673
Notably, this is going to manage your data in its native format (i.e., you can actually read and write the files out of the S3 bucket as actual objects, mapping 1:1 to each file). The ZFS backend is (almost certainly) a block-based format that is persisted to S3, meaning that you cannot use it for existing data in S3, and you cannot access data written through ZFS via S3.
> changes are aggregated and committed back to S3 roughly every 60 seconds as a single PUT
Single PUT per file I assume?
Based on docs, correct.
Dumb Q: what would happen if you used this to store a SQLite database? Would it just... work?
My guess is this would only enable a read-replica and not backups as Litestream currently does?
SQLite’s locking is not NFS safe so this would not work.
thanks
SQLite works great with ZeroFS: https://github.com/Barre/ZeroFS
This could be useful. We use EFS; I like the benefits, but I think it's overkill for what we need. I've been thinking of switching to S3, but I'm not looking forward to completely changing how we upload and download.
Zero mention of s3fs which already did this for decades.
A more solid (especially when it comes to caching) solution would be appreciated.
I thought that would be their https://github.com/awslabs/mountpoint-s3 . But no mention about this one either.
S3 Files does have the advantage of a "shared" cache via EFS, but that probably also makes the cache slower.
I'd assume you can still have local cache in addition to that.
I was thinking: "No way this has existed for decades". But the earliest I can find it existing is 2008. Strictly speaking not decades but much closer to it than I expected.
This is pretty different from s3fs. s3fs is a FUSE file system backed by S3.
This means that all of the non-atomic operations you might want to do on S3 (including edits to the middle of files, renames, etc.) run on the machine running s3fs. As a result, if your machine crashes, it's not clear what will show up in your S3 bucket, or whether things will be corrupted.
s3fs is also slow, because the next stop after your machine is S3 itself, which isn't suitable for many file-based applications.
What AWS has built here is different: using EFS as the middle layer means there's a safe, durable place for your file system operations to land while they're being assembled into object operations. It also means that performance should be much better than s3fs (it's talking to SSDs where data is ~1ms away instead of HDDs where data is ~30ms away).
You can also use something like JuiceFS to make using S3 as a shared filesystem more sane, but you're moving all the metadata to a shared database.
Or ZeroFS which doesn’t require a 3rd party database, just a s3 bucket!
https://github.com/Barre/ZeroFS
It also means that you need to pay for EFS, which is outrageously expensive, to use S3, whose whole purpose is to be cheap.
Of course, you don't need to, this is just a way to opt-in to getting file semantics on top of S3.
The purpose of S3 isn't to be cheap, it's to be simple.
Yeah, that blog post was written as if sliced bread had been invented again.
Reading through it, I was only thinking "is this distinguished engineer TOC 2M aware that people have been doing this since forever?".
There's also https://github.com/kahing/goofys, a Go equivalent. A bit of a dead project these days.
As usual, everything except pricing is very well explained.
Eagerly awaiting the first blog post where developers didn't read the "eventually consistent" part, lost the data, and made some "genius" workaround with help from the LLM that got them into that spot in the first place.
> Effective immediately, all S3 GET, PUT, and LIST operations, as well as operations that change object tags, ACLs, or metadata, are now strongly consistent.
https://aws.amazon.com/blogs/aws/amazon-s3-update-strong-rea...
Since this is the thread that got attention, I've added the announcement link to the toptext and made the title work for both.
> we locked a bunch of our most senior engineers in a room and said we weren’t going to let them out till they had a plan that they all liked.
That's one way to do it.
> When you create or modify files, changes are aggregated and committed back to S3 roughly every 60 seconds as a single PUT. Sync runs in both directions, so when other applications modify objects in the bucket, S3 Files automatically spots those modifications and reflects them in the filesystem view automatically.
That sounds about right given the above. I have trouble seeing this as anything other than a giant "hack." I already don't enjoy projecting costs for new types of S3 access patterns, and I feel like this has the potential to double the complication I already experience here.
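The aggregate-and-flush behavior quoted above can be sketched as a toy model (this is not AWS's implementation; in the real system a timer would trigger the flush roughly every 60 seconds):

```python
import threading

class BatchedUploader:
    """Toy sketch of S3 Files-style write aggregation: buffer local
    changes and commit them back as one aggregated PUT per window."""

    def __init__(self, put=print):
        self.put = put      # stand-in for an S3 PUT call
        self.pending = {}   # path -> latest contents within the window
        self.lock = threading.Lock()

    def write(self, path, data):
        # Writes only touch the local buffer (the "EFS cache" role here);
        # nothing reaches S3 until the next flush.
        with self.lock:
            self.pending[path] = data

    def flush(self):
        # Called by a timer (~every 60 s in the described system).
        with self.lock:
            batch, self.pending = self.pending, {}
        if batch:
            self.put(batch)  # one aggregated PUT for the whole window

up = BatchedUploader(put=lambda b: print(f"PUT {len(b)} files"))
up.write("a.txt", b"hello")
up.write("a.txt", b"hello again")  # overwritten within the window: one PUT
up.write("b.txt", b"world")
up.flush()
```

This is also where the eventual consistency comes from: until the window closes, other readers of the bucket see the previous object versions.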
Maybe I'm too frugal, but I've been in the cloud for a decade now, and I've worked very hard to prevent any "surprise" bills from showing up. This seems like a great feature, if you don't care what your AWS bill is each month.
There is a staggering number of users doing this with extra steps using FSx for Lustre; their lives got greatly simplified today (unless they use GPUDirect Storage, I guess).
Good point. There's a wide gulf between being able to design your workflow for S3 and trying to map an existing workflow to it.
Werner Vogels is awesome. I first discovered his writing when I learned about DynamoDB.
This is why today's sales pitches are often disguised as tech blogs.
See also: https://github.com/Barre/ZeroFS
the "under the hood uses EFS" part is the most interesting bit here
Terrible day for people who sloppily use filesystem vocabulary when referring to S3 objects and prefixes.
any recommendations for a Lambda-based SFTP server setup?
TLDR: EFS as an eventually consistent cache in front of S3.
tldr: this caches your S3 data in EFS.
we run datalakes using DuckLake and this sounds really useful. GCP should follow suit quickly.
I am curious about this use case
How do you see it helping with DuckLake?
Latency, predicate pushdown.
Pre-compaction, recent data can be in small files, and the delete markers will also be in small files. This will bring down fetch times, while DuckLake may have many of the larger blocks in memory or disk cache already.
Reading block headers for filtering is lots of small ranges; this could speed it up by 10x.
I was thinking of using it with DuckDB as well, but it seems it would be of limited benefit. Parquet objects are in the MBs, so they would be streamed directly from S3. With raw Parquet objects, it might help with S3 listing if you have a lot of them (shaving a couple of seconds off the query). If you are already on DuckLake, DuckDB will use that for getting the list of relevant objects anyway.
Maybe the OP is thinking of reading/writing to DuckDB native format files. Those require filesystem semantics for writing. Unfortunately, even NFS or SMB are not sufficiently FS-like for DuckDB.
Parquet is static append only, so DuckDB has no problems with those living on S3.
What does DuckDB need that NFS/SMB do not provide?
TLDR: Eventually consistent file system view on top of s3 with read/write cache.
If there was ever a post that needed a TLDR or an AI summary, it is this one.
Sell the benefits.
I have around 9 TB in 21m files on S3. How does this change benefit me?
Check out the "what's new": https://aws.amazon.com/about-aws/whats-new/2026/04/amazon-s3...
not everything should or needs to be some article geared towards the audience's convenience, or selling something to the audience. pretty much all allthingsdistributed articles are long form articles covering highly technical systems and contain a decent whack of detail/context. in my mind, they veer closer to "computer scientist does blog posts" compared to "5 ways React can boost your page visits" listicles.
edited slightly ... i really need to turn 10 minute post delay back on.