and run it against a fuse filesystem, instead of the single 32k write,
the filesystem program sees eight 4k writes. Is there any reason for
this?
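For illustration, here's a sketch of the arithmetic only (not actual fuse
code, and `split_into_pages` is just a made-up name): how a single write
gets carved into page-sized chunks at page boundaries, assuming a 4 KiB
page size as on x86:

```python
PAGE_SIZE = 4096  # typical x86 page size


def split_into_pages(offset, length):
    """Split a single write() into the page-sized chunks the
    filesystem process would see once the write has gone through
    the page cache."""
    chunks = []
    end = offset + length
    while offset < end:
        # each chunk stops at the next page boundary, or at the
        # end of the write, whichever comes first
        next_boundary = (offset // PAGE_SIZE + 1) * PAGE_SIZE
        chunk_end = min(next_boundary, end)
        chunks.append((offset, chunk_end - offset))
        offset = chunk_end
    return chunks


# a single 32k write at offset 0 becomes eight 4k writes
print(split_into_pages(0, 32768))
```

Note that a write which isn't page-aligned still splits at page
boundaries, so the first and last chunks can be shorter than a page.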
A number of times in the past the idea of a large write has come up,
where fuse would buffer writes until some large size was reached and
then do one large write all at once. I understand that that is not
possible, but this is something else: no buffering should be
needed here, since the original write was already large.
> > But aside from that, isn't it the case that the linux VFS layer
> > internally deals with page-sized chunks of data, which on x86 means 4kB?
> > Perhaps the gurus can confirm/deny!
> Possible, but this doesn't happen when reading via fuse. [...]
It actually does happen when reading; the single-page chunks are just
aggregated again by the readahead logic into a larger read. But that
will not necessarily be the exact same chunk that the user requested.
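To make that concrete, here's a toy model (the function names are
invented, and real readahead is far more elaborate): page requests get
merged into page-aligned extents, so the read the filesystem sees covers
whole pages and need not match the user's original offset and length:

```python
PAGE_SIZE = 4096


def pages_for(offset, length):
    """Page indices a read(offset, length) touches once it goes
    through the page cache."""
    first = offset // PAGE_SIZE
    last = (offset + length - 1) // PAGE_SIZE
    return list(range(first, last + 1))


def aggregate(pages):
    """Merge runs of consecutive page indices into (offset, length)
    extents -- roughly what aggregation hands to the filesystem."""
    extents = []
    start = prev = pages[0]
    for p in pages[1:]:
        if p != prev + 1:
            extents.append((start * PAGE_SIZE, (prev - start + 1) * PAGE_SIZE))
            start = p
        prev = p
    extents.append((start * PAGE_SIZE, (prev - start + 1) * PAGE_SIZE))
    return extents


# the user asks for 100 bytes at offset 4000, but the filesystem sees
# one page-aligned read covering two whole pages
print(aggregate(pages_for(4000, 100)))  # [(0, 8192)]
```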
With writing, the problem with aggregation is that there's no way to
know when a page being written is the last one belonging to a single
write() syscall. If that information were available, it would be
possible to implement your suggestion.