Re: fuse-devel Digest, Vol 129, Issue 3

Re: fuse-devel Digest, Vol 129, Issue 3

John Har
Forgive me if I misunderstood, but I think the scenario you described would lead to infinitely recursive calls to open().  But it doesn't work that way because you're accessing the file "file" through different paths.

If the initiating application does a libc open() to "file" via the FUSE mount point (e.g. /mnt/fuse/foo/file), then the libc open() will go through the FUSE kernel and then to your FUSE userspace filesystem code.  If the FUSE kernel has any (page) cached data, it will use the cached data before requesting new data from your code. 

When your FUSE filesystem invokes libc's open(), you're now accessing it through a different path which isn't via the FUSE mount point.  Your userspace open() call will be to "/home/foo/file", and it will NOT go through the FUSE kernel, and you won't be encountering the page cache associated with the FUSE kernel.

However, if your FUSE userspace code for some reason did try to open() via the FUSE mount point /mnt/fuse/foo/file, then you'd have a never-ending recursion.  So don't do that. :)

As far as what is cached, it seems that inode (or namespace) data is always cached, but actual file data is only cached if "keep_cache" or "auto_cache" is specified.

BTW, in my experiments with FUSE, I didn't see any caching happening unless you use "keep_cache" or "auto_cache".  Without either one, every read() call from the application results in read() calls to your filesystem, so read throughput is constrained by going through your filesystem.  With auto_cache specified, the FUSE kernel will check whether the file has changed, via open() and fstat() calls to your filesystem code, and if it hasn't changed, it will return the page-cached data to the initiating application.  This results in significant performance gains, just like reading natively from a cached local file.  Again, this is based on my experiments with FUSE; I have not looked at the kernel code to confirm the behavior.

John




Thank you

It answers a little bit, but not quite.

So it answers this bit: "What cache does keep_cache deal with?"
Answer: page cache

Then, I'm still confused because there are two layers of open.

There's FUSE's open operation. There, you decide whether you want the
"keep_cache" option by setting it in "fuse_file_info *fi". And you can
even have the open operation do nothing at all.
Then, there's libc's open(), which can even be called outside FUSE's
open operation (it could be called in the destroy operation, for
example).

Logically, FUSE's open operation is not really the one opening the file,
so it must be the files opened via libc's open() that get cached.
So, how does FUSE handle this?
Is it like, "any libc open() called by the FUSE program will follow the
specified cache option"?

If so, imagine a file outside FUSE (/home/foo/file), and your FUSE open
operation calls libc's open() on that file while handling "cat
/mnt/fuse/foo/file". Then an outsider who is not using FUSE simply opens
the file (cat /home/foo/file). In this scenario, both the FUSE program
and the outsider are using libc's open() on the same file
(/home/foo/file), so it seems to me that they should both be looking at
the same page cache.  But if the answer to the question above is "yes",
it would mean that FUSE can alter the page cache for the file
(/home/foo/file), which would affect the outsider accessing it too (cat
/home/foo/file). This does not seem correct.

If this is not the case, then how does fuse isolate the cache option only
to itself?

2017-01-04 4:59 GMT+09:00 Maxim Patlasov <[hidden email]>:

> Hi,
>
>
> It should be enough to look at the following snippet to fully understand
> keep_cache option:
>
> void fuse_finish_open(struct inode *inode, struct file *file)
> {
>     ...
>     if (!(ff->open_flags & FOPEN_KEEP_CACHE))
>         invalidate_inode_pages2(inode->i_mapping);
>
> where invalidate_inode_pages2() purges page-cache associated with given
> inode. The idea is that if you open the same file more than once (and maybe
> referring by different names), the page-cache is not duplicated. There is a
> general cache coherence problem to cope with: imagine that you read/write a
> file from the node "A", so some content of the file is cached there, and
> now you modify the file from the node "B". Which mechanism should FUSE
> programmer utilize to ensure that users of node "A" won't see stale
> content? There is no good universal solution in FUSE methodology (AFAIK),
> but there are some tricks. KEEP_CACHE is one of them: it barriers stale
> data at least at the moment of handling open(2). Does this answer your
> question?
>
>
> Thanks,
>
> Maxim
>
> On 01/03/2017 02:01 AM, 박마루한 wrote:
>
> Yes it shows that caching improves read when using "cat". But I still
> don't know what cached information is allowing the performance boost.
>
> I'm thinking the answer to this question of mine can help with that: "what
> if a process tried to read "/mnt/fuse/file" where "/mnt/fuse" is the mount
> point of the fuse program, and I made the fuse program do nothing and
> return 0 for its open operation method? What gets cached then?"
>
> 2017-01-03 18:48 GMT+09:00 Stef Bon <[hidden email]>:
>
>> Well I can't help you then.
>> In the link is described the problem and an example. This example
>> reads a file many times, to show the difference when using the
>> keep_cache
>> option. If you still don't know what is cached here......


--
fuse-devel mailing list
To unsubscribe or subscribe, visit https://lists.sourceforge.net/lists/listinfo/fuse-devel

Re: fuse-devel Digest, Vol 129, Issue 3

Maruhan Park
Thank you so much for this. This is exactly the type of answer I wanted.

There are a couple of things I don't understand from your answer, however.

1. "However, if your FUSE userspace code for some reason did try to open() via the FUSE mount point /mnt/fuse/foo/file, then you'd have a never-ending recursion."

I don't quite understand why that would be recursive. I'm probably misunderstanding how FUSE's open operation works.

From what I understand, when the FUSE kernel receives an open request from VFS, it simply runs the user-defined open method from your FUSE userspace code.  What's inside that user-defined open method doesn't matter.

2. "If the initiating application does a libc open() to "file" via the FUSE mount point (e.g. /mnt/fuse/foo/file), then the libc open() will go through the FUSE kernel and then to your FUSE userspace filesystem code.  If the FUSE kernel has any (page) cached data, it will use the cached data before requesting new data from your code."

From what I read, the FUSE kernel doesn't cache anything; it's simply the VFS doing the same caching it does for all other filesystems. So, if you call open() on a path under the FUSE mount point (/mnt/fuse/foo/file), VFS will ask "hmmm, do I have the file data for /mnt/fuse/foo/file?", and as I see it that's simply impossible, because "/mnt/fuse/foo/file" is inside the FUSE mount point, which you can code to do absolutely anything. It doesn't even need to open any file at all when libc's open() is called on that path, or it can do something wacky like print "hello world".  I just can't grasp how VFS can cache file data for something that is simply a gateway to calling custom code.

2017-01-04 12:05 GMT+09:00 John Har <[hidden email]>:

Re: fuse-devel Digest, Vol 129, Issue 3

Maruhan Park
To clarify: when I said "if you call open() to the FUSE mount point (/mnt/fuse/foo/file)", I meant that the FUSE mount point is /mnt/fuse, and we are calling libc's open() on /mnt/fuse/foo/file.

2017-01-04 14:02 GMT+09:00 박마루한 <[hidden email]>:

Re: fuse-devel Digest, Vol 129, Issue 3

John Har
Take a look at this picture of a FUSE data flow: http://clamfs.sourceforge.net/images/large/clamfs_open.png 
VFS decides which filesystem module (FUSE, ext2, NFS, ...) to use based on the mount path. You can create recursion if your userspace code tries to open() (or perform any other file operation on) a path under the same mount point that you are servicing (i.e. /mnt/fuse).

Regarding where the page caching is done, I believe you're right that VFS handles the caching. All I know is that you have to enable the FUSE mount options "keep_cache" or "auto_cache" to see any page-caching effects for actual file data.  Maybe FUSE is passing hints back to VFS as to whether the cached pages are valid or not. Of course, on the first read of a file, FUSE (or VFS) knows that no data blocks have been cached, so it issues read() calls to your userspace code to fill the page cache. On subsequent reads of the same file, with "auto_cache" enabled, the FUSE kernel will call open() and fstat() on your userspace code to see if the file has changed. It's up to your code to respond properly (e.g. report whether the modification time or file size has changed).  I suspect that the FUSE kernel uses this information to decide whether to tell VFS that the relevant pages are valid or to call your userspace read() to fetch the blocks.

Again, I'm not a kernel expert and have not looked at any of the kernel code.  I am basing this on my own experiments with FUSE.

Hope this helps,
John

On Tue, Jan 3, 2017 at 9:04 PM 박마루한 <[hidden email]> wrote:

Re: fuse-devel Digest, Vol 129, Issue 3

Maruhan Park
"Take a look at this picture of a FUSE data flow: http://clamfs.sourceforge.net/images/large/clamfs_open.png 
VFS will decide which filesystem module (FUSE, ext2, NFS, ...) to use based on the mount path. You can create a recursion if your userspace code tries to open() (or any other file operation) on the same mount path that you are servicing  (i.e. /mnt/fuse)."

Oh ok. I just misunderstood what you said.
I thought you meant that if FUSE's open calls libc's open to an outside file (/foo/file), then there will be infinite recursion.

Darn. So we still don't know what is cached for /mnt/fuse/foo/file, when fuse opens nothing, does wacky things, opens many files, etc.

2017-01-04 14:54 GMT+09:00 John Har <[hidden email]>:
Take a look at this picture of a FUSE data flow: http://clamfs.sourceforge.net/images/large/clamfs_open.png 
VFS will decide which filesystem module (FUSE, ext2, NFS, ...) to use based on the mount path. You can create a recursion if your userspace code tries to open() (or any other file operation) on the same mount path that you are servicing  (i.e. /mnt/fuse).

Regarding where the page caching is done, I believe you're right that VFS handles the caching:
 All I know is that you have to enable FUSE mount options "keep_cache" or "auto_cache" to see any page caching effects for actual file data.  Maybe FUSE is passing hints back to VFS as to whether the cached pages are valid or not. Of course, on first read of a file, FUSE (or VFS) knows that no data blocks have been cached, so it results in read() calls to your userspace code to fill the page cache. On subsequent reads of the same file, with "auto_cache" enabled, the FUSE kernel will call open() and fstat() to your userspace code to see if the file has changed. It's up to your code to decide how to respond properly (i.e. has the modification time or file size changed).  I suspect that the FUSE kernel uses this information to decide whether to tell VFS that the relevant pages are valid or to call your userspace read() to fetch the blocks. 

Again, I'm not a kernel expert and have not looked at any of the kernel code.  I am basing this on my own experiments with FUSE.

Hope this helps,
John
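[Editor's note] The auto_cache revalidation John describes (compare attributes at open, keep the cached pages only if the file looks unchanged) can be sketched as a pure function. The struct and field names below are illustrative stand-ins, not libfuse API:

```c
#include <stdbool.h>

/* Attributes the kernel would obtain from your getattr/fstat handler.
 * Illustrative stand-in, not a real libfuse type. */
struct file_attr {
    long long mtime; /* last modification time */
    long long size;  /* current file size */
};

/* auto_cache-style check: keep the cached pages only if the file
 * appears unchanged since the cache was last filled; otherwise the
 * kernel should invalidate them and re-read through the filesystem. */
static bool keep_cached_pages(struct file_attr cached, struct file_attr now)
{
    return cached.mtime == now.mtime && cached.size == now.size;
}
```

With auto_cache the kernel performs an equivalent comparison at every open; with keep_cache it skips the check and trusts the cache unconditionally.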

On Tue, Jan 3, 2017 at 9:04 PM 박마루한 <[hidden email]> wrote:
To clarify: when I said "if you call open() to the FUSE mount point (/mnt/fuse/foo/file)", I meant that the FUSE mount point is /mnt/fuse, and we are calling libc's open() on /mnt/fuse/foo/file.

2017-01-04 14:02 GMT+09:00 박마루한 <[hidden email]>:
Thank you so much for this. This is exactly the type of answer I wanted.

There are a couple of things I don't understand from your answer, however.

1. "However, if your FUSE userspace code for some reason did try to open() via the FUSE mount point /mnt/fuse/foo/file, then you'd have a never-ending recursion."

I don't quite understand why that would be recursive. I'm probably misunderstanding how FUSE's open operation works.

From what I understand, when the FUSE kernel receives an open command from VFS, it simply runs the user-defined open method from the FUSE userspace library code. What's inside that user-defined open method doesn't matter.

2. "If the initiating application does a libc open() to "file" via the FUSE mount point (e.g. /mnt/fuse/foo/file), then the libc open() will go through the FUSE kernel and then to your FUSE userspace filesystem code.  If the FUSE kernel has any (page) cached data, it will use the cached data before requesting new data from your code."

From what I read, the FUSE kernel doesn't cache anything. It's simply the VFS doing the same caching that it does for all other filesystems. So, if you call open() on the FUSE mount point (/mnt/fuse/foo/file), VFS will be like "hmmm, do I have the file data for /mnt/fuse/foo/file?", and that seems simply impossible to me because "/mnt/fuse/foo/file" is inside the FUSE mount point, which you can code to do absolutely anything. It doesn't even need to open any file at all when you call libc's open() on that path, or it can even do something wacky like print "hello world". I just can't grasp how VFS can cache file data for something that's simply a gateway to calling custom code.

2017-01-04 12:05 GMT+09:00 John Har <[hidden email]>:
Forgive me if I misunderstood, but I think the scenario you described would lead to infinitely recursive calls to open().  But it doesn't work that way because you're accessing the file "file" through different paths.

If the initiating application does a libc open() to "file" via the FUSE mount point (e.g. /mnt/fuse/foo/file), then the libc open() will go through the FUSE kernel and then to your FUSE userspace filesystem code.  If the FUSE kernel has any (page) cached data, it will use the cached data before requesting new data from your code. 

When your FUSE filesystem invokes libc's open(), you're now accessing it through a different path which isn't via the FUSE mount point.  Your userspace open() call will be to "/home/foo/file", and it will NOT go through the FUSE kernel, and you won't be encountering the page cache associated with the FUSE kernel.

However, if your FUSE userspace code for some reason did try to open() via the FUSE mount point /mnt/fuse/foo/file, then you'd have a never-ending recursion.  So don't do that. :)

As far as what is cached, it seems that inode (or namespace) data is always cached, but actual file data is only cached if "keep_cache" or "auto_cache" is specified.

BTW, in my experiments with FUSE, I didn't see any caching happening unless you use "keep_cache" or "auto_cache".  Without either one, every read() call from the application results in read() calls to your filesystem.  Read throughput will be constrained going through your filesystem. With auto_cache specified, the FUSE kernel will check if the file hasn't changed with open() and fstat() calls to your filesystem code, and if it hasn't changed, it will return the page cached data to the initiating application.  This results in significant performance gains, just like reading natively from a cached local file. Again, this is based on my experiments with FUSE and have not looked at the kernel code to confirm the behavior.

John
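[Editor's note] The "different path" point above is the heart of why a passthrough filesystem doesn't recurse. A minimal sketch, assuming a hypothetical backing root of /home: FUSE hands your handlers a path relative to the mount point, and you prepend the backing root before calling libc's open(), so the resulting path never re-enters /mnt/fuse:

```c
#include <stdio.h>

/* Hypothetical backing directory; a real passthrough filesystem would
 * take this from its mount-time configuration. */
#define BACKING_ROOT "/home"

/* FUSE passes paths relative to the mount point, e.g. "/foo/file" for
 * /mnt/fuse/foo/file. Prepending the backing root yields a path outside
 * the mount, so the subsequent libc open() goes to the underlying
 * filesystem and never back through the FUSE kernel. */
static void backing_path(char *dst, size_t dstlen, const char *fuse_relpath)
{
    snprintf(dst, dstlen, "%s%s", BACKING_ROOT, fuse_relpath);
}
```

backing_path(buf, sizeof buf, "/foo/file") yields "/home/foo/file"; calling open() on "/mnt/fuse/foo/file" from inside the handlers instead would re-enter the FUSE kernel and recurse.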




Thank you

It answers a little bit, but not quite.

So it answers this bit: "What cache does keep_cache deal with?"
Answer: page cache

Then, I'm still confused because there are two layers of open.

There's FUSE's open operation. There, you decide whether you want the
"keep_cache" option by setting it in "fuse_file_info *fi". And you can even
make the open operation do nothing at all.
Then, there's libc's open method, which can even be called outside
FUSE's open operation (for example, it could be called in the destroy
operation).

Logically, FUSE's open operation is not really the one opening the file, so
it must be the files opened via libc's open method that get cached.
So, how does FUSE handle this?
Is it like, "any libc open call made by the FUSE program
will follow the specified cache option"?

If so, imagine a file outside fuse (/home/foo/file), and your fuse's open
operation calls libc's open method to that file (cat /mnt/fuse/foo/file).
And then, an outsider who is not using fuse, simply opens the file (cat
/home/foo/file). In this scenario, both fuse and outside-fuse are using
libc's open to the file (/home/foo/file), which seems to me that they
should both be looking at the same page cache.  But, if the above question
is correct, it should mean that fuse can alter the page cache for the file
(/home/foo/file), which affects the outsider accessing the file too (cat
/home/foo/file). This does not seem correct.

If this is not the case, then how does fuse isolate the cache option only
to itself?

2017-01-04 4:59 GMT+09:00 Maxim Patlasov <[hidden email]>:

> Hi,
>
>
> It should be enough to look at the following snippet to fully understand
> keep_cache option:
>
> void fuse_finish_open(struct inode *inode, struct file *file)
> {
>     struct fuse_file *ff = file->private_data;
>     ...
>     if (!(ff->open_flags & FOPEN_KEEP_CACHE))
>         invalidate_inode_pages2(inode->i_mapping);
>     ...
> }
>
> where invalidate_inode_pages2() purges the page cache associated with the given
> inode. The idea is that if you open the same file more than once (maybe
> referring to it by different names), the page cache is not duplicated. There is a
> general cache-coherence problem to cope with: imagine that you read/write a
> file from node "A", so some content of the file is cached there, and
> now you modify the file from node "B". Which mechanism should the FUSE
> programmer use to ensure that users of node "A" won't see stale
> content? There is no good universal solution in FUSE methodology (AFAIK),
> but there are some tricks. KEEP_CACHE is one of them: it at least acts as a
> barrier against stale data at the moment of handling open(2). Does this
> answer your question?
>
>
> Thanks,
>
> Maxim
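[Editor's note] The kernel logic in Maxim's snippet reduces to one predicate. The flag value below matches FOPEN_KEEP_CACHE in the kernel's FUSE protocol header (linux/fuse.h); the function is a self-contained restatement for illustration, not kernel code:

```c
#include <stdbool.h>

/* Open-reply flag as defined in the FUSE protocol (linux/fuse.h). */
#define FOPEN_KEEP_CACHE (1 << 1)

/* fuse_finish_open() purges the inode's page cache on every open
 * unless the filesystem's open reply carried FOPEN_KEEP_CACHE. */
static bool invalidate_cache_on_open(unsigned int open_flags)
{
    return !(open_flags & FOPEN_KEEP_CACHE);
}
```

So without keep_cache, every open(2) starts from an empty page cache for that inode, which is why repeated `cat` runs show no speedup.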
>
> On 01/03/2017 02:01 AM, 박마루한 wrote:
>
> Yes it shows that caching improves read when using "cat". But I still
> don't know what cached information is allowing the performance boost.
>
> I'm thinking the answer to this question of mine can help with that: "what
> if a process tried to read "/mnt/fuse/file" where "/mnt/fuse" is the mount
> point of the fuse program, and I made the fuse program do nothing and
> return 0 for its open operation method? What gets cached then?"
>
> 2017-01-03 18:48 GMT+09:00 Stef Bon <[hidden email]>:
>
>> Well, I can't help you then.
>> The link describes the problem and gives an example. The example
>> reads a file many times to show the difference made by the keep_cache
>> option. If you still don't know what is cached here......
>>
>
>
>
> ------------------------------------------------------------------------------
> Check out the vibrant tech community on one of the world's most
> engaging tech sites, SlashDot.org! http://sdm.link/slashdot


--
fuse-devel mailing list
To unsubscribe or subscribe, visit https://lists.sourceforge.net/lists/listinfo/fuse-devel





Re: fuse-devel Digest, Vol 129, Issue 3

Nikolaus Rath
In reply to this post by John Har
On Jan 04 2017, John Har <[hidden email]> wrote:
>  All I know is that you have to enable FUSE mount options "keep_cache" or
> "auto_cache" to see any page caching effects for actual file data.

At least when using the low-level API, that is not true. Even without
keep_cache, data will be cached as long as the file is open (and is not
invalidated for other reasons).

Best,
-Nikolaus

--
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

             »Time flies like an arrow, fruit flies like a Banana.«

Re: fuse-devel Digest, Vol 129, Issue 3

Maxim Patlasov-3
In reply to this post by John Har

On 01/03/2017 09:54 PM, John Har wrote:

Take a look at this picture of a FUSE data flow: http://clamfs.sourceforge.net/images/large/clamfs_open.png 
VFS will decide which filesystem module (FUSE, ext2, NFS, ...) to use based on the mount path. You can create a recursion if your userspace code tries to open() (or any other file operation) on the same mount path that you are servicing  (i.e. /mnt/fuse).

Regarding where the page caching is done, I believe you're right that VFS handles the caching:
 All I know is that you have to enable FUSE mount options "keep_cache" or "auto_cache" to see any page caching effects for actual file data.  Maybe FUSE is passing hints back to VFS as to whether the cached pages are valid or not. Of course, on first read of a file, FUSE (or VFS) knows that no data blocks have been cached, so it results in read() calls to your userspace code to fill the page cache. On subsequent reads of the same file, with "auto_cache" enabled, the FUSE kernel will call open() and fstat() to your userspace code to see if the file has changed. It's up to your code to decide how to respond properly (i.e. has the modification time or file size changed).  I suspect that the FUSE kernel uses this information to decide whether to tell VFS that the relevant pages are valid or to call your userspace read() to fetch the blocks.

Just for clarity: the only "hints" the FUSE kernel uses (while handling buffered read(2)) are file attributes, and the only attribute affecting the page cache is file size. So, in the simple case where you don't attempt to read beyond EOF, it is not up to your fuse filesystem whether the page cache is used: if the corresponding page is present in the cache, VFS will use it; otherwise it will return control to fuse to read it. The only exception is FOPEN_DIRECT_IO, with which your fuse filesystem explicitly requests (in its reply to open(2)) that the page cache not be used at all.
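[Editor's note] Maxim's read-path description (ignoring reads past EOF) can be restated as a small decision function. FOPEN_DIRECT_IO's value matches linux/fuse.h; the rest is an illustrative model, not kernel code:

```c
#include <stdbool.h>

/* Open-reply flag from the FUSE protocol (linux/fuse.h). */
#define FOPEN_DIRECT_IO (1 << 0)

enum read_source { READ_FROM_FS, READ_FROM_PAGE_CACHE };

/* For a buffered read(2) that stays within EOF: with FOPEN_DIRECT_IO
 * the page cache is bypassed entirely; otherwise a present page is
 * served from cache and only a missing page reaches your userspace
 * filesystem. The filesystem gets no per-read say in the matter. */
static enum read_source buffered_read_source(unsigned int open_flags,
                                             bool page_is_cached)
{
    if (open_flags & FOPEN_DIRECT_IO)
        return READ_FROM_FS;
    return page_is_cached ? READ_FROM_PAGE_CACHE : READ_FROM_FS;
}
```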


Again, I'm not a kernel expert and have not looked at any of the kernel code.  I am basing this on my own experiments with FUSE.

Hope this helps,
John

Re: fuse-devel Digest, Vol 129, Issue 3

Nikolaus Rath
In reply to this post by Maruhan Park
On Jan 04 2017, 박마루한 <mrpark-tlZpZqNTSqmJ1ku80POtVQC/[hidden email]> wrote:
> Darn. So we still don't know what is cached for /mnt/fuse/foo/file, when
> fuse opens nothing, does wacky things, opens many files, etc.

"We" know that very well :-).

* Whatever data your file system process returns when the kernel issues
  a read() request will be cached.

* Whatever data a userspace process writes through the kernel into your
  file system will be cached.

This has nothing to do with how your file system implements its open()
method.

Best,
-Nikolaus
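[Editor's note] Nikolaus's two bullets can be made concrete with a toy page cache: the same cache is populated both by your filesystem's read replies and by writes that pass through the kernel, and neither path involves your open() implementation. Everything below is an illustrative model, not kernel structures:

```c
#include <stdbool.h>
#include <string.h>

#define NPAGES 8
#define PAGE_SIZE 16

/* Toy per-file page cache. */
struct toy_cache {
    bool present[NPAGES];
    char data[NPAGES][PAGE_SIZE];
};

/* The filesystem answered a kernel read request: the reply data
 * populates the cache. */
static void on_fs_read_reply(struct toy_cache *c, int page, const char *buf)
{
    memcpy(c->data[page], buf, PAGE_SIZE);
    c->present[page] = true;
}

/* A process wrote through the kernel: the written data is cached too,
 * through the very same structure. */
static void on_process_write(struct toy_cache *c, int page, const char *buf)
{
    memcpy(c->data[page], buf, PAGE_SIZE);
    c->present[page] = true;
}
```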


--
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

             »Time flies like an arrow, fruit flies like a Banana.«
