Discussion:
2GB File size limit exceeded over NFS on 10.3
Dave Howorth
2007-12-07 11:36:26 UTC
Permalink
I have a server that's been running fine for some years with Suse 9.2.
I've just installed 10.3 and am now getting 'File size limit exceeded'
errors.

The access is being made by an application on another box, running 9.2.
It's trying to copy a 2.7 GB file from a local disk to a filesystem on
the server that it's mounting using NFS. The filesystem is reiserfs and
was not changed when I upgraded the OS. Both machines are 64-bit. The
application has worked fine for years but now says:

File size limit exceeded

I don't have a file size limit (file size (blocks, -f) unlimited)

Google suggests it may be a restriction of NFS v2, though why that is
now in use is another mystery. rpcinfo and nfsstat show that both server
and client are running v2 and v3. So far I haven't been able to find out
which version is in use for a particular mount. How can I do that?

Does anybody recognize these symptoms?

Thanks, Dave
Andreas Jaeger
2007-12-07 11:43:57 UTC
Permalink
Post by Dave Howorth
I have a server that's been running fine for some years with Suse 9.2.
I've just installed 10.3 and am now getting 'File size limit exceeded'
errors.
The access is being made by an application on another box, running 9.2.
It's trying to copy a 2.7 GB file from a local disk to a filesystem on
the server that it's mounting using NFS. The filesystem is reiserfs and
was not changed when I upgraded the OS. Both machines are 64-bit. The
File size limit exceeded
I don't have a file size limit (file size (blocks, -f) unlimited)
Google showed it may be a restriction of NFS V2 though why that is now
running is another mystery. rpcinfo and nfsstat shows that server and
client are both running both v2 and v3. I haven't been able so far to
find out which version is in use for a particular mount. How can I do that?
Does anybody recognize these symptoms?
You might be using the old reiserfs format, which cannot cope with
files larger than 2 GB - or one of the tools in 9.2 cannot.

Andreas
--
Andreas Jaeger, Director Platform/openSUSE, ***@suse.de
SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nürnberg)
Maxfeldstr. 5, 90409 Nürnberg, Germany
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126
Rui Santos
2007-12-07 11:58:46 UTC
Permalink
Post by Dave Howorth
I have a server that's been running fine for some years with Suse 9.2.
I've just installed 10.3 and am now getting 'File size limit exceeded'
errors.
The access is being made by an application on another box, running 9.2.
It's trying to copy a 2.7 GB file from a local disk to a filesystem on
the server that it's mounting using NFS. The filesystem is reiserfs and
was not changed when I upgraded the OS. Both machines are 64-bit. The
File size limit exceeded
I don't have a file size limit (file size (blocks, -f) unlimited)
Google showed it may be a restriction of NFS V2 though why that is now
running is another mystery. rpcinfo and nfsstat shows that server and
client are both running both v2 and v3. I haven't been able so far to
find out which version is in use for a particular mount. How can I do that?
Does anybody recognize these symptoms?
Yes,

You are probably establishing an NFSv2 connection.
The workaround I used to solve the problem was to instruct the server
not to accept NFSv1 or NFSv2 connections.

But you can try to force a specific version in the mount command on the
client side by giving the parameter 'nfsvers=3', e.g.:

mount -t nfs -o defaults,nfsvers=3 nfsserver:/share /mountpoint

If that works, you can then instruct the server not to accept NFSv1/2
connections, if it suits your needs, of course.
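To make that permanent, a client-side /etc/fstab entry along these lines
should do it (the server name and paths are just placeholders):

# example fstab entry: force NFSv3 over TCP for this export
nfsserver:/share  /mountpoint  nfs  nfsvers=3,tcp,rsize=8192,wsize=8192,hard,intr  0 0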
Post by Dave Howorth
Thanks, Dave
Hope it helps,
Rui
--
Rui Santos
http://www.ruisantos.com/

Veni, vidi, Linux!
Dave Howorth
2007-12-07 12:13:13 UTC
Permalink
Post by Andreas Jaeger
You might use the old reiserfs format which cannot cope with files
larger than 2 GB - or one of the tools in 9.2 cannot,
I don't think so, since it's been working for years :) It only stopped
working when I installed 10.3 on the server. Apart from root, the
filesystems are exactly as they were before.

Cheers, Dave
Dave Howorth
2007-12-07 12:16:48 UTC
Permalink
Post by Rui Santos
Post by Dave Howorth
I have a server that's been running fine for some years with Suse 9.2.
I've just installed 10.3 and am now getting 'File size limit exceeded'
errors.
The access is being made by an application on another box, running 9.2.
It's trying to copy a 2.7 GB file from a local disk to a filesystem on
the server that it's mounting using NFS. The filesystem is reiserfs and
was not changed when I upgraded the OS. Both machines are 64-bit. The
File size limit exceeded
I don't have a file size limit (file size (blocks, -f) unlimited)
Google showed it may be a restriction of NFS V2 though why that is now
running is another mystery. rpcinfo and nfsstat shows that server and
client are both running both v2 and v3. I haven't been able so far to
find out which version is in use for a particular mount. How can I do that?
Does anybody recognize these symptoms?
Yes,
You are probably establishing a NFSv2 connection.
That's what I'm guessing as well. I want to find out what versions all
the connections are using to confirm that, but I don't know how.
Post by Rui Santos
The workaround I used to solve the problem was to instruct the server
not to accept NFSv1 nor NFSv2 connections.
That's what I think I'd like to do, but first I need to discover what
all the existing mounts are, find out why any v2 ones are v2, and check
whether they need to be, before I can turn off v2 mount support.
Post by Rui Santos
But, you can try to force a specific version on the mount command from
the client side by giving the parameter 'nfsvers=3'. eg: mount -t nfs -o
defaults,nfsvers=3 nfsserver:/share /mountpoint
Thanks, I'll try that to at least see if I can fix the specific issue I
have with this particular application.

Cheers, Dave
Post by Rui Santos
If it works, you can then instruct the server not to accept NFSv1/2
connections, if it suits your needs, of course.
Post by Dave Howorth
Thanks, Dave
Hope it helps,
Rui
Hans Witvliet
2007-12-07 14:34:02 UTC
Permalink
Post by Dave Howorth
I have a server that's been running fine for some years with Suse 9.2.
I've just installed 10.3 and am now getting 'File size limit exceeded'
errors.
The access is being made by an application on another box, running 9.2.
It's trying to copy a 2.7 GB file from a local disk to a filesystem on
the server that it's mounting using NFS. The filesystem is reiserfs and
was not changed when I upgraded the OS. Both machines are 64-bit. The
File size limit exceeded
I don't have a file size limit (file size (blocks, -f) unlimited)
Google showed it may be a restriction of NFS V2 though why that is now
running is another mystery. rpcinfo and nfsstat shows that server and
client are both running both v2 and v3. I haven't been able so far to
find out which version is in use for a particular mount. How can I do that?
Does anybody recognize these symptoms?
Thanks, Dave
Hi,

For larger files, you cannot use the default mount options any more!
You must use nfsvers=3 instead of nfsvers=2 (and use tcp instead of udp).
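That is, something along these lines, with the server and paths adapted
to your setup:

mount -t nfs -o nfsvers=3,tcp nfsserver:/share /mountpoint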

HW
Dave Howorth
2007-12-07 20:49:44 UTC
Permalink
Post by Hans Witvliet
For larger files, you can not use the default mount options anymore!
You must use nfsvers=3 instead on nfsver=2 (and use tcp instead of udp)
Hi Hans, Thanks for this. I will try it on Monday. But again, *this has
been working for years.* I've been copying a file > 2 GB every two weeks
for years, successfully, without using this option. It has only now
stopped working AFTER I installed 10.3 on the server. I haven't changed
the client - where the mount request is made.

Something has broken backwards compatibility and I'd like to discover
what.

BTW, why do I need to use TCP?

And if anybody knows, how can I discover whether any specific link is
using NFSV2 or NFSV3 and TCP or UDP?

Thanks, Dave
Dave Howorth
2007-12-10 17:05:39 UTC
Permalink
You'll need to read my reply using the (1), (2), (3) sequence for it to
make sense :)

Sorry, it was easier than cutting and pasting bits
Post by Dave Howorth
Post by Rui Santos
Post by Dave Howorth
I have a server that's been running fine for some years with Suse 9.2.
I've just installed 10.3 and am now getting 'File size limit exceeded'
errors.
The access is being made by an application on another box, running 9.2.
It's trying to copy a 2.7 GB file from a local disk to a filesystem on
the server that it's mounting using NFS. The filesystem is reiserfs and
was not changed when I upgraded the OS. Both machines are 64-bit. The
File size limit exceeded
I don't have a file size limit (file size (blocks, -f) unlimited)
Google showed it may be a restriction of NFS V2 though why that is now
running is another mystery. rpcinfo and nfsstat shows that server and
client are both running both v2 and v3. I haven't been able so far to
find out which version is in use for a particular mount. How can I do that?
Does anybody recognize these symptoms?
Yes,
You are probably establishing a NFSv2 connection.
That's what I'm guessing as well. I want to find out what versions all
the connections are using to confirm that, but I don't know how.
(3) I still haven't found any way to discover what NFS version
particular mounts are using.
Post by Dave Howorth
Post by Rui Santos
The workaround I used to solve the problem was to instruct the server
not to accept NFSv1 nor NFSv2 connections.
(2) The script /etc/init.d/nfsserver was completely rewritten between
9.2 and 10.3. In 9.2 it started rpc.mountd like this:

startproc /usr/sbin/rpc.mountd

while in 10.3 it starts it like this:

echo "+2 +3 -4" > /proc/fs/nfsd/versions
VERSION_PARAMS="--no-nfs-version 4"
...
startproc /usr/sbin/rpc.mountd $VERSION_PARAMS

I don't know whether that would be enough to change the default
behaviour. But there doesn't seem to be any provision to change the
options that rpc.mountd is started with. Did you hack the script?
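If hacking the script is the only way, I imagine reusing its own
mechanism would work - an untested sketch, to be run while the NFS
server is stopped:

# refuse NFSv2, offer only v3 (and still no v4)
echo "-2 +3 -4" > /proc/fs/nfsd/versions
# start rpc.mountd without v2 (and v4) support as well
startproc /usr/sbin/rpc.mountd --no-nfs-version 2 --no-nfs-version 4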
Post by Dave Howorth
That's what I think I'd like to do, but I need to discover what all the
existing mounts are and find out why any V2 ones are V2 and whether they
need to be before I can turn off V2 mount support.
Post by Rui Santos
But, you can try to force a specific version on the mount command from
the client side by giving the parameter 'nfsvers=3'. eg: mount -t nfs -o
defaults,nfsvers=3 nfsserver:/share /mountpoint
(1) I've now done this as you and Hans suggested and it has fixed the
problem. So I think that confirms that the SuSE 10.3 server is getting
NFSv2 connections by default, whereas the SuSE 9.2 server got NFSv3
connections. I still don't understand why, though. See above.
Post by Dave Howorth
Thanks, I'll try that to at least see if I can fix the specific issue I
have with this particular application.
Cheers, Dave
Post by Rui Santos
If it works, you can then instruct the server not to accept NFSv1/2
connections, if it suits your needs, of course.
Post by Dave Howorth
Thanks, Dave
Hope it helps,
Rui
M. Todd Smith
2007-12-10 17:38:32 UTC
Permalink
Post by Dave Howorth
Post by Hans Witvliet
For larger files, you can not use the default mount options anymore!
You must use nfsvers=3 instead on nfsver=2 (and use tcp instead of
udp)
The defaults since SuSE 9.x have been TCP and NFSv3. It is still worth
declaring them in /etc/fstab for the sake of clarity.
Post by Dave Howorth
Hi Hans, Thanks for this. I will try it on Monday. But again, *this
has
been working for years.* I've been copying a file > 2 GB every two
weeks
for years, successfully, without using this option. It has only now
stopped working AFTER I installed 10.3 on the server. I haven't
changed
the client - where the mount request is made.
The client is where all the NFS mount options are asked for, so if you
haven't changed it, then perhaps that should be the first place to
look for the problem.
Could you please copy your mount entry from /etc/fstab on the client
and /etc/exports on the server and post them in this thread?
Post by Dave Howorth
Something has broken backwards compatibility and I'd like to discover
what.
BTW, why do I need to use TCP?
It's debatable whether you really need to use TCP in a non-WAN setting
with good hardware. UDP has no flow control and little has been added to
the protocol over the past 10 years or so; TCP is quite the opposite.
Each has advantages and disadvantages, but it is generally accepted that
TCP is easier and better to use. Should you choose to go with UDP, there
has been much discussion about not using anything over an 8k rsize/wsize
because of the problems it causes.
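Purely as an illustration (reusing the placeholder share from earlier in
the thread), a UDP mount kept within that limit might look like:

mount -t nfs -o nfsvers=3,udp,rsize=8192,wsize=8192 nfsserver:/share /mountpoint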
Post by Dave Howorth
And if anybody knows, how can I discover whether any specific link is
using NFSV2 or NFSV3 and TCP or UDP?
On your client, if you type `cat /proc/mounts` you will get back a full
listing of your mounts and all the options they are using to connect to
your NFS server (including the defaults that you wouldn't normally see
in /etc/fstab). Typing `mount` will also show how you are connected to
the NFS server, but with the default connection information hidden and
only the extended options you might have used in /etc/fstab shown.
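For example, to see only the NFS mounts and the options they actually
negotiated, something like this should do:

grep ' nfs ' /proc/mounts
# or, equivalently:
mount -t nfs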

Cheers
Todd

Systems Administrator
---------------------------------------------
Soho VFX - Visual Effects Studio
99 Atlantic Avenue, Suite 303
Toronto, Ontario, M6K 3J8
(416) 516-7863
http://www.sohovfx.com
---------------------------------------------
Hans Witvliet
2007-12-10 21:39:22 UTC
Permalink
Post by M. Todd Smith
The default options since SuSe 9.x have been for TCP and NFSv3 by
default. It is still worth it to declare it in /etc/fstab for the
sake of clarity.
I wouldn't put any money on that statement!
We regularly have a system falling back to UDP and NFSv2, thus
truncating and corrupting the Xen images, which are all way larger
than 2 GB (all of them are between 6 and 20 GB).

Another nice "feature" is that sometimes the portmapper does not come up
after an unplanned reboot, causing very long delays.

hw
M. Todd Smith
2007-12-10 22:20:33 UTC
Permalink
Post by Hans Witvliet
Post by M. Todd Smith
The default options since SuSe 9.x have been for TCP and NFSv3 by
default. It is still worth it to declare it in /etc/fstab for the
sake of clarity.
I wouldn't put any money on that statement!
Regularly we have a system falling back to udp and using NFS-2;
Thus trunkating and corrupting the xen-images that are all way larger
than 2GB (all of them are between 6 and 20GB)
I wouldn't write something I hadn't checked. Here's the pudding:

[***@doozer3 /proc]$ cat /etc/SuSE-release
SuSE Linux 9.3 (i586)
VERSION = 9.3

To show that I'm not forcing NFSv3 or TCP:

[***@doozer3 /proc]$ cat /etc/fstab
/dev/sda3        /              reiserfs  acl,user_xattr   1 1
/dev/sda1        /boot          reiserfs  acl,user_xattr   1 2
/dev/sda2        swap           swap      pri=42           0 0
devpts           /dev/pts       devpts    mode=0620,gid=5  0 0
proc             /proc          proc      defaults         0 0
usbfs            /proc/bus/usb  usbfs     noauto           0 0
sysfs            /sys           sysfs     noauto           0 0
/dev/cdrom       /media/cdrom   subfs     noauto,fs=cdfss,ro,procuid,nosuid,nodev,exec,iocharset=utf8  0 0
/dev/fd0         /media/floppy  subfs     noauto,fs=floppyfss,procuid,nodev,nosuid,sync  0 0
/dev/sdb1        /data          reiserfs  acl,user_xattr   1 1
*************:/ifs/data/linux_home  /home          nfs  noexec,dev,suid,rw,rsize=32768,wsize=32768,timeo=500,retrans=10,retry=60  0 0
*************:/ifs/data/archive     /mnt/archive   nfs  exec,dev,suid,rw,rsize=32768,wsize=32768,timeo=500,retrans=10,retry=60  0 0
*************:/ifs/data/shared      /mnt/shared    nfs  exec,dev,suid,rw,rsize=32768,wsize=32768,timeo=500,retrans=10,retry=60  0 0
*************:/ifs/data/projects    /mnt/projects  nfs  exec,dev,suid,rw,rsize=32768,wsize=32768,timeo=500,retrans=10,retry=60  0 0

Here's what /proc/mounts returns - the default connect settings.

[***@doozer3 /proc]$ cat /proc/mounts
rootfs / rootfs rw 0 0
initramfsdevs /lib/klibc/dev tmpfs rw 0 0
/dev/sda3 / reiserfs rw 0 0
proc /proc proc rw,nodiratime 0 0
sysfs /sys sysfs rw 0 0
devpts /dev/pts devpts rw 0 0
tmpfs /dev/shm tmpfs rw 0 0
/dev/sda1 /boot reiserfs rw 0 0
/dev/sdb1 /data reiserfs rw 0 0
usbfs /proc/bus/usb usbfs rw 0 0
***************:/ifs/data/linux_home /home nfs rw,noexec,v3,rsize=32768,wsize=32768,hard,tcp,lock,addr=**************** 0 0
***************:/ifs/data/archive /mnt/archive nfs rw,v3,rsize=32768,wsize=32768,hard,tcp,lock,addr=**************** 0 0
***************:/ifs/data/shared /mnt/shared nfs rw,v3,rsize=32768,wsize=32768,hard,tcp,lock,addr=**************** 0 0
***************:/ifs/data/projects /mnt/projects nfs rw,v3,rsize=32768,wsize=32768,hard,tcp,lock,addr=**************** 0 0
/dev/fd0 /media/floppy subfs rw,sync,nosuid,nodev 0 0
Post by Hans Witvliet
Other nice "feature" is that sometimes portmapper does not come up
after
a unplanned reboot, causing very long delays.
Portmap relies on both the syslog service and the network service coming
up before it will; in my experience the problem after hard reboots
usually lies in those two coming back up.

Systems Administrator
---------------------------------------------
Soho VFX - Visual Effects Studio
99 Atlantic Avenue, Suite 303
Toronto, Ontario, M6K 3J8
(416) 516-7863
http://www.sohovfx.com
---------------------------------------------
Dave Howorth
2007-12-11 10:40:34 UTC
Permalink
Todd,

Thanks very much for your replies. They help enormously (not yet enough
to say SOLVED, sadly) ...
Post by M. Todd Smith
Post by Dave Howorth
Post by Hans Witvliet
For larger files, you can not use the default mount options anymore!
You must use nfsvers=3 instead on nfsver=2 (and use tcp instead of udp)
The default options since SuSe 9.x have been for TCP and NFSv3 by
default. It is still worth it to declare it in /etc/fstab for the sake
of clarity.
I'm glad Hans questioned you in a later mail because I had trouble
believing this as well :( But you've already answered the question
before I started writing this :)

Everything I've read about nfs says that v2 is the default, but I've
checked as you described and I'm seeing v3 as you say. Is that a
Suse-only thing?
Post by M. Todd Smith
Post by Dave Howorth
Hi Hans, Thanks for this. I will try it on Monday. But again, *this has
been working for years.* I've been copying a file > 2 GB every two weeks
for years, successfully, without using this option. It has only now
stopped working AFTER I installed 10.3 on the server. I haven't changed
the client - where the mount request is made.
The client is where all the NFS mount options are asked for, so if you
haven't changed it, then perhaps that should be the first place to look
for the problem.
I don't understand this. I'm happy to go along but I don't understand.
I'd expect to look for problems in the place that *was* changed? But
here goes ...
Post by M. Todd Smith
Could you please copy your mount entry from /etc/fstab on the client and
/etc/exports on the server and post them in this thread?
client (suse3) /etc/fstab:

suse1:/data  /nfs/suse1/data  nfs  rsize=8192,wsize=8192,intr,bg,noatime,nfsvers=3  0 0

suse1:/home  /home            nfs  rsize=8192,wsize=8192,intr,bg,noatime            0 0


client (suse3) /proc/mounts:

suse1:/home  /home            nfs  rw,noatime,v3,rsize=8192,wsize=8192,hard,intr,tcp,lock,addr=suse1  0 0

suse1:/data  /nfs/suse1/data  nfs  rw,noatime,v3,rsize=8192,wsize=8192,hard,intr,tcp,lock,addr=suse1  0 0


server (suse1) /etc/exports:

/data @scop_hosts(rw,root_squash,async,no_subtree_check)
/data/wwpdb *.lmb.internal(ro,all_squash,async,no_subtree_check)
/home @scop_hosts(rw,root_squash,async,no_subtree_check)


server (suse1) /etc/netgroup:

scop_hosts (other-hosts-snipped) (suse3,,) (other-hosts-snipped)
Post by M. Todd Smith
Post by Dave Howorth
Something has broken backwards compatibility and I'd like to discover
what.
So I can now see more information but I don't know how to interpret it
to explain the symptoms. Originally, I didn't have that nfsvers=3 option
on the 'data' mount - it was the same as the 'home' mount. And in that
configuration I had the 2GB size limit problem. I added the nfsvers=3
option and remounted 'data' and now I don't have the problem. But
/proc/mounts seems to show that the two mounts - one old-style and one
new-style fstab entry - result in identical mounts. So why would I have
the problem in the first place?

I can't do another experiment at present, because having got over the
problem, there's a job running that will take a few days to finish. I'd
like to nail the issue completely though. I don't want it coming back
next time the machine is rebooted after I've forgotten everything about it!
Post by M. Todd Smith
Post by Dave Howorth
BTW, why do I need to use TCP?
Its debatable if you really need to use TCP in a non-WAN setting with
good hardware. UDP has no flow control and little has been added to the
protocol over the past 10 years or so. TCP is quite the opposite.
Using either has both advantages and disadvantages, it is generally
accepted that TCP is easier and better to use. Should you choose to go
with UDP there has been much conversation about not using anything over
an 8k rwsize because of problems it causes.
Thanks - it seems I've been using tcp without realizing :) So I could
increase my r/wsize.
Post by M. Todd Smith
Post by Dave Howorth
And if anybody knows, how can I discover whether any specific link is
using NFSV2 or NFSV3 and TCP or UDP?
On your client if you type `cat /proc/mounts` then you will get back a
full listing of your mounts and all the options they are connecting with
to your nfs server (including the ones that are defaults that you
wouldn't normally see in just /etc/fstab.
Excellent! This is what I was missing.

Thanks Todd,
Dave
Post by M. Todd Smith
Typing `mount` will also
return how you are connected to the nfs server but with the default
connection information hidden and extended options you might have used
in /etc/fstab shown.
M. Todd Smith
2007-12-11 18:26:48 UTC
Permalink
Post by Dave Howorth
Todd,
Thanks very much for your replies. They help enormously (not yet
enough
to say SOLVED, sadly) ...
Looks like you are on your way now though :)
Post by Dave Howorth
Post by M. Todd Smith
Post by Hans Witvliet
For larger files, you can not use the default mount options
anymore!
You must use nfsvers=3 instead on nfsver=2 (and use tcp instead
of udp)
The default options since SuSe 9.x have been for TCP and NFSv3 by
default. It is still worth it to declare it in /etc/fstab for the
sake
of clarity.
Everything I've read about nfs says that v2 is the default, but I've
checked as you described and I'm seeing v3 as you say. Is that a
Suse-only thing?
Not that I am aware of. I checked the only other machine I have running
a 2.4.x kernel (a 2.4.20 Red Hat 9 machine) and it too defaults to NFSv3
connections on 4 different servers. I haven't seen a machine default to
NFSv2 for the past 4 years or so. You can really do two things: get rid
of the server's ability to serve up an NFSv2 connection (which I thought
might have been happening on our SANs, but I set up a small NFS server
to check it out and it's still defaulting to v3) - this procedure is
outlined in the NFS FAQ I link below - or force v3 on all your clients.
I would probably force it on the clients, as it is far clearer to see it
in many /etc/fstab's than in a single /etc/sysconfig file.
Post by Dave Howorth
Post by M. Todd Smith
The client is where all the NFS mount options are asked for, so if you
haven't changed it, then perhaps that should be the first place to
look
for the problem.
I don't understand this. I'm happy to go along but I don't understand.
I'd expect to look for problems in the place that *was* changed? But
here goes ...
The client asks the server for whatever options you give it; if the
server can comply then it will. Many people don't enforce NFSv3 or TCP
explicitly, and thus it can easily be a point of contention. The changed
part is not always the problem piece, although I completely agree with
your logic :). It's certainly odd behaviour; it reminds me of the days
when you had to force 1000/full duplex connections because
autonegotiation never seemed to work properly.

I think the FAQ at http://nfs.sourceforge.net/ may help you out a
little in the troubleshooting process of NFS.

Cheers
Todd
Systems Administrator
---------------------------------------------
Soho VFX - Visual Effects Studio
99 Atlantic Avenue, Suite 303
Toronto, Ontario, M6K 3J8
(416) 516-7863
http://www.sohovfx.com
---------------------------------------------
Dave Howorth
2007-12-12 13:27:42 UTC
Permalink
Post by Dave Howorth
Everything I've read about nfs says that v2 is the default, but I've
checked as you described and I'm seeing v3 as you say. Is that a
Suse-only thing?
Not that I am aware of, I checked the only other machine I have running
a 2.4.x kernel (2.4.20 redhat 9 machine). It too defaults to nfsv3
connections on 4 different servers. I haven't seen a machine default to
nfsv2 for the past 4 years or so. You can do two things really, get rid
of the servers ability to serve up an NFSv2 connection, which I thought
might have been happening on our SANS but I setup a small NFS server to
check it out and its still defaulting to v3. This procedure is outlined
in the NFS FAQ I link below.
Yes, I'd seen that, but when I looked at how to implement that on a Suse
system it seems there's no place to configure it. It would mean hacking
the /etc/init.d/nfsserver script unless I've missed something.
Or force v3 on all your clients. I would
probably force it all on the clients as it is far more clear to see it
in many /etc/fstab's then it is on a single /etc/sysconfig file.
Except it's a lot harder to see one missing option in many fstabs than
in a single server config file (but since there isn't an appropriate
server config file, that's moot :)

Thanks again for your help,
Dave
M. Todd Smith
2007-12-12 15:16:17 UTC
Permalink
Post by Dave Howorth
Or force v3 on all your clients. I would
probably force it all on the clients as it is far more clear to see
it
in many /etc/fstab's then it is on a single /etc/sysconfig file.
Except it's a lot harder to see one missing option in many fstabs than
in a single server config file (but since there isn't an appropriate
server config file, that's moot :)
After some perusing, it doesn't seem to be possible to pass that option
to a SuSE NFS server.

To explain my preference for many /etc/fstab's over a single
/etc/sysconfig file: I usually push out new mount entries by script, so
all of them are the same. Moreover, if a single client has a problem I
am more apt to dissect that client system than to take apart the server,
which is still serving every other client fine. The problem would be
more transparent in that sense.
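A rough sketch of what I mean by pushing entries out by script (the host
names and the entry itself are only examples):

ENTRY='nfsserver:/share  /mountpoint  nfs  nfsvers=3,tcp  0 0'
for host in client1 client2 client3; do
    # append the entry only if that mount point isn't in fstab already
    ssh root@$host "grep -q ' /mountpoint ' /etc/fstab || echo '$ENTRY' >> /etc/fstab"
done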

Cheers
Todd
Systems Administrator
---------------------------------------------
Soho VFX - Visual Effects Studio
99 Atlantic Avenue, Suite 303
Toronto, Ontario, M6K 3J8
(416) 516-7863
http://www.sohovfx.com
---------------------------------------------
Joseph Loo
2007-12-13 01:28:11 UTC
Permalink
Post by M. Todd Smith
Post by Dave Howorth
Or force v3 on all your clients. I would
probably force it all on the clients as it is far more clear to see
it
in many /etc/fstab's then it is on a single /etc/sysconfig file.
Except it's a lot harder to see one missing option in many fstabs than
in a single server config file (but since there isn't an appropriate
server config file, that's moot :)
After some perusing it doesn't seem to be possible to pass that option
to a SuSe NFS server.
To explain my reason on many /etc/fstab's over a single /etc/
sysconfig. I usually push out new mount entries by scripts so all of
them are the same. Moreover if a single client has a problem I am
more apt to dissect the contents of that client system than I am to
take apart the server which is still servicing everyone other client
fine. The problem would be more transparent in that sense.
Cheers
Todd
Systems Administrator
---------------------------------------------
Soho VFX - Visual Effects Studio
99 Atlantic Avenue, Suite 303
Toronto, Ontario, M6K 3J8
(416) 516-7863
http://www.sohovfx.com
---------------------------------------------
Have you tried using autofs and LDAP to create your mount points?
--
Joseph Loo
***@acm.org
M. Todd Smith
2007-12-13 14:40:36 UTC
Permalink
Post by Joseph Loo
Have you tied using autofs and ldap to create your mount points?
--
I admin a mixed Mac/Linux environment. It's been a couple of years since
I tried to use autofs and LDAP to automount, but it has always been
something that we've wanted to do. Now, with OS X 10.5 being fully Open
Directory compliant, things should slide in this direction much more
easily than they used to.

Unfortunately this is on a long list of things to do, and only one
systems guy to do them :)

Cheers
Todd

Systems Administrator
---------------------------------------------
Soho VFX - Visual Effects Studio
99 Atlantic Avenue, Suite 303
Toronto, Ontario, M6K 3J8
(416) 516-7863
http://www.sohovfx.com
---------------------------------------------
Linda Walsh
2007-12-10 20:57:52 UTC
Permalink
Post by Dave Howorth
Post by Hans Witvliet
For larger files, you can not use the default mount options anymore!
You must use nfsvers=3 instead on nfsver=2 (and use tcp instead of udp)
Hi Hans, Thanks for this. I will try it on Monday. But again, *this has
been working for years.* I've been copying a file > 2 GB every two weeks
for years, successfully, without using this option. It has only now
stopped working AFTER I installed 10.3 on the server. I haven't changed
the client - where the mount request is made.
Something has broken backwards compatibility and I'd like to discover
what.
---
The NFS client and server packages were renamed in 10.3 -- that's the
first difference (though it shouldn't make a difference). The next
thing -- as near as I can tell, 10.3 defaults to NFSv4. At least that is
what I found when I ran into the same problems on 10.3. I "upgraded" the
packages to the working NFS packages from 10.2 and things went back to
normal and started working.

Also -- in trying to upgrade, somehow I picked the wrong NFS server for
one of my machines -- it started serving with a user-space NFS server
instead of the kernel NFS server. The user-space NFS server I had been
using also seemed to have a 2 GB limit, as well as being limited to
NFSv2.

I am now running with NFSv3 and things seem to work -- I did have to
"backgrade" the needed NFS-related packages to SuSE 10.2, though, to
make it work.

NFSv4 also seems to need another daemon or two -- some sort of id
mapper, at least. It might be useful in some environments, but until I
complete the upgrades on my machines I am sticking with SuSE 10.2's NFS
packages, as they just "worked" for me.
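For what it's worth, a quick way to see which NFS versions and
transports a given server is actually registering (replace 'servername'
with your server) is something like:

rpcinfo -p servername | egrep 'nfs|mountd'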

Good luck,
Linda
Dave Howorth
2007-12-11 11:14:06 UTC
Permalink
Post by Linda Walsh
Post by Dave Howorth
Post by Hans Witvliet
For larger files, you can not use the default mount options anymore!
You must use nfsvers=3 instead on nfsver=2 (and use tcp instead of udp)
Hi Hans, Thanks for this. I will try it on Monday. But again, *this has
been working for years.* I've been copying a file > 2 GB every two weeks
for years, successfully, without using this option. It has only now
stopped working AFTER I installed 10.3 on the server. I haven't changed
the client - where the mount request is made.
Something has broken backwards compatibility and I'd like to discover
what.
---
The name of the nfs clients and server packages were renamed
in 10.3 -- that's the first different (that shouldn't make a difference).
The next thing -- as near as I can tell, 10.3 defaults to NFS4.
I don't have v4 running, but I do appear to have some v2 mounts. Now
that I know about /proc/mounts, I'll see if I can find a client machine
that admits to owning the traffic:

suse1:~# nfsstat
Server rpc stats:
calls badcalls badauth badclnt xdrcall
27875534 1 1 0 0

Server nfs v2:
null        getattr      setattr     root        lookup       readlink
1        0% 1418162  17% 58016    0% 0        0% 1987379  23% 479859    5%
read        wrcache      write       create      remove       rename
3509146 42% 0         0% 493881   5% 58032    0% 27        0% 6         0%
link        symlink      mkdir       rmdir       readdir      fsstat
0        0% 290496    3% 14       0% 5        0% 2308      0% 5         0%

Server nfs v3:
null        getattr      setattr     lookup      access       readlink
20       0% 5615777  28% 83718    0% 6024169 30% 1843591   9% 2735877  13%
read        write        create      mkdir       symlink      mknod
2390416 12% 773094    3% 16955    0% 1427     0% 25        0% 0         0%
remove      rmdir        rename      link        readdir      readdirplus
1638     0% 336       0% 16125    0% 77       0% 2232      0% 34855     0%
fsstat      fsinfo       pathconf    commit
1180     0% 29        0% 0        0% 34800    0%


I think I chose not to switch it on when I set up the NFS server.
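Going back to tracking down the v2 clients, one thought (untested here):
if nfsstat on the 9.2 clients supports the -m flag, running it on each
client should list the mounted NFS filesystems together with the version
actually negotiated, e.g.:

# list mounted NFS filesystems and their negotiated mount options
nfsstat -m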
Post by Linda Walsh
At
least this was what I found out when I ran into the same problems in
10.3. I "upgraded" the packages to the working nfs packages in 10.2
and things went back to normal and started working.
I'd seen your problem in the archive but I wasn't sure how similar the
symptoms are and I didn't want to start a new installation by putting
non-standard parts in the engine. If it's configuration, I hope to find
my mistake; if it's a bug, I hope we can identify it so it can be fixed.

<snip>
Post by Linda Walsh
NFSv4 also seems to need another daemon or two -- some sort of
id mapper, at least. Might be useful in some environments, but until I
complete upgrades on my machines, I am sticking with SuSE10.2's NFS
images as they just "worked" for me.
I'd agree that there doesn't seem to be any point in moving to v4 in my
environment.
Post by Linda Walsh
Good luck,
Linda
Thanks, Dave
Carlos E. R.
2007-12-23 20:28:05 UTC
Permalink
Post by Dave Howorth
Google showed it may be a restriction of NFS V2 though why that is now
running is another mystery. rpcinfo and nfsstat shows that server and
client are both running both v2 and v3. I haven't been able so far to
find out which version is in use for a particular mount. How can I do that?
Just by chance, I found this. I think 'cat /proc/fs/nfsfs/servers' or
'...volumes' will do:


NV SERVER PORT USE HOSTNAME
v3 c0a8010c 801 1 nimrodel.valinor


NV SERVER PORT DEV FSID
v3 c0a8010c 801 0:19 fefbab737c720323


HTH

--
Cheers,
Carlos E. R.
