I tried to answer you using Google Chrome and my answer did not load here, although I received an email saying I had answered :)
Now I'm trying with Explorer and the answer has appeared.
So, let me answer you:
We use FSFS. Most of our repositories are about 200 MB, and there are a few (4) of about 2 GB each.
We do not use externals.
We have hook scripts, but they are centralized in a single path, and each repository has a link to them.
These hook scripts do validation against our Jira instance and also perform inserts into an Oracle database to customize some features.
We have not tested our backups yet, but once I copied a repository to another server: I just made a tar.gz of the repository, uncompressed it on the other server, and it worked fine.
So I don't know whether my approach to backups (using filesystem backup tools) is bad or not, because the SVN documentation does not cover this; it only talks about svnadmin dump. But I would like to understand whether there is some reason not to do it the way I'm doing.
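For what it's worth, a raw filesystem copy of a *live* FSFS repository can catch a commit mid-write; `svnadmin hotcopy` is the documented way to get a consistent copy without stopping the server. A minimal sketch, assuming hypothetical paths (`/var/svn/myrepo`, `/backup/svn` are placeholders for your layout):

```shell
#!/bin/sh
# Hypothetical paths -- adjust to your actual layout.
REPO=/var/svn/myrepo
BACKUP_DIR=/backup/svn
STAMP=$(date +%Y%m%d)

# svnadmin hotcopy produces a consistent, ready-to-serve copy of the
# repository (including hooks and config) even while commits are in progress.
svnadmin hotcopy "$REPO" "$BACKUP_DIR/myrepo-$STAMP"

# The resulting copy is then safe to tar.gz and ship to another server,
# exactly like the manual copy described above.
tar -czf "$BACKUP_DIR/myrepo-$STAMP.tar.gz" -C "$BACKUP_DIR" "myrepo-$STAMP"
```

Your generic backup system could then pick up the hotcopy output instead of the live repository directory.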
Example: backups of Oracle datafiles only work if we stop the database (or configure it for online backup with RMAN). If I do not stop the database, the datafiles will be inconsistent and we would not be able to restore the DB from them.
But with SVN, I don't know whether we run into the same kind of problem when doing anything other than svnadmin dump.
We do not use svnadmin dump only because we would have to create scripts to do the dumps. That is easy, but I thought it was not needed, since our backup system is already configured for all of our machines.
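In case it helps, the dump script really is tiny. A sketch, assuming a hypothetical layout where all repositories live under one parent directory (`/var/svn` and `/backup/svn-dumps` are made-up names):

```shell
#!/bin/sh
# Hypothetical layout: every repository is a subdirectory of $PARENT.
PARENT=/var/svn
DUMP_DIR=/backup/svn-dumps

for repo in "$PARENT"/*; do
    name=$(basename "$repo")
    # --deltas keeps the dump smaller; gzip shrinks it further.
    svnadmin dump --deltas --quiet "$repo" | gzip > "$DUMP_DIR/$name.dump.gz"
done
```

One caveat worth knowing: a dump contains only the versioned history, not the hooks or repository config, so with your centralized hook setup you would still need to back those up separately; a hotcopy or filesystem copy includes them.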
What is your opinion?