Hi!

I'm just thinking about storing our whole company's configuration in GIT, because I'm all too used to it. That is, there are configuration dumps of n*10000 routers and switches, as well as "regular" configuration files on server machines (mostly Linux and Solaris). While probably all of the server machines could run GIT natively, we already have some scripts to dump all routers'/switches' configuration to a Solaris system, so we could import/commit it from there. There might be a small number of Windows machines, but I guess these will be handled by exporting the interesting stuff to Linux/Solaris machines...

I initially thought about running git-init-db in each machine's root directory and adding all interesting files, but that might hurt GIT's usage for individual software projects on those machines, no? Additionally, a lot of configuration files will be common, or at least very similar, and having a lot of separate repos would probably result in worse compression once we start using packs.

Another idea would be to regularly copy all interesting files into a staging directory (with the same directory structure as the root filesystem) and git-init-db'ing this staging directory, so as not to have a machine-wide .git/ in the root directory.

In both cases, I'd be left with a good number of GIT repos, which should probably be bound together with the GIT subproject functions. However, one really interesting thing would be to be able to get the diff of two machines' configuration files. (Think of machines that *should* all be identical!) For this, it would probably be easier not to put each machine into its own GIT repo, but to use a single one with a zillion branches, one for each machine.

Has anybody already tried something like this and can share some real-life experience on the topic?

MfG, JBG

-- 
Jan-Benedict Glaw      jbglaw@lug-owl.de              +49-172-7608481
Signature of: http://catb.org/~esr/faqs/smart-questions.html
the second  :
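
PS: To make the branch-per-machine idea a bit more concrete, here is roughly the kind of import script I'm imagining. All hostnames and paths below are made up, and it assumes the shared repo was set up once (git init plus an initial commit on master) and that the existing dump scripts already leave each machine's files under a per-host staging directory with the same layout as that machine's root filesystem:

  #!/bin/sh
  # import-config.sh <hostname>
  #
  # One shared repository with one branch per machine.  REPO, STAGING
  # and the hostnames are placeholders for whatever we actually use.

  REPO=/var/config-repo
  STAGING=/staging
  host=$1

  cd "$REPO" || exit 1

  # Switch to the machine's branch, creating it on the first import.
  if git rev-parse --verify -q "refs/heads/$host" >/dev/null; then
      git checkout -q "$host"
  else
      git checkout -q -b "$host" master
  fi

  # Mirror the staged tree into the work tree and record a snapshot.
  # (--exclude=.git keeps rsync --delete away from the repo metadata.)
  rsync -a --delete --exclude=.git "$STAGING/$host/" ./
  git add -A
  git commit -q -m "$host: config snapshot of $(date '+%Y-%m-%d %H:%M')"

Getting the diff of two machines that *should* be identical would then just be something like

  git diff web01 web02 -- etc/

(web01/web02 again being made-up hostnames).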