Managing large binary files with Git

I am looking for opinions on how to handle large binary files on which my source code (a web application) depends. We are currently discussing several alternatives:

  1. Copy the binary files by hand.
    • Pro: Not sure.
    • Contra: I am strongly against this, as it increases the likelihood of errors when setting up a new site or migrating the old one, and it adds yet another hurdle to clear.
  2. Manage them all with Git.
    • Pro: Removes the possibility of ‘forgetting’ to copy an important file.
    • Contra: Bloats the repository and reduces flexibility in managing the code base; checkouts, clones, etc. will take quite a while.
  3. Separate repositories.
    • Pro: Checking out/cloning the source code stays as fast as ever, and the images are properly archived in their own repository.
    • Contra: Removes the simplicity of having a single Git repository for the project. It surely introduces other issues I haven’t thought about.

What are your experiences/thoughts regarding this?

Also: Does anybody have experience with multiple Git repositories and managing them in one project?
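 
(For context on option 3: the standard Git mechanism for pulling a second repository into one project is git submodule. A minimal sketch, in which the repository URLs and the assets path are placeholders:)

```sh
# Add the image repository as a submodule of the main project
# (URL and path are placeholders)
git submodule add https://example.com/project-assets.git assets
git commit -m "Add image assets as a submodule"

# A fresh checkout then pulls both repositories in one go...
git clone --recursive https://example.com/project.git
# ...or, in an already-existing clone:
git submodule update --init
```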

The files are images used by a program which generates PDFs containing them. The files will not change very often (as in, years), but they are very relevant to the program. The program will not work without the files.


Another solution, available since April 2015, is Git Large File Storage (LFS) (by GitHub).

It uses the git-lfs extension (see git-lfs.github.com), tested here against a server that supports it, lfs-test-server:
you keep only the metadata (small pointer files) in the Git repository, and store the large files themselves elsewhere.
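
For example, a minimal LFS setup might look like the following; the `*.png` pattern, the `images/logo.png` path, and the server URL are placeholders for illustration:

```sh
# One-time per clone: install the Git LFS hooks in this repository
git lfs install

# Tell LFS which files to manage; this records a rule in .gitattributes
git lfs track "*.png"
git add .gitattributes

# From now on, matching files are committed as small pointer files;
# the real content is uploaded to the LFS server on push
git add images/logo.png
git commit -m "Add logo image via Git LFS"
git push origin master

# Optional: point LFS at a standalone server such as lfs-test-server
# instead of the Git host's built-in endpoint (URL is a placeholder)
git config -f .lfsconfig lfs.url "https://lfs.example.com/my-project"
```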

Demo (animated GIF): https://cloud.githubusercontent.com/assets/1319791/7051226/c4570828-ddf4-11e4-87eb-8fc165e5ece4.gif
