Git LFS Performance Optimizations with Anchorpoint and Assembla

How to optimize Git LFS for faster transfers in large-scale, game-development related projects.

Cláudia Fernandes
16 Jul 2025

If you are working in game development, VFX, simulation, or any project that uses large binary files, you have probably run into the limits of what Git can handle comfortably. Git LFS (Large File Storage) is a great help, but simply installing it and moving on is not enough.

Many Git LFS users are missing out on valuable performance improvements because they are not aware of the extra optimizations that can make Git LFS faster and more efficient. By using the right Git client and host, along with a few configuration tweaks, you can seriously improve your day-to-day workflow.

In this article, we will focus on how Anchorpoint, as your Git client, can optimize your Git LFS experience. We will also look at how Assembla complements this setup as a reliable, high-performance Git hosting provider.

What is Git LFS?

Git was built to track plain text files. It stores a snapshot of your files with every commit, but when it compresses its history into packfiles it stores text revisions as deltas, so each commit effectively costs only a ‘this bit changed from this to that’ record.

This system works great for text files because Git was designed around text. It is also what allows Git to merge files so efficiently. However, this becomes a problem with non-text files like PNGs, JPGs, audio files, animations, and video files. Git does not understand their intrinsic formats. To Git, they are just blobs of data.

When Git encounters a non-text file, delta compression gains it almost nothing: it effectively saves the entire file again every time you commit, even if you only changed a single byte in a 15 MB file. Over time, this can make your repository history extremely large. It also makes cloning painfully slow, because a clone downloads every version of every file from the beginning of the project.

This is where Git LFS comes in. Git LFS is an extension that allows Git to track large files using lightweight text pointers instead of saving the entire file in the main repository. The actual file content is stored separately. This keeps your repository smaller and much faster to work with.
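
For reference, the pointer Git commits in place of the real file is just a tiny text file in the Git LFS pointer format: a version line, the SHA-256 of the content, and its size in bytes. The hash and size below are illustrative values, not from a real asset:

```
version https://git-lfs.github.com/spec/v1
oid sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08
size 132738048
```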

Git LFS needs to be installed on both the client side and the server side. These days, most Git hosting providers have LFS support built in.

Using default LFS

To get the most out of LFS, you need to edit the .gitattributes file and add patterns for the files that should be treated as large files, so that LFS intercepts them and handles them differently.

Here’s an example of what a typical configuration looks like:

*.psd filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
Assets/Textures/*.png filter=lfs diff=lfs merge=lfs -text
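
If you would rather not edit .gitattributes by hand, the git lfs track command writes these rules for you. A minimal sketch, reusing the example patterns above:

```shell
# Each 'track' call appends a matching rule to .gitattributes
git lfs track "*.psd"
git lfs track "*.wav"
git lfs track "Assets/Textures/*.png"

# Commit .gitattributes so the rules apply for the whole team
git add .gitattributes
git commit -m "Track large binary assets with Git LFS"
```

Note that tracking only affects files added from this point on; files already committed to history stay where they are.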

Here is what each part of those lines means:

  • *.psd is a glob pattern that matches all files with the .psd extension.
  • filter=lfs tells Git to process these files using Git LFS.
  • diff=lfs and merge=lfs pass diffing and merging tasks to Git LFS.
  • -text tells Git not to attempt to treat the file as text.

If you want Git LFS to work correctly, setting this up in your .gitattributes file is essential.

Default LFS best practices

One practice that pays off in every Git workflow – but particularly when LFS is involved – is working in branches. When you work in a branch, you are not pulling in the merges and history of every other user’s branch.

Git LFS by design only downloads the large files needed for the branch currently checked out (unless the user explicitly fetches more), which means a lot less download time and disk space being used on a client machine.

Conversely though, binary files are impossible to merge. If two branches touch the same binary file, whoever merges to main second effectively overwrites the other’s changes, since Git cannot reconcile two versions of a binary. Worse still, LFS won’t help here: LFS does not make binary files mergeable, it only makes the storage required to keep and download their revisions far smaller.

What’s worse, when binary files change across branches, users may never find out that their changes were overridden. There is no mechanism to alert a user that their merge to main has been replaced by someone else’s later merge to main.

This is particularly dangerous for binary files that change frequently, such as Unreal Blueprints. And this is where exclusive checkout comes in.

It is possible to mark files as exclusive checkout by editing the .gitattributes file, and adding an extra parameter to the LFS definition. E.g.:

*.psd filter=lfs diff=lfs merge=lfs -text lockable

The ‘lockable’ part marks the file pattern as “only one person can be editing and committing this at a time”.

To lock a file – so it is considered owned by the one person currently editing it – use this command:

git lfs lock path/to/file.psd

Now, while the lock is held, anyone else who tries to lock that file will get a warning telling them it is already locked, and by whom.

If they skip locking and simply edit the file and try to push it, Git will refuse, reporting that the file is locked by someone else (or that it was unlocked but has since been checked in, so a more recent version exists).

This protects users against accidentally overwriting each other’s binary files. The downside, of course, is that only one person can edit a given file at a time, which makes collaborative work on the same asset (e.g. an Unreal level file) hard to achieve. The usual answer is to break assets down into a hierarchy of independent files: prefabs in Unity, or the ‘One File Per Actor’ feature in Unreal Engine, which splits a level into small per-actor files.
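
The full locking round trip looks like this (the asset path is a hypothetical Unreal file, and locking requires a host that supports the LFS lock API):

```shell
# Take the lock before editing (fails if someone else already holds it)
git lfs lock "Content/Maps/MainLevel.umap"

# List current locks and who owns them
git lfs locks

# ...edit, commit, and push the file...

# Release the lock once your changes are on the server
git lfs unlock "Content/Maps/MainLevel.umap"
```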

Anchorpoint: A Git client optimized for Git LFS workflows

Anchorpoint uses a metadata system to extend Git for instant file locking, centralized Git configurations and art asset management features such as tagging. It comes with Git LFS optimizations that make your workflow smoother and easier to manage.

Here are some of the key Git LFS features that Anchorpoint brings to the table.

Background downloading (coming soon)

Anchorpoint will soon add a feature that automatically downloads LFS files in the background. When others push changes to the server repository, those changes trickle down to the local client invisibly. When the user then fetches or pulls, the mechanics of the operation still happen on the spot – the new content is merged into the branch on the client machine – but the files needed for the merge are already resident locally. The Git client only has to perform the merge, not the download.

This feature applies to LFS-tracked files. And if LFS is in use at all, large files – and likely some large merges – are involved, so downloading merge-ready files from the repository in the background is all the more valuable and time saving. Note: this only happens for the current branch, not other branches, since fetching every branch could easily saturate a connection in a repository with many branches.

Clearing the LFS cache

When you push or clone a repository that uses Git LFS, Git stores a local cache of the large files. This cache can quickly eat up a lot of disk space if it is not cleared.

Anchorpoint takes care of this automatically. It clears the cache generated when LFS files are part of a clone or a push, so less disk space is used after the operation completes – a good thing for everyone.

Note: you can also do this by hand with the command:

git lfs prune --verify-remote --verify-unreachable --when-unverified=continue

This command safely removes cached files that are no longer needed. Anchorpoint actually contributed this feature to the Git LFS open-source project, so even users on other platforms can benefit from it.
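
Before pruning, you can check how much space the cache is using and preview what would be removed. A short sketch, assuming you run it from the repository root:

```shell
# How much disk the local LFS object cache is using
du -sh .git/lfs

# Preview what prune would delete, without touching anything
git lfs prune --dry-run
```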

Git LFS configuration tweaks for power users

There are also some useful Git settings you can adjust to improve performance, especially when working with large files or unstable internet connections.

  • git config --global http.postBuffer <bytes>
    This setting raises Git’s HTTP buffer size (for example, 524288000 for 500 MB), which can help prevent push errors when uploading large files over HTTP. It applies to Git transfers in general, not just LFS files.
  • git config --global lfs.concurrenttransfers X
    Where X is the number of parallel transfers. This controls how many LFS files a push or pull sends to or receives from the server at once. Current Git LFS releases default to 8 (older releases defaulted to 3); raising it can noticeably speed up pushes and pulls if you have a fast connection.
  • git config --global lfs.batch true/false
    When set to true (the default), Git LFS requests or announces a list of files and batches them into one transfer; when false, each file is negotiated individually. It is worth checking that this is set to true, since legacy installations may have it set to false.

Note: all of these are per-user settings stored on your local machine (that is what --global means). They do NOT affect other users of the same repository.
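
Applied together, the tweaks above look like this. The buffer size and transfer count are illustrative values, not requirements:

```shell
# Raise the HTTP buffer to 500 MB to avoid errors on large pushes
git config --global http.postBuffer 524288000

# Transfer up to 8 LFS objects in parallel
git config --global lfs.concurrenttransfers 8

# Make sure batched transfers are enabled (the default on modern installs)
git config --global lfs.batch true

# Confirm the values took effect (Git prints key names in lowercase)
git config --global --get-regexp '^(http|lfs)\.'
```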

Hosting high volume Git LFS transfers with Assembla

Once you have optimized your Git client, you also need a hosting solution that will not slow you down. Assembla is a Git hosting platform that works especially well for teams dealing with large files.

Here are some of the reasons why Assembla is a great match for Git LFS power users.

Global server hosting locations

When you create your Assembla account, you can choose your data residency region. Options include Central US, Central Europe, and Central Asia (India). This gives your team faster access to files without having to connect to servers on the other side of the world. If you are in Australia, you won’t have to reach across the world to London to get your data!

No concurrent upload limit

Remember git config --global lfs.concurrenttransfers? Assembla places no server-side cap on concurrent uploads, so raising that setting actually pays off: your team is free to push many LFS files in parallel.

No individual file size limits

GitHub, for example, limits individual LFS file uploads to 2 GB on the Free plan and 5 GB on Enterprise Cloud. Assembla has no such restriction, which makes it a better fit for teams working with large assets like game builds, high-resolution textures, or audio files.

Simple storage model

Other hosting providers bill regular Git storage and LFS storage separately. Assembla uses a unified storage quota: all files, whether regular Git files or LFS files, count toward the same plan storage limit. This makes storage management easier.

Reliable performance

Assembla offers stable, consistent performance for Git LFS operations. Unlike some other hosts, such as Azure DevOps, which can suffer from slow transfers or tricky authentication issues, Assembla provides a smoother experience.

The optimized Git LFS workflow: Anchorpoint + Assembla

To get the most out of Git LFS, the best setup is to combine a powerful Git client like Anchorpoint with a flexible and reliable Git host like Assembla.

Here’s a quick recap:

  • Use Anchorpoint as your Git client to take advantage of automatic cache cleanup, background downloads, and a user-friendly interface designed for creative teams.
  • Host your repositories on Assembla, which gives you global server options, no upload or file size limits, and a straightforward storage model.
  • Apply the Git LFS configuration tweaks we covered to get faster and more stable file transfers.

Whether you are developing an Unreal game, creating high-resolution 3D assets, or working on large multimedia projects, this setup can help you avoid the most common headaches and speed up your workflow.