git@vger.kernel.org mailing list mirror (one of many)
* commit sized around 100 gb in changes failed to push to a TFS remote - Git
From: Aram Maliachi (WIPRO LIMITED) @ 2019-06-14 16:47 UTC
  To: git@vger.kernel.org; +Cc: Kranz, Peter, Brettschneider, Marco

To @Git Community
Writing from the perspective of an Azure DevOps support engineer: I have a customer who is unable to push, failing with the following error:

fatal: The remote end hung up unexpectedly
failed to push some refs into https://zelos.healthcare.siemens.com/tfs/Hoover/VA20A.DevInt.Gvfs/_git/Saturn

The local repository differs from the remote by a single commit, 504aedfdbb, on a branch called gitTest.
The layout is as follows:

[Remote] - master
b946c27c

[Local] - gitTest branch
504aedfdbb
b946c27c


Important data:
- The commit 504aedfdbb contains +100 GB in file changes 
- The remote git repository is a TFS server
- The customer isn't building code - they are using the remote essentially as a storage service <- We understand this is not a best practice, but it is how the customer is using Git and TFS. If the @Git Community could confirm or elaborate on this, the customer may change their current approach.

Things tried:
- Reset the local repository's history back to the latest shared commit b946c27c, committed something small, and successfully pushed it to a brand-new remote branch by running $ git push origin <name of local branch>
- Cherry-picked the commit onto local master and attempted to push = failed. <- This makes me think the failure is caused entirely by the oversized commit.
- Increased the http.postBuffer configuration = failed. Rolled the setting back to its default per the MSFT docs https://docs.microsoft.com/en-us/azure/devops/repos/git/rpc-failures-http-postbuffer?view=azure-devops (this and the first item are sketched as commands just after this list)
- Since this is a TFS server, I initially thought this could be caused by insufficient disk storage on the server hosting the TFS product. But @Vimal Thiagaraj has confirmed that repository size limits depend on the remote TFS databases, not on the server itself. Is there a limit on these databases, or on how large a change a single git commit can contain?
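
For reference, a minimal sketch of the commands behind the first and third items above (the test branch name is made up, an empty commit stands in for the small change, and the buffer value is just an example):

  # reset the local branch back to the last commit the remote already has
  $ git reset --hard b946c27c
  # make a small test commit and push it to a brand-new remote branch
  $ git commit --allow-empty -m "small test commit"
  $ git push origin HEAD:refs/heads/push-test
  # temporarily raise the HTTP post buffer (~500 MB here), then roll it back
  $ git config http.postBuffer 524288000
  $ git config --unset http.postBuffer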

Things I've suggested to the customer:
- commit more frequently in smaller batches
- understand that git is designed for collaborating on and tracking versions of files over time - it is not a cloud storage provider

Would appreciate any insight on this, @Git Community. Thanks to @Philip Oakley, who took the time to answer the last time I posted a question to this mailing list.


* RE: commit sized around 100 gb in changes failed to push to a TFS remote - Git
From: Aram Maliachi (WIPRO LIMITED) @ 2019-06-14 18:40 UTC
  To: Aram Maliachi (WIPRO LIMITED), git@vger.kernel.org
  Cc: Kranz, Peter, Brettschneider, Marco

We have a hard limit in the service of 5GB for a single push.

The advice we've given other customers is to do partial pushes by checking out an older commit, pushing that, and then checking out a newer commit, pushing, etc.  You have to push multiple times, but you can build up the entire history that way.

This is due to a limit set by the TFS product.
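
A hedged sketch of that partial-push approach (the commit references and branch name are purely illustrative - and note that in this case the single 100 GB commit would first have to be split into several smaller commits for chunking to help):

  # push an older commit (and everything it builds on) to the target branch first...
  $ git push origin <older-commit-sha>:refs/heads/gitTest
  # ...then a newer one, and so on, each push staying under the 5 GB limit
  $ git push origin <newer-commit-sha>:refs/heads/gitTest
  # finally push the branch tip as usual
  $ git push origin gitTest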



* Re: commit sized around 100 gb in changes failed to push to a TFS remote - Git
From: Philip Oakley @ 2019-06-16 22:49 UTC
  To: Aram Maliachi (WIPRO LIMITED), git@vger.kernel.org
  Cc: Kranz, Peter, Brettschneider, Marco

Hi Aram
On 14/06/2019 17:47, Aram Maliachi (WIPRO LIMITED) wrote:
> To @Git Community
> Writing from the perspective of an Azure DevOps support engineer: I have a customer who is unable to push, failing with the following error:
>
> fatal: The remote end hung up unexpectedly
> failed to push some refs into https://zelos.healthcare.siemens.com/tfs/Hoover/VA20A.DevInt.Gvfs/_git/Saturn
>
> The local repository differs from the remote by a single commit, 504aedfdbb, on a branch called gitTest.
> The layout is as follows:
>
> [Remote] - master
> b946c27c
>
> [Local] - gitTest branch
> 504aedfdbb
> b946c27c
>
>
> Important data:
> - The commit 504aedfdbb contains +100 GB in file changes
> - The remote git repository is a TFS server
> - The customer isn't building code - they are using the remote essentially as a storage service <- We understand this is not a best practice, but it is how the customer is using Git and TFS. If the @Git Community could confirm or elaborate on this, the customer may change their current approach.
>
> Things tried:
> - Reset the local repository's history back to the latest shared commit b946c27c, committed something small, and successfully pushed it to a brand-new remote branch by running $ git push origin <name of local branch>
> - Cherry-picked the commit onto local master and attempted to push = failed. <- This makes me think the failure is caused entirely by the oversized commit.
> - Increased the http.postBuffer configuration = failed. Rolled the setting back to its default per the MSFT docs https://docs.microsoft.com/en-us/azure/devops/repos/git/rpc-failures-http-postbuffer?view=azure-devops
> - Since this is a TFS server, I initially thought this could be caused by insufficient disk storage on the server hosting the TFS product. But @Vimal Thiagaraj has confirmed that repository size limits depend on the remote TFS databases, not on the server itself. Is there a limit on these databases, or on how large a change a single git commit can contain?
>
> Things I've suggested to the customer:
> - commit more frequently in smaller batches
> - understand that git is designed for collaborating on and tracking versions of files over time - it is not a cloud storage provider
>
> Would appreciate any insight on this, @Git Community. Thanks to @Philip Oakley, who took the time to answer the last time I posted a question to this mailing list.
Can you confirm the operating system versions and Git versions for the
machine doing the push and for the server attempting to receive it? In
particular, is either of them a Windows system, which currently has a
32-bit size limit (which is sizeof(long))?
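
As a possible shortcut for gathering that (available on reasonably
recent Git releases; the exact fields printed vary by version):

  # prints the Git version plus build details, including a sizeof-long field
  $ git version --build-options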

I have a patch series in review on the Git for Windows GitHub repo
(#2179) that should allow objects and packs greater than 4 GB. However...

There may still be a limit in the 'transfer' process (100 GB could take
a long time, hit timeouts, or break internal virus checkers that
monitor the feed, etc.).
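
If it is the client side that gives up, one pair of client-side knobs
that exists is http.lowSpeedLimit / http.lowSpeedTime (the values below
are arbitrary examples; server-side proxies or scanners are outside
Git's control):

  # abort only if the transfer drops below 1000 bytes/s for 10 minutes
  $ git config http.lowSpeedLimit 1000
  $ git config http.lowSpeedTime 600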

Philip

