git@vger.kernel.org mailing list mirror (one of many)
From: Brandon Williams <bmwill@google.com>
To: Jeff Hostetler <git@jeffhostetler.com>
Cc: Stefan Beller <sbeller@google.com>,
	Derrick Stolee <stolee@gmail.com>, git <git@vger.kernel.org>
Subject: Re: [RFC] push: add documentation on push v2
Date: Tue, 24 Jul 2018 12:00:03 -0700	[thread overview]
Message-ID: <20180724190003.GB225275@google.com> (raw)
In-Reply-To: <1dd6d9aa-0e96-bb8e-f7ae-873f619a2450@jeffhostetler.com>

On 07/20, Jeff Hostetler wrote:
> 
> 
> On 7/18/2018 1:15 PM, Brandon Williams wrote:
> > On 07/18, Stefan Beller wrote:
> > > On Wed, Jul 18, 2018 at 6:31 AM Derrick Stolee <stolee@gmail.com> wrote:
> > > > 
> > > > On 7/17/2018 7:25 PM, Stefan Beller wrote:
> > > > > On Tue, Jul 17, 2018 at 2:09 PM Brandon Williams <bmwill@google.com> wrote:
> > > > > > Signed-off-by: Brandon Williams <bmwill@google.com>
> > > > > > ---
> > > > > > 
> > > > > > Since introducing protocol v2 and enabling fetch I've been thinking
> > > > > > about what its inverse 'push' would look like.  After talking with a
> > > > > > number of people I have a longish list of things that could be done to
> > > > > > improve push and I think I've been able to distill the core features we
> > > > > > want in push v2.
> > > > > It would be nice to know which things you want to improve.
> > > > 
> > > > Hopefully we can also get others to chime in with things they don't like
> > > > about the existing protocol. What pain points exist, and what can we do
> > > > to improve at the transport layer before considering new functionality?
> > > 
> > > Another thing I realized last night is the possibility of chunking requests.
> > > The web of today is driven by lots of small http(s) requests. I know our server
> > > team fights with the internal tools all the time because the communication
> > > involved in git-fetch is usually a large http request (large packfile).
> > > So it would be nice to have the possibility of chunking the request.
> > > But I think that can be added as a capability? (Not sure how)
> > 
> > Fetch and push requests/responses are already "chunked" when using the
> > http transport.  So I'm not sure what you mean by adding a capability
> > because the protocol doesn't care about which transport you're using.
> > This is of course unless you're talking about a different "chunking"
> > from what it means to chunk an http request/response.
> > 
> 
> Internally, we've talked about wanting to have resumable pushes and
> fetches.  I realize this is difficult to do when the server is
> replicated and the repeated request might be talking to a different
> server instance.  And there's a problem with temp files littering the
> server as it waits for the repeated attempt.  But still, the packfile
> sent/received can be large and connections do get dropped.
> 
> That is, those concerns apply if we think about sending one large packfile
> and just using a byte-range-like approach to resuming the transfer.
> 
> Suppose instead we allowed the request to send a series of packfiles,
> with each "chunk" being self-contained and usable.  Then if a push
> connection was dropped, the server could apply the successfully received
> packfile(s) (add the received objects and update the refs to the commits
> received so far), ignore the interrupted and unreceived packfile(s), and
> let the client retry later.  When/if the client retried the push, it
> would renegotiate haves/wants and send a new series of packfile(s), the
> assumption being that the server would have updated refs from the
> earlier aborted push, so the packfile(s) computed for the second attempt
> would not repeat the content successfully transmitted in the first
> attempt.
> 
> This would require that the client build an ordered set of packfiles
> from oldest to newest so that the server can apply them in order and
> the graph remains connected.  That may be outside your scope here.
> 
> Also, we might have to add a few messages to the protocol after the
> negotiation, for the client to say that it is going to send the push
> content in 'n' packfiles, followed by 'n' messages giving the
> intermediate ref values updated by each packfile.
> 
> Just thinking out loud here.
> Jeff

We've talked about working on resumable fetch/push (both of which are
out of scope for this work), but we haven't started on anything just
yet.

There are a couple of different ways to do this, as you've pointed out.
One is to have the server redirect the client to fetch from a CDN (where
it has put the packfile); the client can then use ranged requests to
fetch until the server decides to remove it from the CDN.  This can be
tricky because every fetch can produce a unique packfile, so maybe you
don't want to put a freshly constructed, unique packfile up on a CDN
somewhere for each client request.
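
Just to illustrate the client side of that, a rough sketch (the pack URL
and the on-disk path here are made up, and this glosses over validating
the resulting pack) could look something like:

    # Resume a partially downloaded packfile from a CDN using an HTTP
    # Range request, picking up from however many bytes are on disk.
    import os
    import requests

    def resume_pack_download(pack_url, local_path):
        offset = os.path.getsize(local_path) if os.path.exists(local_path) else 0
        resp = requests.get(pack_url,
                            headers={"Range": "bytes=%d-" % offset},
                            stream=True)
        resp.raise_for_status()  # expect 206 Partial Content (or 200 at offset 0)
        with open(local_path, "ab") as f:
            for chunk in resp.iter_content(chunk_size=64 * 1024):
                f.write(chunk)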

The other is to break a response up into multiple packfiles and small
ref updates.  That way, as long as some of the smaller packs/updates are
applied, the client is making headway towards being up to date with the
server.
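
Roughly, the sender side of that might look like the sketch below.  None
of these helpers exist today; they are just stand-ins to show the shape
of the idea (cut the commits into self-contained batches, oldest first,
and follow each packfile with an intermediate ref update):

    # Hypothetical sketch: push in self-contained chunks so the server
    # can keep whatever arrived before a dropped connection.
    def push_in_chunks(commits_oldest_first, batch_size, send_pack, update_ref):
        batch = []
        for commit in commits_oldest_first:
            batch.append(commit)
            if len(batch) == batch_size:
                send_pack(batch)        # self-contained packfile for this batch
                update_ref(batch[-1])   # ref now points at the last commit sent
                batch = []
        if batch:                       # send whatever is left over
            send_pack(batch)
            update_ref(batch[-1])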

-- 
Brandon Williams


Thread overview: 19+ messages
2018-07-17 21:09 [RFC] push: add documentation on push v2 Brandon Williams
2018-07-17 23:25 ` Stefan Beller
2018-07-18 13:31   ` Derrick Stolee
2018-07-18 16:56     ` Stefan Beller
2018-07-18 17:15       ` Brandon Williams
2018-07-20 13:12         ` Jeff Hostetler
2018-07-24 19:00           ` Brandon Williams [this message]
2018-07-18 17:11     ` Brandon Williams
2018-07-18 17:19       ` Duy Nguyen
2018-07-18 17:46         ` Brandon Williams
2018-07-18 17:57           ` Duy Nguyen
2018-07-18 17:08   ` Brandon Williams
2018-07-18 18:07     ` Stefan Beller
2018-07-18 18:17       ` Duy Nguyen
2018-07-18 18:21       ` Brandon Williams
2018-07-24 19:28 ` Brandon Williams
2018-07-25 15:15   ` Duy Nguyen
2018-07-25 17:46     ` Brandon Williams
2018-08-02 15:17       ` Duy Nguyen
