git@vger.kernel.org mailing list mirror (one of many)
* GIT push to sftp (feature request)
@ 2007-08-05  9:05 pavlix
  2007-08-05 13:38 ` Johannes Schindelin
  2007-08-05 21:12 ` Martin Langhoff
  0 siblings, 2 replies; 8+ messages in thread
From: pavlix @ 2007-08-05  9:05 UTC (permalink / raw)
  To: git

[-- Attachment #1: Type: text/plain, Size: 598 bytes --]

Hello

It would be very useful if git supported sftp urls to push to remote 
repositories.

The use cases are obvious... you would only need ssh running on the
other side, which is usually available. One cannot always install git,
or doesn't want to, on every machine where one wants a remote
repository.

Git would also have to be able to create a remote repository (maybe an option 
to push?).

Did I miss something?

Pavel Šimerda

P.S.: I am switching from bazaar-ng, which can push to sftp and other protocols.

-- 

Web: http://www.pavlix.net/
Jabber & E-mail: pavlix@pavlix.net


[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: GIT push to sftp (feature request)
  2007-08-05  9:05 GIT push to sftp (feature request) pavlix
@ 2007-08-05 13:38 ` Johannes Schindelin
  2007-08-05 21:12 ` Martin Langhoff
  1 sibling, 0 replies; 8+ messages in thread
From: Johannes Schindelin @ 2007-08-05 13:38 UTC (permalink / raw)
  To: pavlix; +Cc: git

Hi,

On Sun, 5 Aug 2007, pavlix wrote:

> Git would also have to be able to create a remote repository (maybe an option 
> to push?).
> 
> Did I miss something?

Yes.

First, we do not allow initialising a remote repository yet.

Second, if you do not have git installed on the remote host, you probably 
want to serve via http.  This is a very suboptimal transport, as it cannot 
repack the contents.

And if you use such a suboptimal transport, people will blame _git_ for 
being slow, even if you made it slow deliberately.

Ciao,
Dscho

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: GIT push to sftp (feature request)
  2007-08-05  9:05 GIT push to sftp (feature request) pavlix
  2007-08-05 13:38 ` Johannes Schindelin
@ 2007-08-05 21:12 ` Martin Langhoff
  2007-08-05 22:20   ` Matthieu Moy
  1 sibling, 1 reply; 8+ messages in thread
From: Martin Langhoff @ 2007-08-05 21:12 UTC (permalink / raw)
  To: pavlix; +Cc: git

On 8/5/07, pavlix <pavlix@pavlix.net> wrote:
> Did I miss something?


Unfortunately, git does not "push" over protocols that cannot execute
git on the remote server. We call them "dumb protocols", and if you
search this list for that term, you'll find plenty of discussion.

Git tries to be smart in at least 2 ways that don't work with dumb
protocols: it works locklessly (yet it performs atomic updates) and it
sends only the objects needed over the wire (saving a lot of
bandwidth).

Using dumb protocols it's impossible to do either. And these days it's
not that hard to set up git (or any other binary) to execute at the
remote end.
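
To make the second point concrete, here is a toy sketch in Python (not
git code; the commit names and parent map are made up) of how a smart
protocol narrows down what travels over the wire:

# Toy model: send only objects reachable from the new tips but not
# from anything the server already advertised.
PARENTS = {
    "a": [], "b": ["a"], "c": ["b"],   # history the server already has
    "d": ["c"], "e": ["d"],            # new local commits being pushed
}

def reachable(tips):
    seen, todo = set(), list(tips)
    while todo:
        c = todo.pop()
        if c not in seen:
            seen.add(c)
            todo.extend(PARENTS[c])
    return seen

remote_has = reachable(["c"])           # the server's ref points at "c"
to_send = reachable(["e"]) - remote_has
print(sorted(to_send))                  # ['d', 'e']

A dumb server cannot take part in that negotiation, so the client has
to guess or transfer more than strictly needed.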

Bazaar-NG and others do support dumb protocols, and (I think) they do
it by using one big lock over the repo. But the lock is not safe, and
things can (and do) go wrong with weak locking schemes.

git used to support rsync -- but I don't think that works anymore for
pushes. Other than git over ssh, perhaps you can try the apache module
that implements git over http?

hope that helps,



martin

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: GIT push to sftp (feature request)
  2007-08-05 21:12 ` Martin Langhoff
@ 2007-08-05 22:20   ` Matthieu Moy
  2007-08-06  0:00     ` Martin Langhoff
                       ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Matthieu Moy @ 2007-08-05 22:20 UTC (permalink / raw)
  To: Martin Langhoff; +Cc: pavlix, git

"Martin Langhoff" <martin.langhoff@gmail.com> writes:

> Git tries to be smart in at least 2 ways that don't work with dumb
> protocols: it works locklessly (yet it performs atomic updates) and it
> sends only the objects needed over the wire (saving a lot of
> bandwidth).
>
> Using dumb protocols it's impossible to do either.

That's not exactly true. You can't be as efficient with dumb protocols
as you are with a dedicated protocol (something with some
intelligence on both sides), but at least the second point you mention
can be achieved with a dumb protocol, and bzr is a proof of existence.
To read over HTTP, it uses range requests, and to push over
ftp/sftp/webdav, it appends new data to existing files (its ancestor,
GNU Arch, also had a way to be network-efficient on dumb protocols).
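
For the HTTP side, a partial read is just an ordinary Range header; a
minimal Python sketch (the URL and byte range are placeholders):

import urllib.request

req = urllib.request.Request(
    "http://example.com/repo/some-pack-file",   # placeholder URL
    headers={"Range": "bytes=0-4095"},          # ask for the first 4 KiB
)
with urllib.request.urlopen(req) as resp:
    # A range-capable server answers "206 Partial Content".
    print(resp.status, len(resp.read()))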

Regarding atomic and lock-less updates, I believe this is
implementable too as soon as you have an atomic "rename" in the
protocol. But here, bzr isn't a proof of existence: it does locking.
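
The rename trick, using the local filesystem as a stand-in for whatever
the remote protocol offers, would look roughly like this (a sketch, not
how git or bzr actually store refs):

import os, tempfile

def publish_ref(ref_path, new_sha1):
    # Write the new value next to the ref, then rename over it: readers
    # see either the old or the new content, never a half-written file.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(ref_path) or ".")
    with os.fdopen(fd, "w") as f:
        f.write(new_sha1 + "\n")
    os.replace(tmp, ref_path)   # atomic rename, overwriting the old ref

publish_ref("refs_heads_master", "d" * 40)   # illustrative file name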

(BTW, about bzr, it also has a dedicated server now)

-- 
Matthieu

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: GIT push to sftp (feature request)
  2007-08-05 22:20   ` Matthieu Moy
@ 2007-08-06  0:00     ` Martin Langhoff
  2007-08-06  8:59       ` Matthieu Moy
  2007-08-06  0:14     ` Jakub Narebski
  2007-08-07 21:50     ` Jan Hudec
  2 siblings, 1 reply; 8+ messages in thread
From: Martin Langhoff @ 2007-08-06  0:00 UTC (permalink / raw)
  To: Martin Langhoff, pavlix, git

On 8/6/07, Matthieu Moy <Matthieu.Moy@imag.fr> wrote:
> "Martin Langhoff" <martin.langhoff@gmail.com> writes:
>
> > Git tries to be smart in at least 2 ways that don't work with dumb
> > protocols: it works locklessly (yet it performs atomic updates) and it
> > sends only the objects needed over the wire (saving a lot of
> > bandwidth).
> >
> > Using dumb protocols it's impossible to do either.
>
> That's not exactly true. You can't be as efficient with dumb protocols

You are right -- I should have said: it's pretty hard, and we haven't
put in the effort ;-)

There's been discussion recently of adding info to the pack index that
would support http range requests.

> (its ancestor,
> GNU Arch, also had a way to be network-efficient on dumb protocols).

Do I remember your name from gnuarch-users? -- that Arch/tla was never
particularly efficient, and fetches of large updates were slow and
painful. Surely it was efficient on paper though :-p

> Regarding atomic and lock-less updates, I believe this is
> implementable too as soon as you have an atomic "rename" in the
> protocol. But here, bzr isn't a proof of existence: it does locking.

And I should have said: minimal locking rather than no locking.

To update a ref safely, you need to open with a lock, read to ensure
the sha1 is what you think it is, write the new sha1, and close. A
rename alone is still subject to race conditions.
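
In Python terms, that safe sequence might look like the sketch below
(file names are illustrative, not git's actual on-disk layout, and the
remote protocol would need equivalent primitives):

import os

def update_ref(ref_path, expected_old, new_sha1):
    lock = ref_path + ".lock"
    # O_EXCL: creating the lock fails if someone else already holds it.
    fd = os.open(lock, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    try:
        with open(ref_path) as f:
            if f.read().strip() != expected_old:
                raise RuntimeError("ref moved under us; retry the push")
        os.write(fd, (new_sha1 + "\n").encode())
    except BaseException:
        os.close(fd)
        os.unlink(lock)          # back out, leaving the old ref alone
        raise
    os.close(fd)
    os.replace(lock, ref_path)   # commit the new value and drop the lock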

IMVHO it would be good to have a way to push over sftp even if it is
slow, unsafe and full of big blinking warnings. git itself is sane
enough that the client side won't get corrupted or lose data if there
is a race condition on the server side.

Given a brief delay, the client can probably check - post push - that
the operation wasn't clobbered by a race condition. Of course, this
*is* a sticks-and-bubblegum approach on the server side. But a solid
client repo makes almost any server-side disaster recoverable.

cheers,



martin

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: GIT push to sftp (feature request)
  2007-08-05 22:20   ` Matthieu Moy
  2007-08-06  0:00     ` Martin Langhoff
@ 2007-08-06  0:14     ` Jakub Narebski
  2007-08-07 21:50     ` Jan Hudec
  2 siblings, 0 replies; 8+ messages in thread
From: Jakub Narebski @ 2007-08-06  0:14 UTC (permalink / raw)
  To: git

Matthieu Moy wrote:

> "Martin Langhoff" <martin.langhoff@gmail.com> writes:
> 
>> Git tries to be smart in at least 2 ways that don't work with dumb
>> protocols: it works locklessly (yet it performs atomic updates) and it
>> sends only the objects needed over the wire (saving a lot of
>> bandwidth).
>>
>> Using dumb protocols it's impossible to do either.

But git _can_ push over the http protocol (with WebDAV), which is a
dumb protocol, and over rsync (although that is deprecated).
 
> That's not exactly true. You can't be as efficient with dumb protocols
> as you are with a dedicated protocol (something with some
> intelligence on both sides), but at least the second point you mention
> can be achieved with a dumb protocol, and bzr is a proof of existence.
> To read over HTTP, it uses range requests, and to push over
> ftp/sftp/webdav, it appends new data to existing files (its ancestor,
> GNU Arch, also had a way to be network-efficient on dumb protocols).

If I understand correctly, to read (fetch) over http and other dumb
protocols (like ftp), git uses two indices, .git/info/refs
and .git/objects/info/packs, which must be present on the server
(see git-update-server-info), to calculate which packs to get. I think
it always downloads whole packs, but I'm not sure...
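
In Python terms, a dumb-protocol reader would bootstrap itself from
those two files roughly like this (the URL is a placeholder and error
handling is omitted):

import urllib.request

BASE = "http://example.com/repo.git"    # placeholder repository URL

def fetch(path):
    with urllib.request.urlopen(BASE + "/" + path) as resp:
        return resp.read().decode()

# info/refs: "<sha1>\t<refname>" per line, kept fresh by
# git-update-server-info (e.g. run from a post-update hook).
refs = {}
for line in fetch("info/refs").splitlines():
    sha1, name = line.split("\t")
    refs[name] = sha1

# objects/info/packs: "P pack-<sha1>.pack" per line, one entry per pack.
packs = [line[2:] for line in fetch("objects/info/packs").splitlines()
         if line.startswith("P ")]

print(refs, packs)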

-- 
Jakub Narebski
Warsaw, Poland
ShadeHawk on #git

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: GIT push to sftp (feature request)
  2007-08-06  0:00     ` Martin Langhoff
@ 2007-08-06  8:59       ` Matthieu Moy
  0 siblings, 0 replies; 8+ messages in thread
From: Matthieu Moy @ 2007-08-06  8:59 UTC (permalink / raw)
  To: Martin Langhoff; +Cc: pavlix, git

"Martin Langhoff" <martin.langhoff@gmail.com> writes:

> On 8/6/07, Matthieu Moy <Matthieu.Moy@imag.fr> wrote:
>> > Using dumb protocols it's impossible to do either.
>>
>> That's not exactly true. You can't be as efficient with dumb protocols
>
> You are right -- I should have said: it's pretty hard, and we haven't
> put in the effort ;-)

Yes, that seems more accurate.

>> (its ancestor,
>> GNU Arch, also had a way to be network-efficient on dumb protocols).
>
> Do I remember your name from gnuarch-users?

Possibly so, yes. I also remember yours from the good old times when
people started explaining why they were unsubscribing from the list
and migrating to something better ;-).

> -- that Arch/tla was never particularly efficient, and fetches of
> large updates were slow and painful. Surely it was efficient on
> paper though :-p

It was actually efficient in terms of bandwidth: you downloaded only
the needed pieces (this has to do with the fact that the original
author wrote it at a time when he had only a slow modem connection).
But it was badly pipelined, and local operations were slow, so the
result was obviously _very_ far from what git can do.

-- 
Matthieu

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: GIT push to sftp (feature request)
  2007-08-05 22:20   ` Matthieu Moy
  2007-08-06  0:00     ` Martin Langhoff
  2007-08-06  0:14     ` Jakub Narebski
@ 2007-08-07 21:50     ` Jan Hudec
  2 siblings, 0 replies; 8+ messages in thread
From: Jan Hudec @ 2007-08-07 21:50 UTC (permalink / raw)
  To: Martin Langhoff, pavlix, git

[-- Attachment #1: Type: text/plain, Size: 1941 bytes --]

On Mon, Aug 06, 2007 at 00:20:55 +0200, Matthieu Moy wrote:
> "Martin Langhoff" <martin.langhoff@gmail.com> writes:
> 
> > Git tries to be smart in at least 2 ways that don't work with dumb
> > protocols: it works locklessly (yet it performs atomic updates) and it
> > sends only the objects needed over the wire (saving a lot of
> > bandwidth).
> >
> > Using dumb protocols it's impossible to do either.
> 
> That's not exactly true. You can't be as efficient with dumb protocols
> as you are with a dedicated protocol (something with some
> intelligence on both sides), but at least the second point you mention
> can be achieved with a dumb protocol, and bzr is a proof of existence.
> To read over HTTP, it uses range requests, and to push over
> ftp/sftp/webdav, it appends new data to existing files (its ancestor,
> GNU Arch, also had a way to be network-efficient on dumb protocols).

I believe bzr locks are not completely safe, in the sense that breaking
a lock does not cause the operation to abort immediately. GNU Arch's
did, but its specific data layout was part of the reason why that
worked (it wrote the data into a directory, so removing that directory
would abort the operation).

> Regarding atomic and lock-less updates, I believe this is
> implementable too as soon as you have an atomic "rename" in the
> protocol. But here, bzr isn't a proof of existence: it does locking.

Actually, a rename or link is necessary for atomic updates, lockless or
lockful.

A slight problem with it is that unix (and similar) systems allow
overwriting another file on rename (and do so atomically, in the sense
that the destination always exists), while windows fails if the target
exists. Most network protocols don't specify overwriting and simply do
whatever the underlying system does. GNU Arch solved this by renaming
directories, which are not overwritten under any system.
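
Python's standard library happens to expose exactly that difference,
which makes a handy illustration (behaviour as documented for
os.rename and os.replace; file names are illustrative):

import os

def write(path, text):
    with open(path, "w") as f:
        f.write(text)

write("ref", "old sha1\n")
write("ref.new", "new sha1\n")
os.replace("ref.new", "ref")     # overwriting rename: atomic and portable

write("ref.new", "newer sha1\n")
try:
    os.rename("ref.new", "ref")  # fine on unix, FileExistsError on windows
except FileExistsError:
    print("plain rename refused because the target exists")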

-- 
						 Jan 'Bulb' Hudec <bulb@ucw.cz>

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2007-08-07 21:52 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-08-05  9:05 GIT push to sftp (feature request) pavlix
2007-08-05 13:38 ` Johannes Schindelin
2007-08-05 21:12 ` Martin Langhoff
2007-08-05 22:20   ` Matthieu Moy
2007-08-06  0:00     ` Martin Langhoff
2007-08-06  8:59       ` Matthieu Moy
2007-08-06  0:14     ` Jakub Narebski
2007-08-07 21:50     ` Jan Hudec

Code repositories for project(s) associated with this public inbox

	https://80x24.org/mirrors/git.git

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).