git@vger.kernel.org mailing list mirror (one of many)
* [PATCH 00/11] various lockfile-leaks and -fixes
@ 2017-10-01 14:56 Martin Ågren
  2017-10-01 14:56 ` [PATCH 01/11] sha1_file: do not leak `lock_file` Martin Ågren
                   ` (12 more replies)
  0 siblings, 13 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-01 14:56 UTC (permalink / raw)
  To: git
  Cc: Michael Haggerty, Jeff King, Paul Tan, Christian Couder,
	Nguyễn Thái Ngọc Duy

A recent series allowed `struct lock_file`s to be freed [1], so I wanted
to get rid of the "simple" leaks of this kind. I found a couple of
lock-related cleanups along the way and it resulted in this series. It
feels a bit scary at eleven patches -- especially as this is about
locking -- but I hope I've managed to put this into understandable and
reviewable patches. Reviews, thoughts, opinions welcome, as always.

1-2

Instead of allocating and leaking `struct lock_file`s, initialize them
on the stack. These instances were identified by simple grepping.
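
The conversion follows the same pattern everywhere (a minimal sketch;
`path` stands in for whatever file the caller locks):

  struct lock_file lock = LOCK_INIT;  /* was: xcalloc(1, sizeof(*lock)), leaked */

  hold_lock_file_for_update(&lock, path, LOCK_DIE_ON_ERROR);
  /* ... write the new contents through the lockfile ... */
  if (commit_lock_file(&lock))
          die_errno("unable to commit '%s'", path);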

3-4

Documentation fixes for lockfile and temporary file APIs.

5-7

No need to represent the same information twice. No need for the caller
to provide a (static) lock for us now that we can safely manage our own.

8-11

Error-handling in read-cache.c was a bit lacking and over-eager at the
same time. Sometimes we'd leave the lock open when we should commit it,
sometimes we'd roll it back when we should only close it. We'd also
crash under rare circumstances. Patch 9 addresses a bug in the API which
most likely hasn't hit anyone and maybe never would, but if we don't
have the functionality, I don't think we should pretend that we do.

Martin

[1] https://public-inbox.org/git/20170905121353.62zg3mtextmq5zrs@sigill.intra.peff.net/

Martin Ågren (11):
  sha1_file: do not leak `lock_file`
  treewide: prefer lockfiles on the stack
  lockfile: fix documentation on `close_lock_file_gently()`
  tempfile: fix documentation on `delete_tempfile()`
  cache-tree: simplify locking logic
  apply: move lockfile into `apply_state`
  apply: remove `newfd` from `struct apply_state`
  cache.h: document `write_locked_index()`
  read-cache: require flags for `write_locked_index()`
  read-cache: don't leave dangling pointer in `do_write_index()`
  read-cache: roll back lock on error with `COMMIT_LOCK`

 apply.c            | 25 ++++++++-----------------
 apply.h            |  8 +++-----
 builtin/am.c       | 27 ++++++++++++---------------
 builtin/apply.c    |  4 +---
 builtin/checkout.c | 14 ++++++--------
 builtin/clone.c    |  7 +++----
 builtin/diff.c     |  7 +++----
 builtin/difftool.c |  1 -
 cache-tree.c       | 12 ++++--------
 cache.h            | 19 +++++++++++++++++++
 config.c           | 17 ++++++++---------
 git-compat-util.h  |  7 ++++++-
 lockfile.h         |  4 ++--
 merge-recursive.c  |  6 +++---
 merge.c            |  8 +++-----
 read-cache.c       | 26 ++++++++++++++------------
 sequencer.c        |  1 -
 sha1_file.c        | 16 +++++++---------
 tempfile.h         |  8 ++++----
 wt-status.c        |  8 ++++----
 20 files changed, 110 insertions(+), 115 deletions(-)

-- 
2.14.1.727.g9ddaf86


^ permalink raw reply	[flat|nested] 69+ messages in thread

* [PATCH 01/11] sha1_file: do not leak `lock_file`
  2017-10-01 14:56 [PATCH 00/11] various lockfile-leaks and -fixes Martin Ågren
@ 2017-10-01 14:56 ` Martin Ågren
  2017-10-02  5:26   ` Jeff King
  2017-10-01 14:56 ` [PATCH 02/11] treewide: prefer lockfiles on the stack Martin Ågren
                   ` (11 subsequent siblings)
  12 siblings, 1 reply; 69+ messages in thread
From: Martin Ågren @ 2017-10-01 14:56 UTC (permalink / raw)
  To: git; +Cc: Jeff King

There is no longer any need to allocate and leak a `struct lock_file`.
Initialize it on the stack instead.

Instead of setting `lock = NULL` to signal that we have already rolled
back, and that we should not do any more work, check with
`is_lock_file_locked()`. Since we take the lock with
`LOCK_DIE_ON_ERROR`, it evaluates to false exactly when we have called
`rollback_lock_file()`.

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
---
 sha1_file.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/sha1_file.c b/sha1_file.c
index 5a2014811..c50d6237e 100644
--- a/sha1_file.c
+++ b/sha1_file.c
@@ -456,12 +456,12 @@ struct alternate_object_database *alloc_alt_odb(const char *dir)
 
 void add_to_alternates_file(const char *reference)
 {
-	struct lock_file *lock = xcalloc(1, sizeof(struct lock_file));
+	struct lock_file lock = LOCK_INIT;
 	char *alts = git_pathdup("objects/info/alternates");
 	FILE *in, *out;
 
-	hold_lock_file_for_update(lock, alts, LOCK_DIE_ON_ERROR);
-	out = fdopen_lock_file(lock, "w");
+	hold_lock_file_for_update(&lock, alts, LOCK_DIE_ON_ERROR);
+	out = fdopen_lock_file(&lock, "w");
 	if (!out)
 		die_errno("unable to fdopen alternates lockfile");
 
@@ -481,17 +481,15 @@ void add_to_alternates_file(const char *reference)
 		strbuf_release(&line);
 		fclose(in);
 
-		if (found) {
-			rollback_lock_file(lock);
-			lock = NULL;
-		}
+		if (found)
+			rollback_lock_file(&lock);
 	}
 	else if (errno != ENOENT)
 		die_errno("unable to read alternates file");
 
-	if (lock) {
+	if (is_lock_file_locked(&lock)) {
 		fprintf_or_die(out, "%s\n", reference);
-		if (commit_lock_file(lock))
+		if (commit_lock_file(&lock))
 			die_errno("unable to move new alternates file into place");
 		if (alt_odb_tail)
 			link_alt_odb_entries(reference, '\n', NULL, 0);
-- 
2.14.1.727.g9ddaf86


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH 02/11] treewide: prefer lockfiles on the stack
  2017-10-01 14:56 [PATCH 00/11] various lockfile-leaks and -fixes Martin Ågren
  2017-10-01 14:56 ` [PATCH 01/11] sha1_file: do not leak `lock_file` Martin Ågren
@ 2017-10-01 14:56 ` Martin Ågren
  2017-10-02  3:37   ` Junio C Hamano
  2017-10-02  5:34   ` Jeff King
  2017-10-01 14:56 ` [PATCH 03/11] lockfile: fix documentation on `close_lock_file_gently()` Martin Ågren
                   ` (10 subsequent siblings)
  12 siblings, 2 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-01 14:56 UTC (permalink / raw)
  To: git

There is no longer any need to allocate and leak a `struct lock_file`.
The previous patch addressed an instance where we needed a minor tweak
alongside the trivial changes.

Deal with the remaining instances where we allocate and leak a struct
within a single function. Change them to have the `struct lock_file` on
the stack instead.

These instances were identified by running `git grep "^\s*struct
lock_file\s*\*"`.

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
---
The changes to config.c conflict on pu (52d59cc64 (branch: add a --copy
(-c) option to go with --move (-m), 2017-06-18)), both textually and
because that commit adds some more "lock"s that need to become "&lock".

 builtin/am.c       | 24 +++++++++++-------------
 builtin/checkout.c | 14 ++++++--------
 builtin/clone.c    |  7 +++----
 builtin/diff.c     |  7 +++----
 config.c           | 17 ++++++++---------
 merge-recursive.c  |  6 +++---
 merge.c            |  8 ++++----
 wt-status.c        |  8 ++++----
 8 files changed, 42 insertions(+), 49 deletions(-)

diff --git a/builtin/am.c b/builtin/am.c
index d7513f537..4e16fd428 100644
--- a/builtin/am.c
+++ b/builtin/am.c
@@ -1134,11 +1134,11 @@ static const char *msgnum(const struct am_state *state)
  */
 static void refresh_and_write_cache(void)
 {
-	struct lock_file *lock_file = xcalloc(1, sizeof(struct lock_file));
+	struct lock_file lock_file = LOCK_INIT;
 
-	hold_locked_index(lock_file, LOCK_DIE_ON_ERROR);
+	hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
 	refresh_cache(REFRESH_QUIET);
-	if (write_locked_index(&the_index, lock_file, COMMIT_LOCK))
+	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
 		die(_("unable to write index file"));
 }
 
@@ -1946,15 +1946,14 @@ static void am_resolve(struct am_state *state)
  */
 static int fast_forward_to(struct tree *head, struct tree *remote, int reset)
 {
-	struct lock_file *lock_file;
+	struct lock_file lock_file = LOCK_INIT;
 	struct unpack_trees_options opts;
 	struct tree_desc t[2];
 
 	if (parse_tree(head) || parse_tree(remote))
 		return -1;
 
-	lock_file = xcalloc(1, sizeof(struct lock_file));
-	hold_locked_index(lock_file, LOCK_DIE_ON_ERROR);
+	hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
 
 	refresh_cache(REFRESH_QUIET);
 
@@ -1970,11 +1969,11 @@ static int fast_forward_to(struct tree *head, struct tree *remote, int reset)
 	init_tree_desc(&t[1], remote->buffer, remote->size);
 
 	if (unpack_trees(2, t, &opts)) {
-		rollback_lock_file(lock_file);
+		rollback_lock_file(&lock_file);
 		return -1;
 	}
 
-	if (write_locked_index(&the_index, lock_file, COMMIT_LOCK))
+	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
 		die(_("unable to write new index file"));
 
 	return 0;
@@ -1986,15 +1985,14 @@ static int fast_forward_to(struct tree *head, struct tree *remote, int reset)
  */
 static int merge_tree(struct tree *tree)
 {
-	struct lock_file *lock_file;
+	struct lock_file lock_file = LOCK_INIT;
 	struct unpack_trees_options opts;
 	struct tree_desc t[1];
 
 	if (parse_tree(tree))
 		return -1;
 
-	lock_file = xcalloc(1, sizeof(struct lock_file));
-	hold_locked_index(lock_file, LOCK_DIE_ON_ERROR);
+	hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
 
 	memset(&opts, 0, sizeof(opts));
 	opts.head_idx = 1;
@@ -2005,11 +2003,11 @@ static int merge_tree(struct tree *tree)
 	init_tree_desc(&t[0], tree->buffer, tree->size);
 
 	if (unpack_trees(1, t, &opts)) {
-		rollback_lock_file(lock_file);
+		rollback_lock_file(&lock_file);
 		return -1;
 	}
 
-	if (write_locked_index(&the_index, lock_file, COMMIT_LOCK))
+	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
 		die(_("unable to write new index file"));
 
 	return 0;
diff --git a/builtin/checkout.c b/builtin/checkout.c
index 3345a0d16..34c3c5b61 100644
--- a/builtin/checkout.c
+++ b/builtin/checkout.c
@@ -247,7 +247,7 @@ static int checkout_paths(const struct checkout_opts *opts,
 	struct object_id rev;
 	struct commit *head;
 	int errs = 0;
-	struct lock_file *lock_file;
+	struct lock_file lock_file = LOCK_INIT;
 
 	if (opts->track != BRANCH_TRACK_UNSPECIFIED)
 		die(_("'%s' cannot be used with updating paths"), "--track");
@@ -275,9 +275,7 @@ static int checkout_paths(const struct checkout_opts *opts,
 		return run_add_interactive(revision, "--patch=checkout",
 					   &opts->pathspec);
 
-	lock_file = xcalloc(1, sizeof(struct lock_file));
-
-	hold_locked_index(lock_file, LOCK_DIE_ON_ERROR);
+	hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
 	if (read_cache_preload(&opts->pathspec) < 0)
 		return error(_("index file corrupt"));
 
@@ -376,7 +374,7 @@ static int checkout_paths(const struct checkout_opts *opts,
 	}
 	errs |= finish_delayed_checkout(&state);
 
-	if (write_locked_index(&the_index, lock_file, COMMIT_LOCK))
+	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
 		die(_("unable to write new index file"));
 
 	read_ref_full("HEAD", 0, rev.hash, NULL);
@@ -472,9 +470,9 @@ static int merge_working_tree(const struct checkout_opts *opts,
 			      int *writeout_error)
 {
 	int ret;
-	struct lock_file *lock_file = xcalloc(1, sizeof(struct lock_file));
+	struct lock_file lock_file = LOCK_INIT;
 
-	hold_locked_index(lock_file, LOCK_DIE_ON_ERROR);
+	hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
 	if (read_cache_preload(NULL) < 0)
 		return error(_("index file corrupt"));
 
@@ -591,7 +589,7 @@ static int merge_working_tree(const struct checkout_opts *opts,
 	if (!cache_tree_fully_valid(active_cache_tree))
 		cache_tree_update(&the_index, WRITE_TREE_SILENT | WRITE_TREE_REPAIR);
 
-	if (write_locked_index(&the_index, lock_file, COMMIT_LOCK))
+	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
 		die(_("unable to write new index file"));
 
 	if (!opts->force && !opts->quiet)
diff --git a/builtin/clone.c b/builtin/clone.c
index dbddd98f8..96a3aaaa1 100644
--- a/builtin/clone.c
+++ b/builtin/clone.c
@@ -706,7 +706,7 @@ static int checkout(int submodule_progress)
 {
 	struct object_id oid;
 	char *head;
-	struct lock_file *lock_file;
+	struct lock_file lock_file = LOCK_INIT;
 	struct unpack_trees_options opts;
 	struct tree *tree;
 	struct tree_desc t;
@@ -733,8 +733,7 @@ static int checkout(int submodule_progress)
 	/* We need to be in the new work tree for the checkout */
 	setup_work_tree();
 
-	lock_file = xcalloc(1, sizeof(struct lock_file));
-	hold_locked_index(lock_file, LOCK_DIE_ON_ERROR);
+	hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
 
 	memset(&opts, 0, sizeof opts);
 	opts.update = 1;
@@ -750,7 +749,7 @@ static int checkout(int submodule_progress)
 	if (unpack_trees(1, &t, &opts) < 0)
 		die(_("unable to checkout working tree"));
 
-	if (write_locked_index(&the_index, lock_file, COMMIT_LOCK))
+	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
 		die(_("unable to write new index file"));
 
 	err |= run_hook_le(NULL, "post-checkout", sha1_to_hex(null_sha1),
diff --git a/builtin/diff.c b/builtin/diff.c
index 7e3ebcea3..91995965d 100644
--- a/builtin/diff.c
+++ b/builtin/diff.c
@@ -203,17 +203,16 @@ static int builtin_diff_combined(struct rev_info *revs,
 
 static void refresh_index_quietly(void)
 {
-	struct lock_file *lock_file;
+	struct lock_file lock_file = LOCK_INIT;
 	int fd;
 
-	lock_file = xcalloc(1, sizeof(struct lock_file));
-	fd = hold_locked_index(lock_file, 0);
+	fd = hold_locked_index(&lock_file, 0);
 	if (fd < 0)
 		return;
 	discard_cache();
 	read_cache();
 	refresh_cache(REFRESH_QUIET|REFRESH_UNMERGED);
-	update_index_if_able(&the_index, lock_file);
+	update_index_if_able(&the_index, &lock_file);
 }
 
 static int builtin_diff_files(struct rev_info *revs, int argc, const char **argv)
diff --git a/config.c b/config.c
index 345d78c2b..0e61e863d 100644
--- a/config.c
+++ b/config.c
@@ -2748,7 +2748,7 @@ int git_config_rename_section_in_file(const char *config_filename,
 {
 	int ret = 0, remove = 0;
 	char *filename_buf = NULL;
-	struct lock_file *lock;
+	struct lock_file lock = LOCK_INIT;
 	int out_fd;
 	char buf[1024];
 	FILE *config_file = NULL;
@@ -2762,8 +2762,7 @@ int git_config_rename_section_in_file(const char *config_filename,
 	if (!config_filename)
 		config_filename = filename_buf = git_pathdup("config");
 
-	lock = xcalloc(1, sizeof(struct lock_file));
-	out_fd = hold_lock_file_for_update(lock, config_filename, 0);
+	out_fd = hold_lock_file_for_update(&lock, config_filename, 0);
 	if (out_fd < 0) {
 		ret = error("could not lock config file %s", config_filename);
 		goto out;
@@ -2782,9 +2781,9 @@ int git_config_rename_section_in_file(const char *config_filename,
 		goto out;
 	}
 
-	if (chmod(get_lock_file_path(lock), st.st_mode & 07777) < 0) {
+	if (chmod(get_lock_file_path(&lock), st.st_mode & 07777) < 0) {
 		ret = error_errno("chmod on %s failed",
-				  get_lock_file_path(lock));
+				  get_lock_file_path(&lock));
 		goto out;
 	}
 
@@ -2805,7 +2804,7 @@ int git_config_rename_section_in_file(const char *config_filename,
 				}
 				store.baselen = strlen(new_name);
 				if (write_section(out_fd, new_name) < 0) {
-					ret = write_error(get_lock_file_path(lock));
+					ret = write_error(get_lock_file_path(&lock));
 					goto out;
 				}
 				/*
@@ -2831,20 +2830,20 @@ int git_config_rename_section_in_file(const char *config_filename,
 			continue;
 		length = strlen(output);
 		if (write_in_full(out_fd, output, length) < 0) {
-			ret = write_error(get_lock_file_path(lock));
+			ret = write_error(get_lock_file_path(&lock));
 			goto out;
 		}
 	}
 	fclose(config_file);
 	config_file = NULL;
 commit_and_out:
-	if (commit_lock_file(lock) < 0)
+	if (commit_lock_file(&lock) < 0)
 		ret = error_errno("could not write config file %s",
 				  config_filename);
 out:
 	if (config_file)
 		fclose(config_file);
-	rollback_lock_file(lock);
+	rollback_lock_file(&lock);
 out_no_rollback:
 	free(filename_buf);
 	return ret;
diff --git a/merge-recursive.c b/merge-recursive.c
index 1d3f8f0d2..24c5c26a6 100644
--- a/merge-recursive.c
+++ b/merge-recursive.c
@@ -2162,7 +2162,7 @@ int merge_recursive_generic(struct merge_options *o,
 			    struct commit **result)
 {
 	int clean;
-	struct lock_file *lock = xcalloc(1, sizeof(struct lock_file));
+	struct lock_file lock = LOCK_INIT;
 	struct commit *head_commit = get_ref(head, o->branch1);
 	struct commit *next_commit = get_ref(merge, o->branch2);
 	struct commit_list *ca = NULL;
@@ -2178,14 +2178,14 @@ int merge_recursive_generic(struct merge_options *o,
 		}
 	}
 
-	hold_locked_index(lock, LOCK_DIE_ON_ERROR);
+	hold_locked_index(&lock, LOCK_DIE_ON_ERROR);
 	clean = merge_recursive(o, head_commit, next_commit, ca,
 			result);
 	if (clean < 0)
 		return clean;
 
 	if (active_cache_changed &&
-	    write_locked_index(&the_index, lock, COMMIT_LOCK))
+	    write_locked_index(&the_index, &lock, COMMIT_LOCK))
 		return err(o, _("Unable to write index."));
 
 	return clean ? 0 : 1;
diff --git a/merge.c b/merge.c
index 1d441ad94..a18a452b5 100644
--- a/merge.c
+++ b/merge.c
@@ -53,11 +53,11 @@ int checkout_fast_forward(const struct object_id *head,
 	struct tree_desc t[MAX_UNPACK_TREES];
 	int i, nr_trees = 0;
 	struct dir_struct dir;
-	struct lock_file *lock_file = xcalloc(1, sizeof(struct lock_file));
+	struct lock_file lock_file = LOCK_INIT;
 
 	refresh_cache(REFRESH_QUIET);
 
-	if (hold_locked_index(lock_file, LOCK_REPORT_ON_ERROR) < 0)
+	if (hold_locked_index(&lock_file, LOCK_REPORT_ON_ERROR) < 0)
 		return -1;
 
 	memset(&trees, 0, sizeof(trees));
@@ -91,8 +91,8 @@ int checkout_fast_forward(const struct object_id *head,
 	}
 	if (unpack_trees(nr_trees, t, &opts))
 		return -1;
-	if (write_locked_index(&the_index, lock_file, COMMIT_LOCK)) {
-		rollback_lock_file(lock_file);
+	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK)) {
+		rollback_lock_file(&lock_file);
 		return error(_("unable to write new index file"));
 	}
 	return 0;
diff --git a/wt-status.c b/wt-status.c
index 6f730ee8f..7d88e7ca8 100644
--- a/wt-status.c
+++ b/wt-status.c
@@ -2299,14 +2299,14 @@ int has_uncommitted_changes(int ignore_submodules)
  */
 int require_clean_work_tree(const char *action, const char *hint, int ignore_submodules, int gently)
 {
-	struct lock_file *lock_file = xcalloc(1, sizeof(*lock_file));
+	struct lock_file lock_file = LOCK_INIT;
 	int err = 0, fd;
 
-	fd = hold_locked_index(lock_file, 0);
+	fd = hold_locked_index(&lock_file, 0);
 	refresh_cache(REFRESH_QUIET);
 	if (0 <= fd)
-		update_index_if_able(&the_index, lock_file);
-	rollback_lock_file(lock_file);
+		update_index_if_able(&the_index, &lock_file);
+	rollback_lock_file(&lock_file);
 
 	if (has_unstaged_changes(ignore_submodules)) {
 		/* TRANSLATORS: the action is e.g. "pull with rebase" */
-- 
2.14.1.727.g9ddaf86


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH 03/11] lockfile: fix documentation on `close_lock_file_gently()`
  2017-10-01 14:56 [PATCH 00/11] various lockfile-leaks and -fixes Martin Ågren
  2017-10-01 14:56 ` [PATCH 01/11] sha1_file: do not leak `lock_file` Martin Ågren
  2017-10-01 14:56 ` [PATCH 02/11] treewide: prefer lockfiles on the stack Martin Ågren
@ 2017-10-01 14:56 ` Martin Ågren
  2017-10-02  5:35   ` Jeff King
  2017-10-01 14:56 ` [PATCH 04/11] tempfile: fix documentation on `delete_tempfile()` Martin Ågren
                   ` (9 subsequent siblings)
  12 siblings, 1 reply; 69+ messages in thread
From: Martin Ågren @ 2017-10-01 14:56 UTC (permalink / raw)
  To: git; +Cc: Jeff King

Commit 83a3069a3 (lockfile: do not rollback lock on failed close,
2017-09-05) forgot to update the documentation next to the function
definition to reflect that the lock is no longer rolled back when
closing fails.

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
---
 lockfile.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/lockfile.h b/lockfile.h
index 7c1c484d7..f401c979f 100644
--- a/lockfile.h
+++ b/lockfile.h
@@ -240,8 +240,8 @@ extern char *get_locked_file_path(struct lock_file *lk);
  * If the lockfile is still open, close it (and the file pointer if it
  * has been opened using `fdopen_lock_file()`) without renaming the
  * lockfile over the file being locked. Return 0 upon success. On
- * failure to `close(2)`, return a negative value and roll back the
- * lock file. Usually `commit_lock_file()`, `commit_lock_file_to()`,
+ * failure to `close(2)`, return a negative value (the lockfile is not
+ * rolled back). Usually `commit_lock_file()`, `commit_lock_file_to()`,
  * or `rollback_lock_file()` should eventually be called.
  */
 static inline int close_lock_file_gently(struct lock_file *lk)
-- 
2.14.1.727.g9ddaf86


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH 04/11] tempfile: fix documentation on `delete_tempfile()`
  2017-10-01 14:56 [PATCH 00/11] various lockfile-leaks and -fixes Martin Ågren
                   ` (2 preceding siblings ...)
  2017-10-01 14:56 ` [PATCH 03/11] lockfile: fix documentation on `close_lock_file_gently()` Martin Ågren
@ 2017-10-01 14:56 ` Martin Ågren
  2017-10-02  5:38   ` Jeff King
  2017-10-01 14:56 ` [PATCH 05/11] cache-tree: simplify locking logic Martin Ågren
                   ` (8 subsequent siblings)
  12 siblings, 1 reply; 69+ messages in thread
From: Martin Ågren @ 2017-10-01 14:56 UTC (permalink / raw)
  To: git; +Cc: Michael Haggerty, Jeff King

The function has always been documented as returning 0 or -1. It is in
fact `void`. Correct that. As part of the rearrangements we lose the
mention that `delete_tempfile()` might set `errno`. Because there is
no return value, the user can't really know whether it did anyway.

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
---
 tempfile.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/tempfile.h b/tempfile.h
index b8f4b5e14..450908b2e 100644
--- a/tempfile.h
+++ b/tempfile.h
@@ -68,10 +68,10 @@
  * `create_tempfile()` returns an allocated tempfile on success or NULL
  * on failure. On errors, `errno` describes the reason for failure.
  *
- * `delete_tempfile()`, `rename_tempfile()`, and `close_tempfile_gently()`
- * return 0 on success. On failure they set `errno` appropriately and return
- * -1. `delete` and `rename` (but not `close`) do their best to delete the
- * temporary file before returning.
+ * `rename_tempfile()` and `close_tempfile_gently()` return 0 on success.
+ * On failure they set `errno` appropriately and return -1.
+ * `delete_tempfile()` and `rename` (but not `close`) do their best to
+ * delete the temporary file before returning.
  */
 
 struct tempfile {
-- 
2.14.1.727.g9ddaf86


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH 05/11] cache-tree: simplify locking logic
  2017-10-01 14:56 [PATCH 00/11] various lockfile-leaks and -fixes Martin Ågren
                   ` (3 preceding siblings ...)
  2017-10-01 14:56 ` [PATCH 04/11] tempfile: fix documentation on `delete_tempfile()` Martin Ågren
@ 2017-10-01 14:56 ` Martin Ågren
  2017-10-02  3:40   ` Junio C Hamano
  2017-10-02  5:41   ` Jeff King
  2017-10-01 14:56 ` [PATCH 06/11] apply: move lockfile into `apply_state` Martin Ågren
                   ` (7 subsequent siblings)
  12 siblings, 2 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-01 14:56 UTC (permalink / raw)
  To: git; +Cc: Paul Tan

After we have taken the lock using `LOCK_DIE_ON_ERROR`, we know that
`newfd` is non-negative. So when we check for exactly that property
before calling `write_locked_index()`, the outcome is guaranteed.

If we write and commit successfully, we set `newfd = -1`, so that we can
later avoid calling `rollback_lock_file` on an already-committed lock.
But we might just as well unconditionally call `rollback_lock_file()` --
it will be a no-op if we have already committed.

All in all, we use `newfd` as a bool and the only benefit we get from it
is that we can avoid calling a no-op. Remove `newfd` so that we have one
variable less to reason about.

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
---
 cache-tree.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/cache-tree.c b/cache-tree.c
index 71d092ed5..f646f5673 100644
--- a/cache-tree.c
+++ b/cache-tree.c
@@ -602,11 +602,11 @@ static struct cache_tree *cache_tree_find(struct cache_tree *it, const char *pat
 
 int write_index_as_tree(unsigned char *sha1, struct index_state *index_state, const char *index_path, int flags, const char *prefix)
 {
-	int entries, was_valid, newfd;
+	int entries, was_valid;
 	struct lock_file lock_file = LOCK_INIT;
 	int ret = 0;
 
-	newfd = hold_lock_file_for_update(&lock_file, index_path, LOCK_DIE_ON_ERROR);
+	hold_lock_file_for_update(&lock_file, index_path, LOCK_DIE_ON_ERROR);
 
 	entries = read_index_from(index_state, index_path);
 	if (entries < 0) {
@@ -625,10 +625,7 @@ int write_index_as_tree(unsigned char *sha1, struct index_state *index_state, co
 			ret = WRITE_TREE_UNMERGED_INDEX;
 			goto out;
 		}
-		if (0 <= newfd) {
-			if (!write_locked_index(index_state, &lock_file, COMMIT_LOCK))
-				newfd = -1;
-		}
+		write_locked_index(index_state, &lock_file, COMMIT_LOCK);
 		/* Not being able to write is fine -- we are only interested
 		 * in updating the cache-tree part, and if the next caller
 		 * ends up using the old index with unupdated cache-tree part
@@ -650,8 +647,7 @@ int write_index_as_tree(unsigned char *sha1, struct index_state *index_state, co
 		hashcpy(sha1, index_state->cache_tree->oid.hash);
 
 out:
-	if (0 <= newfd)
-		rollback_lock_file(&lock_file);
+	rollback_lock_file(&lock_file);
 	return ret;
 }
 
-- 
2.14.1.727.g9ddaf86


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH 06/11] apply: move lockfile into `apply_state`
  2017-10-01 14:56 [PATCH 00/11] various lockfile-leaks and -fixes Martin Ågren
                   ` (4 preceding siblings ...)
  2017-10-01 14:56 ` [PATCH 05/11] cache-tree: simplify locking logic Martin Ågren
@ 2017-10-01 14:56 ` Martin Ågren
  2017-10-02  5:48   ` Jeff King
  2017-10-01 14:56 ` [PATCH 07/11] apply: remove `newfd` from `struct apply_state` Martin Ågren
                   ` (6 subsequent siblings)
  12 siblings, 1 reply; 69+ messages in thread
From: Martin Ågren @ 2017-10-01 14:56 UTC (permalink / raw)
  To: git; +Cc: Christian Couder

We have two users of `struct apply_state` and the related functionality
in apply.c. Each user sets up its `apply_state` by handing over a
pointer to its static `lock_file`. (Before 076aa2cbd (tempfile:
auto-allocate tempfiles on heap, 2017-09-05), we could never free
lockfiles, so making them static was a reasonable approach.)

Other than that, they never directly access their `lock_file`s, which
are instead handled by the functionality in apply.c.

To make life easier for the caller and to make it less tempting for a
future caller to mess with the lock, make apply.c fully responsible for
setting up the `lock_file`. As mentioned above, it is now safe to free a
`lock_file`, so we can make the `struct apply_state` contain an actual
`struct lock_file` instead of a pointer to one.

The user in builtin/apply.c is rather simple. For builtin/am.c, we might
worry that the lock state is actually meant to be inherited across
calls. But the lock is only taken as `apply_all_patches()` executes, and
code inspection shows that it will always be released.

Alternatively, we can observe that the lock itself is never queried
directly. When we decide whether we should lock, we check a related
variable `newfd`. That variable is not inherited, so from the point of
view of apply.c, the state machine really is reset with each call to
`init_apply_state()`. (It would be a bug if `newfd` and the lock status
were not in sync. The duplication of information in `newfd` and the lock
will be addressed in the next patch.)
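
Seen from a caller, the setup then reduces to something like this (a
condensed sketch of builtin/apply.c below; option parsing and
`check_apply_state()` elided):

  struct apply_state state;

  if (init_apply_state(&state, prefix))
          exit(128);
  /* ... parse options ... */
  ret = apply_all_patches(&state, argc, argv, options);
  /* apply.c took and released state.lock_file internally */
  clear_apply_state(&state);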

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
---
 apply.c         | 14 +++++---------
 apply.h         |  5 ++---
 builtin/am.c    |  3 +--
 builtin/apply.c |  4 +---
 4 files changed, 9 insertions(+), 17 deletions(-)

diff --git a/apply.c b/apply.c
index c022af53a..5a6ca10a7 100644
--- a/apply.c
+++ b/apply.c
@@ -75,12 +75,10 @@ static int parse_ignorewhitespace_option(struct apply_state *state,
 }
 
 int init_apply_state(struct apply_state *state,
-		     const char *prefix,
-		     struct lock_file *lock_file)
+		     const char *prefix)
 {
 	memset(state, 0, sizeof(*state));
 	state->prefix = prefix;
-	state->lock_file = lock_file;
 	state->newfd = -1;
 	state->apply = 1;
 	state->line_termination = '\n';
@@ -146,8 +144,6 @@ int check_apply_state(struct apply_state *state, int force_apply)
 	}
 	if (state->check_index)
 		state->unsafe_paths = 0;
-	if (!state->lock_file)
-		return error("BUG: state->lock_file should not be NULL");
 
 	if (state->apply_verbosity <= verbosity_silent) {
 		state->saved_error_routine = get_error_routine();
@@ -4711,11 +4707,11 @@ static int apply_patch(struct apply_state *state,
 	state->update_index = state->check_index && state->apply;
 	if (state->update_index && state->newfd < 0) {
 		if (state->index_file)
-			state->newfd = hold_lock_file_for_update(state->lock_file,
+			state->newfd = hold_lock_file_for_update(&state->lock_file,
 								 state->index_file,
 								 LOCK_DIE_ON_ERROR);
 		else
-			state->newfd = hold_locked_index(state->lock_file, LOCK_DIE_ON_ERROR);
+			state->newfd = hold_locked_index(&state->lock_file, LOCK_DIE_ON_ERROR);
 	}
 
 	if (state->check_index && read_apply_cache(state) < 0) {
@@ -4911,7 +4907,7 @@ int apply_all_patches(struct apply_state *state,
 	}
 
 	if (state->update_index) {
-		res = write_locked_index(&the_index, state->lock_file, COMMIT_LOCK);
+		res = write_locked_index(&the_index, &state->lock_file, COMMIT_LOCK);
 		if (res) {
 			error(_("Unable to write new index file"));
 			res = -128;
@@ -4924,7 +4920,7 @@ int apply_all_patches(struct apply_state *state,
 
 end:
 	if (state->newfd >= 0) {
-		rollback_lock_file(state->lock_file);
+		rollback_lock_file(&state->lock_file);
 		state->newfd = -1;
 	}
 
diff --git a/apply.h b/apply.h
index d9b395770..cf00cda17 100644
--- a/apply.h
+++ b/apply.h
@@ -37,7 +37,7 @@ struct apply_state {
 	const char *prefix;
 
 	/* These are lock_file related */
-	struct lock_file *lock_file;
+	struct lock_file lock_file;
 	int newfd;
 
 	/* These control what gets looked at and modified */
@@ -116,8 +116,7 @@ extern int apply_parse_options(int argc, const char **argv,
 			       int *force_apply, int *options,
 			       const char * const *apply_usage);
 extern int init_apply_state(struct apply_state *state,
-			    const char *prefix,
-			    struct lock_file *lock_file);
+			    const char *prefix);
 extern void clear_apply_state(struct apply_state *state);
 extern int check_apply_state(struct apply_state *state, int force_apply);
 
diff --git a/builtin/am.c b/builtin/am.c
index 4e16fd428..40968428d 100644
--- a/builtin/am.c
+++ b/builtin/am.c
@@ -1488,11 +1488,10 @@ static int run_apply(const struct am_state *state, const char *index_file)
 	struct argv_array apply_opts = ARGV_ARRAY_INIT;
 	struct apply_state apply_state;
 	int res, opts_left;
-	static struct lock_file lock_file;
 	int force_apply = 0;
 	int options = 0;
 
-	if (init_apply_state(&apply_state, NULL, &lock_file))
+	if (init_apply_state(&apply_state, NULL))
 		die("BUG: init_apply_state() failed");
 
 	argv_array_push(&apply_opts, "apply");
diff --git a/builtin/apply.c b/builtin/apply.c
index 81b9a61c3..48d398933 100644
--- a/builtin/apply.c
+++ b/builtin/apply.c
@@ -9,8 +9,6 @@ static const char * const apply_usage[] = {
 	NULL
 };
 
-static struct lock_file lock_file;
-
 int cmd_apply(int argc, const char **argv, const char *prefix)
 {
 	int force_apply = 0;
@@ -18,7 +16,7 @@ int cmd_apply(int argc, const char **argv, const char *prefix)
 	int ret;
 	struct apply_state state;
 
-	if (init_apply_state(&state, prefix, &lock_file))
+	if (init_apply_state(&state, prefix))
 		exit(128);
 
 	argc = apply_parse_options(argc, argv,
-- 
2.14.1.727.g9ddaf86


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH 07/11] apply: remove `newfd` from `struct apply_state`
  2017-10-01 14:56 [PATCH 00/11] various lockfile-leaks and -fixes Martin Ågren
                   ` (5 preceding siblings ...)
  2017-10-01 14:56 ` [PATCH 06/11] apply: move lockfile into `apply_state` Martin Ågren
@ 2017-10-01 14:56 ` Martin Ågren
  2017-10-02  5:50   ` Jeff King
  2017-10-01 14:56 ` [PATCH 08/11] cache.h: document `write_locked_index()` Martin Ågren
                   ` (5 subsequent siblings)
  12 siblings, 1 reply; 69+ messages in thread
From: Martin Ågren @ 2017-10-01 14:56 UTC (permalink / raw)
  To: git; +Cc: Christian Couder

Similar to a previous patch, we do not need to use `newfd` to signal
that we have a lockfile to clean up. We can just unconditionally call
`rollback_lock_file`. If we do not hold the lock, it will be a no-op.

Where we check `newfd` to decide whether we need to take the lock, we
can instead use `is_lock_file_locked()`.

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
---
 apply.c | 17 ++++++-----------
 apply.h |  3 +--
 2 files changed, 7 insertions(+), 13 deletions(-)

diff --git a/apply.c b/apply.c
index 5a6ca10a7..d676debd5 100644
--- a/apply.c
+++ b/apply.c
@@ -79,7 +79,6 @@ int init_apply_state(struct apply_state *state,
 {
 	memset(state, 0, sizeof(*state));
 	state->prefix = prefix;
-	state->newfd = -1;
 	state->apply = 1;
 	state->line_termination = '\n';
 	state->p_value = 1;
@@ -4705,13 +4704,13 @@ static int apply_patch(struct apply_state *state,
 		state->apply = 0;
 
 	state->update_index = state->check_index && state->apply;
-	if (state->update_index && state->newfd < 0) {
+	if (state->update_index && !is_lock_file_locked(&state->lock_file)) {
 		if (state->index_file)
-			state->newfd = hold_lock_file_for_update(&state->lock_file,
-								 state->index_file,
-								 LOCK_DIE_ON_ERROR);
+			hold_lock_file_for_update(&state->lock_file,
+						  state->index_file,
+						  LOCK_DIE_ON_ERROR);
 		else
-			state->newfd = hold_locked_index(&state->lock_file, LOCK_DIE_ON_ERROR);
+			hold_locked_index(&state->lock_file, LOCK_DIE_ON_ERROR);
 	}
 
 	if (state->check_index && read_apply_cache(state) < 0) {
@@ -4913,16 +4912,12 @@ int apply_all_patches(struct apply_state *state,
 			res = -128;
 			goto end;
 		}
-		state->newfd = -1;
 	}
 
 	res = !!errs;
 
 end:
-	if (state->newfd >= 0) {
-		rollback_lock_file(&state->lock_file);
-		state->newfd = -1;
-	}
+	rollback_lock_file(&state->lock_file);
 
 	if (state->apply_verbosity <= verbosity_silent) {
 		set_error_routine(state->saved_error_routine);
diff --git a/apply.h b/apply.h
index cf00cda17..dc4a01905 100644
--- a/apply.h
+++ b/apply.h
@@ -36,9 +36,8 @@ enum apply_verbosity {
 struct apply_state {
 	const char *prefix;
 
-	/* These are lock_file related */
+	/* Lock file */
 	struct lock_file lock_file;
-	int newfd;
 
 	/* These control what gets looked at and modified */
 	int apply; /* this is not a dry-run */
-- 
2.14.1.727.g9ddaf86


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH 08/11] cache.h: document `write_locked_index()`
  2017-10-01 14:56 [PATCH 00/11] various lockfile-leaks and -fixes Martin Ågren
                   ` (6 preceding siblings ...)
  2017-10-01 14:56 ` [PATCH 07/11] apply: remove `newfd` from `struct apply_state` Martin Ågren
@ 2017-10-01 14:56 ` Martin Ågren
  2017-10-01 14:56 ` [PATCH 09/11] read-cache: require flags for `write_locked_index()` Martin Ågren
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-01 14:56 UTC (permalink / raw)
  To: git; +Cc: Nguyễn Thái Ngọc Duy

The next patches will tweak the behavior of this function. Document it
in order to establish a basis for those patches.
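
For reference, the call pattern being documented looks like this in a
typical caller (a sketch mirroring the conversions earlier in this
series):

  struct lock_file lock = LOCK_INIT;

  hold_locked_index(&lock, LOCK_DIE_ON_ERROR);
  /* ... modify the_index ... */
  if (write_locked_index(&the_index, &lock, COMMIT_LOCK))
          die(_("unable to write new index file"));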

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
---
 cache.h | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/cache.h b/cache.h
index ea6c236e0..4605e8228 100644
--- a/cache.h
+++ b/cache.h
@@ -601,9 +601,25 @@ extern int do_read_index(struct index_state *istate, const char *path,
 extern int read_index_from(struct index_state *, const char *path);
 extern int is_index_unborn(struct index_state *);
 extern int read_index_unmerged(struct index_state *);
+
+/* Flags for `write_locked_index()`. */
 #define COMMIT_LOCK		(1 << 0)
 #define CLOSE_LOCK		(1 << 1)
+
+/*
+ * Write the index while holding an already-taken lock. The flags may
+ * contain at most one of `COMMIT_LOCK` and `CLOSE_LOCK`.
+ *
+ * Unless a split index is in use, write the index into the lock-file,
+ * then commit or close it, as indicated by `flags`.
+ *
+ * With a split index, write the shared index to a temporary file,
+ * adjust its permissions and rename it into place, then write the
+ * split index to the lockfile. If the temporary file for the shared
+ * index cannot be created, fall back to the normal case.
+ */
 extern int write_locked_index(struct index_state *, struct lock_file *lock, unsigned flags);
+
 extern int discard_index(struct index_state *);
 extern void move_index_extensions(struct index_state *dst, struct index_state *src);
 extern int unmerged_index(const struct index_state *);
-- 
2.14.1.727.g9ddaf86


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH 09/11] read-cache: require flags for `write_locked_index()`
  2017-10-01 14:56 [PATCH 00/11] various lockfile-leaks and -fixes Martin Ågren
                   ` (7 preceding siblings ...)
  2017-10-01 14:56 ` [PATCH 08/11] cache.h: document `write_locked_index()` Martin Ågren
@ 2017-10-01 14:56 ` Martin Ågren
  2017-10-02  3:49   ` Junio C Hamano
  2017-10-02  6:00   ` Jeff King
  2017-10-01 14:56 ` [PATCH 10/11] read-cache: don't leave dangling pointer in `do_write_index()` Martin Ågren
                   ` (3 subsequent siblings)
  12 siblings, 2 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-01 14:56 UTC (permalink / raw)
  To: git; +Cc: Nguyễn Thái Ngọc Duy

`write_locked_index()` takes two flags: `COMMIT_LOCK` and `CLOSE_LOCK`.
At most one is allowed, but it is also possible to use no flag at all,
i.e., `0`. However, when `write_locked_index()` calls `do_write_index()`,
the temporary file, a.k.a. the lockfile, will be closed. So passing `0` is
effectively the same as `CLOSE_LOCK`, which seems like a bug.

We might feel tempted to restructure the code in order to close the file
later, or conditionally. It also feels a bit unfortunate that we simply
"happen" to close the lock by way of an implementation detail of
lockfiles. But note that we need to close the temporary file before
`stat`-ing it, at least on Windows. See 9f41c7a6b (read-cache: close
index.lock in do_write_index, 2017-04-26).

So let's punt on the restructuring. Instead, require that one of the
flags is set. Adjust documentation and the assert we already have for
checking that we don't have too many flags. Add a macro `HAS_SINGLE_BIT`
(inspired by `HAS_MULTI_BITS`) to simplify this check and similar checks
in the future.

When (if) we need "write the index and leave the lockfile open", we can
revisit this.
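
To spell out how the new macro behaves for these particular flags
(illustration only, not part of the patch):

  HAS_SINGLE_BIT(COMMIT_LOCK)              /* true:  exactly one flag */
  HAS_SINGLE_BIT(CLOSE_LOCK)               /* true:  exactly one flag */
  HAS_SINGLE_BIT(0)                        /* false: no flag at all */
  HAS_SINGLE_BIT(COMMIT_LOCK | CLOSE_LOCK) /* false: both flags set */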

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
---
 cache.h           |  4 ++--
 git-compat-util.h |  7 ++++++-
 read-cache.c      | 12 ++++++------
 3 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/cache.h b/cache.h
index 4605e8228..32aa8afdf 100644
--- a/cache.h
+++ b/cache.h
@@ -607,8 +607,8 @@ extern int read_index_unmerged(struct index_state *);
 #define CLOSE_LOCK		(1 << 1)
 
 /*
- * Write the index while holding an already-taken lock. The flags may
- * contain at most one of `COMMIT_LOCK` and `CLOSE_LOCK`.
+ * Write the index while holding an already-taken lock. The flags must
+ * contain precisely one of `COMMIT_LOCK` and `CLOSE_LOCK`.
  *
  * Unless a split index is in use, write the index into the lock-file,
  * then commit or close it, as indicated by `flags`.
diff --git a/git-compat-util.h b/git-compat-util.h
index cedad4d58..fa8a5a95c 100644
--- a/git-compat-util.h
+++ b/git-compat-util.h
@@ -111,7 +111,12 @@
 #endif
 
 #define MSB(x, bits) ((x) & TYPEOF(x)(~0ULL << (bitsizeof(x) - (bits))))
-#define HAS_MULTI_BITS(i)  ((i) & ((i) - 1))  /* checks if an integer has more than 1 bit set */
+
+/* Checks if an integer has more than 1 bit set. */
+#define HAS_MULTI_BITS(i)  ((i) & ((i) - 1))
+
+/* Checks if an integer has precisely 1 bit set. */
+#define HAS_SINGLE_BIT(i)  ((i) && !HAS_MULTI_BITS(i))
 
 #define DIV_ROUND_UP(n,d) (((n) + (d) - 1) / (d))
 
diff --git a/read-cache.c b/read-cache.c
index 65f4fe837..1ec2e8304 100644
--- a/read-cache.c
+++ b/read-cache.c
@@ -2343,14 +2343,14 @@ static int do_write_locked_index(struct index_state *istate, struct lock_file *l
 	int ret = do_write_index(istate, lock->tempfile, 0);
 	if (ret)
 		return ret;
-	assert((flags & (COMMIT_LOCK | CLOSE_LOCK)) !=
-	       (COMMIT_LOCK | CLOSE_LOCK));
+	assert(HAS_SINGLE_BIT(flags & (COMMIT_LOCK | CLOSE_LOCK)));
 	if (flags & COMMIT_LOCK)
 		return commit_locked_index(lock);
-	else if (flags & CLOSE_LOCK)
-		return close_lock_file_gently(lock);
-	else
-		return ret;
+	/*
+	 * Otherwise it's CLOSE_LOCK. The lockfile already happens
+	 * to have been closed, but let's be specific.
+	 */
+	return close_lock_file_gently(lock);
 }
 
 static int write_split_index(struct index_state *istate,
-- 
2.14.1.727.g9ddaf86


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH 10/11] read-cache: don't leave dangling pointer in `do_write_index()`
  2017-10-01 14:56 [PATCH 00/11] various lockfile-leaks and -fixes Martin Ågren
                   ` (8 preceding siblings ...)
  2017-10-01 14:56 ` [PATCH 09/11] read-cache: require flags for `write_locked_index()` Martin Ågren
@ 2017-10-01 14:56 ` Martin Ågren
  2017-10-02  6:15   ` Jeff King
  2017-10-01 14:56 ` [PATCH 11/11] read-cache: roll back lock on error with `COMMIT_LOCK` Martin Ågren
                   ` (2 subsequent siblings)
  12 siblings, 1 reply; 69+ messages in thread
From: Martin Ågren @ 2017-10-01 14:56 UTC (permalink / raw)
  To: git; +Cc: Nguyễn Thái Ngọc Duy

If `do_write_index(..., struct tempfile *, ...)` fails to close the
temporary file, it deletes it. This resets the pointer to NULL, but only
the pointer which is local to `do_write_index()`, not the pointer that
the caller holds. If the caller ever dereferences its pointer, we have
undefined behavior (most likely a crash). One of the two callers tries
to delete the temporary file on error, so it will dereference it.

We could change the function to take a `struct tempfile **` instead. But
we have another problem here. The other caller, `write_locked_index()`,
passes in `lock->tempfile`. So if we close the temporary file and reset
`lock->tempfile` to NULL, we have effectively rolled back the lock. If
`write_locked_index()` was called with the `CLOSE_LOCK` flag, that means
the lock is being rolled back against the wishes of its caller.
(`write_locked_index()` used to call
`close_lock_file()`, which would have rolled back on error. Commit
83a3069a3 (lockfile: do not rollback lock on failed close, 2017-09-05)
changed to not rolling back.)

Note also that if we had ever failed to close the lockfile,
either with `COMMIT_LOCK` or with `CLOSE_LOCK`, that would have happened
already in `do_write_index()` and we would have left the lock in such a
state that trying to, e.g., roll it back or reopen the file would most
likely have crashed.

Do not delete the temporary file in `do_write_index()`. One caller will
avoid rolling back the lock, the other caller will delete its temporary
file anyway, and for both callers we will avoid crashes.

Expand the documentation on `write_locked_index()` to note that the lock
will never be rolled back when using `CLOSE_LOCK`. We can not yet make a
similar claim about `COMMIT_LOCK`; we'll fix that in the next commit.

It does feel a bit unfortunate that we simply "happen" to close the lock
by way of an implementation-detail of lockfiles. But note that we need
to close the temporary file before `stat`-ing it, at least on Windows.
See 9f41c7a6b (read-cache: close index.lock in do_write_index,
2017-04-26).
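
The dangling-pointer hazard is the usual consequence of passing the
pointer by value (a generic sketch with made-up function names):

  static void helper(struct tempfile *t)
  {
          delete_tempfile(&t);    /* deletes the file, NULLs only the local t */
  }

  static void caller(struct tempfile *t)
  {
          helper(t);
          delete_tempfile(&t);    /* t dangles here: undefined behavior */
  }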

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
---
 cache.h      | 2 ++
 read-cache.c | 1 -
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/cache.h b/cache.h
index 32aa8afdf..4d09da792 100644
--- a/cache.h
+++ b/cache.h
@@ -617,6 +617,8 @@ extern int read_index_unmerged(struct index_state *);
  * adjust its permissions and rename it into place, then write the
  * split index to the lockfile. If the temporary file for the shared
  * index cannot be created, fall back to the normal case.
+ *
+ * With `CLOSE_LOCK`, the lock will be neither committed nor rolled back.
  */
 extern int write_locked_index(struct index_state *, struct lock_file *lock, unsigned flags);
 
diff --git a/read-cache.c b/read-cache.c
index 1ec2e8304..8e498e879 100644
--- a/read-cache.c
+++ b/read-cache.c
@@ -2314,7 +2314,6 @@ static int do_write_index(struct index_state *istate, struct tempfile *tempfile,
 		return -1;
 	if (close_tempfile_gently(tempfile)) {
 		error(_("could not close '%s'"), tempfile->filename.buf);
-		delete_tempfile(&tempfile);
 		return -1;
 	}
 	if (stat(tempfile->filename.buf, &st))
-- 
2.14.1.727.g9ddaf86


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH 11/11] read-cache: roll back lock on error with `COMMIT_LOCK`
  2017-10-01 14:56 [PATCH 00/11] various lockfile-leaks and -fixes Martin Ågren
                   ` (9 preceding siblings ...)
  2017-10-01 14:56 ` [PATCH 10/11] read-cache: don't leave dangling pointer in `do_write_index()` Martin Ågren
@ 2017-10-01 14:56 ` Martin Ågren
  2017-10-02  4:01   ` Junio C Hamano
  2017-10-02  2:37 ` [PATCH 00/11] various lockfile-leaks and -fixes Junio C Hamano
  2017-10-02  6:22 ` Jeff King
  12 siblings, 1 reply; 69+ messages in thread
From: Martin Ågren @ 2017-10-01 14:56 UTC (permalink / raw)
  To: git; +Cc: Nguyễn Thái Ngọc Duy

When `write_locked_index(..., lock, COMMIT_LOCK)` returns, the lockfile
might have been neither committed nor rolled back. This happens if the
call to `do_write_index()` early in `do_write_locked_index()` fails, or
if `write_shared_index()` fails.

Some callers obviously do not expect the lock to remain locked. They
simply kick an error up the call stack without rolling back the lock.
They typically have the lock on the stack, so there is now no way of
releasing it. (Some callers actually do roll back the lock, and many
others simply do not care because they are about to die anyway.) Maybe
we don't try to take the lock again within the same process, so this is
not a problem at this time, but if we ever learn to try, we could
deadlock.

We could teach existing callers to roll back where it matters. That
would introduce some boiler-plate code, and future users might
re-introduce the same mistake. After all, it does seem reasonable to
expect that `COMMIT_LOCK` does what `commit_lock_file()` does: commits or
rolls back.

Teach `write_locked_index(..., COMMIT_LOCK)` to roll back the lock
before returning. If we have already succeeded and committed, it will be
a noop. Simplify the existing callers where we now have a superfluous
call to `rollback_lockfile()`. This should keep future readers from
wondering why the callers are inconsistent.

For the users which `die()` if `write_locked_index()` fails, this change
does not bring any benefit. But it also doesn't hurt -- it simply means
we clean up the lockfile in `do_write_locked_index()` instead of in our
custom `atexit(3)` handler.

Out-of-tree callers will be affected by this change, but they will be
getting the locking-behavior that they almost certainly expect.

For the other flag, `CLOSE_LOCK`, leaving the lock as-is on error is
appropriate, since that is how `close_lock_file_gently()` behaves.
(There are not many in-tree users and they all `die()` on error.)

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
---
 builtin/difftool.c |  1 -
 cache.h            |  1 +
 merge.c            |  4 +---
 read-cache.c       | 13 ++++++++-----
 sequencer.c        |  1 -
 5 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/builtin/difftool.c b/builtin/difftool.c
index b2d3ba753..bcc79d188 100644
--- a/builtin/difftool.c
+++ b/builtin/difftool.c
@@ -616,7 +616,6 @@ static int run_dir_diff(const char *extcmd, int symlinks, const char *prefix,
 			if (hold_lock_file_for_update(&lock, buf.buf, 0) < 0 ||
 			    write_locked_index(&wtindex, &lock, COMMIT_LOCK)) {
 				ret = error("could not write %s", buf.buf);
-				rollback_lock_file(&lock);
 				goto finish;
 			}
 			changed_files(&wt_modified, buf.buf, workdir);
diff --git a/cache.h b/cache.h
index 4d09da792..0da90a545 100644
--- a/cache.h
+++ b/cache.h
@@ -618,6 +618,7 @@ extern int read_index_unmerged(struct index_state *);
  * split index to the lockfile. If the temporary file for the shared
  * index cannot be created, fall back to the normal case.
  *
+ * With `COMMIT_LOCK`, the lock is always committed or rolled back.
  * With `CLOSE_LOCK`, the lock will be neither committed nor rolled back.
  */
 extern int write_locked_index(struct index_state *, struct lock_file *lock, unsigned flags);
diff --git a/merge.c b/merge.c
index a18a452b5..e5d796c9f 100644
--- a/merge.c
+++ b/merge.c
@@ -91,9 +91,7 @@ int checkout_fast_forward(const struct object_id *head,
 	}
 	if (unpack_trees(nr_trees, t, &opts))
 		return -1;
-	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK)) {
-		rollback_lock_file(&lock_file);
+	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
 		return error(_("unable to write new index file"));
-	}
 	return 0;
 }
diff --git a/read-cache.c b/read-cache.c
index 8e498e879..53f1941c9 100644
--- a/read-cache.c
+++ b/read-cache.c
@@ -2182,9 +2182,8 @@ static int has_racy_timestamp(struct index_state *istate)
 void update_index_if_able(struct index_state *istate, struct lock_file *lockfile)
 {
 	if ((istate->cache_changed || has_racy_timestamp(istate)) &&
-	    verify_index(istate) &&
-	    write_locked_index(istate, lockfile, COMMIT_LOCK))
-		rollback_lock_file(lockfile);
+	    verify_index(istate))
+		write_locked_index(istate, lockfile, COMMIT_LOCK);
 }
 
 static int do_write_index(struct index_state *istate, struct tempfile *tempfile,
@@ -2498,7 +2497,8 @@ int write_locked_index(struct index_state *istate, struct lock_file *lock,
 	    (istate->cache_changed & ~EXTMASK)) {
 		if (si)
 			hashclr(si->base_sha1);
-		return do_write_locked_index(istate, lock, flags);
+		ret = do_write_locked_index(istate, lock, flags);
+		goto out;
 	}
 
 	if (getenv("GIT_TEST_SPLIT_INDEX")) {
@@ -2514,7 +2514,7 @@ int write_locked_index(struct index_state *istate, struct lock_file *lock,
 	if (new_shared_index) {
 		ret = write_shared_index(istate, lock, flags);
 		if (ret)
-			return ret;
+			goto out;
 	}
 
 	ret = write_split_index(istate, lock, flags);
@@ -2523,6 +2523,9 @@ int write_locked_index(struct index_state *istate, struct lock_file *lock,
 	if (!ret && !new_shared_index)
 		freshen_shared_index(sha1_to_hex(si->base_sha1), 1);
 
+out:
+	if (flags & COMMIT_LOCK)
+		rollback_lock_file(lock);
 	return ret;
 }
 
diff --git a/sequencer.c b/sequencer.c
index 60636ce54..d56c38081 100644
--- a/sequencer.c
+++ b/sequencer.c
@@ -1183,7 +1183,6 @@ static int read_and_refresh_cache(struct replay_opts *opts)
 	refresh_index(&the_index, REFRESH_QUIET|REFRESH_UNMERGED, NULL, NULL, NULL);
 	if (the_index.cache_changed && index_fd >= 0) {
 		if (write_locked_index(&the_index, &index_lock, COMMIT_LOCK)) {
-			rollback_lock_file(&index_lock);
 			return error(_("git %s: failed to refresh the index"),
 				_(action_name(opts)));
 		}
-- 
2.14.1.727.g9ddaf86


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* Re: [PATCH 00/11] various lockfile-leaks and -fixes
  2017-10-01 14:56 [PATCH 00/11] various lockfile-leaks and -fixes Martin Ågren
                   ` (10 preceding siblings ...)
  2017-10-01 14:56 ` [PATCH 11/11] read-cache: roll back lock on error with `COMMIT_LOCK` Martin Ågren
@ 2017-10-02  2:37 ` Junio C Hamano
  2017-10-02  6:22 ` Jeff King
  12 siblings, 0 replies; 69+ messages in thread
From: Junio C Hamano @ 2017-10-02  2:37 UTC (permalink / raw)
  To: Martin Ågren
  Cc: git, Michael Haggerty, Jeff King, Paul Tan, Christian Couder,
	Nguyễn Thái Ngọc Duy

Martin Ågren <martin.agren@gmail.com> writes:

> Martin Ågren (11):
>   sha1_file: do not leak `lock_file`
>   treewide: prefer lockfiles on the stack
>   lockfile: fix documentation on `close_lock_file_gently()`
>   tempfile: fix documentation on `delete_tempfile()`
>   cache-tree: simplify locking logic
>   apply: move lockfile into `apply_state`
>   apply: remove `newfd` from `struct apply_state`
>   cache.h: document `write_locked_index()`
>   read-cache: require flags for `write_locked_index()`
>   read-cache: don't leave dangling pointer in `do_write_index()`
>   read-cache: roll back lock on error with `COMMIT_LOCK`
>
>  apply.c            | 25 ++++++++-----------------
>  apply.h            |  8 +++-----
>  builtin/am.c       | 27 ++++++++++++---------------
>  builtin/apply.c    |  4 +---
>  builtin/checkout.c | 14 ++++++--------
>  builtin/clone.c    |  7 +++----
>  builtin/diff.c     |  7 +++----
>  builtin/difftool.c |  1 -
>  cache-tree.c       | 12 ++++--------
>  cache.h            | 19 +++++++++++++++++++
>  config.c           | 17 ++++++++---------
>  git-compat-util.h  |  7 ++++++-
>  lockfile.h         |  4 ++--
>  merge-recursive.c  |  6 +++---
>  merge.c            |  8 +++-----
>  read-cache.c       | 26 ++++++++++++++------------
>  sequencer.c        |  1 -
>  sha1_file.c        | 16 +++++++---------
>  tempfile.h         |  8 ++++----
>  wt-status.c        |  8 ++++----
>  20 files changed, 110 insertions(+), 115 deletions(-)

That's quite a lot of changes to the low-level code.  I'll need to
revisit this topic again later, but from a cursory read I did not
find anything glaringly wrong in it.  Thanks for working on this.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH 02/11] treewide: prefer lockfiles on the stack
  2017-10-01 14:56 ` [PATCH 02/11] treewide: prefer lockfiles on the stack Martin Ågren
@ 2017-10-02  3:37   ` Junio C Hamano
  2017-10-02  4:12     ` Martin Ågren
  2017-10-02  5:34   ` Jeff King
  1 sibling, 1 reply; 69+ messages in thread
From: Junio C Hamano @ 2017-10-02  3:37 UTC (permalink / raw)
  To: Martin Ågren; +Cc: git

Martin Ågren <martin.agren@gmail.com> writes:

> diff --git a/builtin/diff.c b/builtin/diff.c
> index 7e3ebcea3..91995965d 100644
> --- a/builtin/diff.c
> +++ b/builtin/diff.c
> @@ -203,17 +203,16 @@ static int builtin_diff_combined(struct rev_info *revs,
>  
>  static void refresh_index_quietly(void)
>  {
> -	struct lock_file *lock_file;
> +	struct lock_file lock_file = LOCK_INIT;
>  	int fd;
>  
> -	lock_file = xcalloc(1, sizeof(struct lock_file));
> -	fd = hold_locked_index(lock_file, 0);
> +	fd = hold_locked_index(&lock_file, 0);
>  	if (fd < 0)
>  		return;
>  	discard_cache();
>  	read_cache();
>  	refresh_cache(REFRESH_QUIET|REFRESH_UNMERGED);
> -	update_index_if_able(&the_index, lock_file);
> +	update_index_if_able(&the_index, &lock_file);
>  }

This is not a show-stopper for this patch series, but just something
I noticed, something that used to be unavoidable in the old world
order that requires us to leak lockfiles, but now it becomes more
obvious.

update_index_if_able() calls write_lock_index() with the COMMIT_LOCK
option, but it does so conditionally.  When it does not make the call,
the lockfile is left behind to be cleaned up by the atexit() handler,
but when it does, it is closed and released.

Perhaps update_index_if_able() needs to be further cleaned up to
always release the resource held by the lockfile structure?  Its
caller cannot differentiate (and more importantly, its caller does
not want to care) if the _if_able call actually has updated the
index or not and act differently afterwards.
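
(For illustration, one possible shape of such a cleanup, assuming patch
11's guarantee that `COMMIT_LOCK` always commits or rolls back; just a
sketch, not something from the series:)

  void update_index_if_able(struct index_state *istate, struct lock_file *lockfile)
  {
          if ((istate->cache_changed || has_racy_timestamp(istate)) &&
              verify_index(istate))
                  write_locked_index(istate, lockfile, COMMIT_LOCK);
          else
                  rollback_lock_file(lockfile);
  }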


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH 05/11] cache-tree: simplify locking logic
  2017-10-01 14:56 ` [PATCH 05/11] cache-tree: simplify locking logic Martin Ågren
@ 2017-10-02  3:40   ` Junio C Hamano
  2017-10-02  5:41   ` Jeff King
  1 sibling, 0 replies; 69+ messages in thread
From: Junio C Hamano @ 2017-10-02  3:40 UTC (permalink / raw)
  To: Martin Ågren; +Cc: git, Paul Tan

Martin Ågren <martin.agren@gmail.com> writes:

> After we have taken the lock using `LOCK_DIE_ON_ERROR`, we know that
> `newfd` is non-negative. So when we check for exactly that property
> before calling `write_locked_index()`, the outcome is guaranteed.
>
> If we write and commit successfully, we set `newfd = -1`, so that we can
> later avoid calling `rollback_lock_file` on an already-committed lock.
> But we might just as well unconditionally call `rollback_lock_file()` --
> it will be a no-op if we have already committed.
>
> All in all, we use `newfd` as a bool and the only benefit we get from it
> is that we can avoid calling a no-op. Remove `newfd` so that we have one
> variable less to reason about.

Nicely explained.  Makes sense.

Thanks.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH 09/11] read-cache: require flags for `write_locked_index()`
  2017-10-01 14:56 ` [PATCH 09/11] read-cache: require flags for `write_locked_index()` Martin Ågren
@ 2017-10-02  3:49   ` Junio C Hamano
  2017-10-02  4:14     ` Martin Ågren
  2017-10-02  6:00   ` Jeff King
  1 sibling, 1 reply; 69+ messages in thread
From: Junio C Hamano @ 2017-10-02  3:49 UTC (permalink / raw)
  To: Martin Ågren; +Cc: git, Nguyễn Thái Ngọc Duy

Martin Ågren <martin.agren@gmail.com> writes:

> ... Instead, require that one of the
> flags is set. Adjust documentation and the assert we already have for
> checking that we don't have too many flags. Add a macro `HAS_SINGLE_BIT`
> (inspired by `HAS_MULTI_BITS`) to simplify this check and similar checks
> in the future.

I do not have a strong opinion against this approach, but if
something can take only one of two values, wouldn't it make more
sense to express it as a single boolean, I wonder.  Then there is no
need to invent a cute HAS_SINGLE_BIT() macro, either.

"commit and leave it open" cannot be expressed with such a scheme,
but with the HAS_SINGLE_BIT() scheme it can't anyway, so...
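
For reference, git-compat-util.h already has

  #define HAS_MULTI_BITS(i)  ((i) & ((i) - 1))

so I would imagine the new macro is something along the lines of

  #define HAS_SINGLE_BIT(i)  ((i) && !HAS_MULTI_BITS(i))

(I am guessing at the exact definition; the idea is just "non-zero
and not more than one bit set").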

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH 11/11] read-cache: roll back lock on error with `COMMIT_LOCK`
  2017-10-01 14:56 ` [PATCH 11/11] read-cache: roll back lock on error with `COMMIT_LOCK` Martin Ågren
@ 2017-10-02  4:01   ` Junio C Hamano
  0 siblings, 0 replies; 69+ messages in thread
From: Junio C Hamano @ 2017-10-02  4:01 UTC (permalink / raw)
  To: Martin Ågren; +Cc: git, Nguyễn Thái Ngọc Duy

Martin Ågren <martin.agren@gmail.com> writes:

> Teach `write_locked_index(..., COMMIT_LOCK)` to roll back the lock
> before returning. If we have already succeeded and committed, it will be
> a noop. Simplify the existing callers where we now have a superfluous
> call to `rollback_lockfile()`. This should keep future readers from
> wondering why the callers are inconsistent.

Makes sense.  Thanks for cleaning up.  The split-index codepath was
something I've never reviewed seriously with a fine-toothed comb, and
I feel a lot better now knowing that somebody other than its authors
and me has read it through ;-)

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH 02/11] treewide: prefer lockfiles on the stack
  2017-10-02  3:37   ` Junio C Hamano
@ 2017-10-02  4:12     ` Martin Ågren
  0 siblings, 0 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-02  4:12 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: Git Mailing List

On 2 October 2017 at 05:37, Junio C Hamano <gitster@pobox.com> wrote:
> Martin Ågren <martin.agren@gmail.com> writes:
>
> This is not a show-stopper for this patch series, but just something
> I noticed: something that used to be unavoidable in the old world
> order that required us to leak lockfiles, but that now becomes more
> obvious.
>
> update_index_if_able() calls write_locked_index() with the COMMIT_LOCK
> option, but it does so conditionally.  When it does not make the call,
> the lockfile is left behind to be cleaned up by the atexit() handler,
> but when it does, it is closed and released.
>
> Perhaps update_index_if_able() needs to be further cleaned up to
> always release the resource held by the lockfile structure?  Its
> caller cannot tell (and more importantly, its caller does not want
> to care) whether the _if_able call actually updated the index or
> not, and so cannot act differently afterwards.

Ah, indeed. I should be able to do that rather easily on top. I tend to
agree that the caller shouldn't have to worry about it since the
function is just a one-off "if_able" without any return value. One
caller actually does roll back the lock, but the others don't.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH 09/11] read-cache: require flags for `write_locked_index()`
  2017-10-02  3:49   ` Junio C Hamano
@ 2017-10-02  4:14     ` Martin Ågren
  2017-10-02 10:16       ` Martin Ågren
  0 siblings, 1 reply; 69+ messages in thread
From: Martin Ågren @ 2017-10-02  4:14 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: Git Mailing List, Nguyễn Thái Ngọc Duy

On 2 October 2017 at 05:49, Junio C Hamano <gitster@pobox.com> wrote:
> Martin Ågren <martin.agren@gmail.com> writes:
>
>> ... Instead, require that one of the
>> flags is set. Adjust documentation and the assert we already have for
>> checking that we don't have too many flags. Add a macro `HAS_SINGLE_BIT`
>> (inspired by `HAS_MULTI_BITS`) to simplify this check and similar checks
>> in the future.
>
> I do not have a strong opinion against this approach, but if
> something can take only one of two values, wouldn't it make more
> sense to express it as a single boolean, I wonder.  Then there is no
> need to invent a cute HAS_SINGLE_BIT() macro, either.
>
> "commit and leave it open" cannot be expressed with such a scheme,
> but with the HAS_SINGLE_BIT() scheme it can't anyway, so...

I did briefly consider renaming `flags` to `commit` and re-#defining the
two flags to 0 and 1 (or even updating all the callers to use literal
zeros and ones). It felt a bit awkward to downgrade `flags` to a bool
-- normally we'd do the reverse change. But maybe I shouldn't have
rejected that so easily. If we have a feeling we won't need other flags
(or the "don't even close the file" behavior) any time soon, maybe it'd
be good to tighten things up a bit. Thanks for looking at these.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH 01/11] sha1_file: do not leak `lock_file`
  2017-10-01 14:56 ` [PATCH 01/11] sha1_file: do not leak `lock_file` Martin Ågren
@ 2017-10-02  5:26   ` Jeff King
  2017-10-02 10:15     ` Martin Ågren
  0 siblings, 1 reply; 69+ messages in thread
From: Jeff King @ 2017-10-02  5:26 UTC (permalink / raw)
  To: Martin Ågren; +Cc: git

On Sun, Oct 01, 2017 at 04:56:02PM +0200, Martin Ågren wrote:

> There is no longer any need to allocate and leak a `struct lock_file`.
> Initialize it on the stack instead.
> 
> Instead of setting `lock = NULL` to signal that we have already rolled
> back, and that we should not do any more work, check with
> `is_lock_file_locked()`. Since we take the lock with
> `LOCK_DIE_ON_ERROR`, it evaluates to false exactly when we have called
> `rollback_lock_file()`.

This looks correct (and is a good direction, as it drops a leak).

The original code is actually a bit confusing/unsafe, as we set the
"found" flag early and roll back here:

> @@ -481,17 +481,15 @@ void add_to_alternates_file(const char *reference)
>  		strbuf_release(&line);
>  		fclose(in);
>  
> -		if (found) {
> -			rollback_lock_file(lock);
> -			lock = NULL;
> -		}
> +		if (found)
> +			rollback_lock_file(&lock);

and that leaves the "out" filehandle dangling. It's correct because of
the later "do we still have the lock" check:

> -	if (lock) {
> +	if (is_lock_file_locked(&lock)) {
>  		fprintf_or_die(out, "%s\n", reference);

but the two spots must remain in sync. If I were writing it from scratch
I might have bumped "found" to the scope of the whole function, and then
replaced this final "do we still have the lock" with:

  if (found)
	rollback_lock_file(&lock);
  else {
	fprintf_or_die(out, "%s\n", reference);
	if (commit_lock_file(&lock))
	...etc...
  }

I don't know if it's worth changing now or not.

-Peff

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH 02/11] treewide: prefer lockfiles on the stack
  2017-10-01 14:56 ` [PATCH 02/11] treewide: prefer lockfiles on the stack Martin Ågren
  2017-10-02  3:37   ` Junio C Hamano
@ 2017-10-02  5:34   ` Jeff King
  1 sibling, 0 replies; 69+ messages in thread
From: Jeff King @ 2017-10-02  5:34 UTC (permalink / raw)
  To: Martin Ågren; +Cc: git

On Sun, Oct 01, 2017 at 04:56:03PM +0200, Martin Ågren wrote:

> There is no longer any need to allocate and leak a `struct lock_file`.
> The previous patch addressed an instance where we needed a minor tweak
> alongside the trivial changes.
> 
> Deal with the remaining instances where we allocate and leak a struct
> within a single function. Change them to have the `struct lock_file` on
> the stack instead.
> 
> These instances were identified by running `git grep "^\s*struct
> lock_file\s*\*"`.

Thanks. These all look pretty straightforward, and as a general rule
this _should_ be a safe conversion with respect to dangling references
(see the sketch below), since:

  - the tempfile pointers copied onto the global cleanup list are always
    allocated, so there's no danger of our stack variables going out of
    scope and leaving cruft in the global list.

  - when the lock is committed or rolled back, we NULL the pointer after
    freeing the tempfile. Nobody should be looking at the lock after
    that, but if there is such a bug, we should reliably hit an assert
    (if they use the accessors which check for validity) or a segfault
    (if somebody tries to dereference it directly).

    So it's possible that this subtly introduces a new segfault or
    assertion failure, but in that case it would be finding an
    _existing_ bug which was previously just reading cruft from the
    inactive tempfile struct.
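
To spell out the first point as a sketch (this is just an illustration
of the post-076aa2cbd API, not code from any of the patches):

  struct lock_file lock = LOCK_INIT;	/* lives on the stack */

  hold_lock_file_for_update(&lock, "some-file", 0);
  /*
   * lock.tempfile now points at a heap-allocated tempfile, and that
   * heap object is what gets registered on the global cleanup list;
   * nothing on that list refers to the stack variable `lock` itself.
   */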

-Peff

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH 03/11] lockfile: fix documentation on `close_lock_file_gently()`
  2017-10-01 14:56 ` [PATCH 03/11] lockfile: fix documentation on `close_lock_file_gently()` Martin Ågren
@ 2017-10-02  5:35   ` Jeff King
  0 siblings, 0 replies; 69+ messages in thread
From: Jeff King @ 2017-10-02  5:35 UTC (permalink / raw)
  To: Martin Ågren; +Cc: git

On Sun, Oct 01, 2017 at 04:56:04PM +0200, Martin Ågren wrote:

> Commit 83a3069a3 (lockfile: do not rollback lock on failed close,
> 2017-09-05) forgot to update the documentation by the function definition
> to reflect that the lock is not rolled back in case closing fails.

Oops, thanks for catching. Patch looks obviously correct.

-Peff

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH 04/11] tempfile: fix documentation on `delete_tempfile()`
  2017-10-01 14:56 ` [PATCH 04/11] tempfile: fix documentation on `delete_tempfile()` Martin Ågren
@ 2017-10-02  5:38   ` Jeff King
  0 siblings, 0 replies; 69+ messages in thread
From: Jeff King @ 2017-10-02  5:38 UTC (permalink / raw)
  To: Martin Ågren; +Cc: git, Michael Haggerty

On Sun, Oct 01, 2017 at 04:56:05PM +0200, Martin Ågren wrote:

> The function has always been documented as returning 0 or -1. It is in
> fact `void`. Correct that. As part of the rearrangements we lose the
> mention that `delete_tempfile()` might set `errno`. Because there is
> no return value, the user can't really know whether it did anyway.

Right, your documentation change makes sense. Thanks for catching this.

I was curious whether this inconsistency was introduced during review,
but even the initial version sent to the list returned void. So I guess
it happened sometime during the writing. :)

-Peff

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH 05/11] cache-tree: simplify locking logic
  2017-10-01 14:56 ` [PATCH 05/11] cache-tree: simplify locking logic Martin Ågren
  2017-10-02  3:40   ` Junio C Hamano
@ 2017-10-02  5:41   ` Jeff King
  1 sibling, 0 replies; 69+ messages in thread
From: Jeff King @ 2017-10-02  5:41 UTC (permalink / raw)
  To: Martin Ågren; +Cc: git, Paul Tan

On Sun, Oct 01, 2017 at 04:56:06PM +0200, Martin Ågren wrote:

> After we have taken the lock using `LOCK_DIE_ON_ERROR`, we know that
> `newfd` is non-negative. So when we check for exactly that property
> before calling `write_locked_index()`, the outcome is guaranteed.
> 
> If we write and commit successfully, we set `newfd = -1`, so that we can
> later avoid calling `rollback_lock_file` on an already-committed lock.
> But we might just as well unconditionally call `rollback_lock_file()` --
> it will be a no-op if we have already committed.
> 
> All in all, we use `newfd` as a bool and the only benefit we get from it
> is that we can avoid calling a no-op. Remove `newfd` so that we have one
> variable less to reason about.

Nice, this looks much simpler and the reasoning above is all sound.

I think cmd_checkout_index() has the exact same thing going on.
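
That is, builtin/checkout-index.c currently has (excerpting)

  	int newfd = -1;
  	...
  	newfd = hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
  	...
  	if (0 <= newfd &&
  	    write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
  		die("Unable to write new index file");

where `newfd` is again really just used as a boolean.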

-Peff

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH 06/11] apply: move lockfile into `apply_state`
  2017-10-01 14:56 ` [PATCH 06/11] apply: move lockfile into `apply_state` Martin Ågren
@ 2017-10-02  5:48   ` Jeff King
  0 siblings, 0 replies; 69+ messages in thread
From: Jeff King @ 2017-10-02  5:48 UTC (permalink / raw)
  To: Martin Ågren; +Cc: git, Christian Couder

On Sun, Oct 01, 2017 at 04:56:07PM +0200, Martin Ågren wrote:

> We have two users of `struct apply_state` and the related functionality
> in apply.c. Each user sets up its `apply_state` by handing over a
> pointer to its static `lock_file`. (Before 076aa2cbd (tempfile:
> auto-allocate tempfiles on heap, 2017-09-05), we could never free
> lockfiles, so making them static was a reasonable approach.)
> 
> Other than that, they never directly access their `lock_file`s, which
> are instead handled by the functionality in apply.c.
> 
> To make life easier for the caller and to make it less tempting for a
> future caller to mess with the lock, make apply.c fully responsible for
> setting up the `lock_file`. As mentioned above, it is now safe to free a
> `lock_file`, so we can make the `struct apply_state` contain an actual
> `struct lock_file` instead of a pointer to one.
> 
> The user in builtin/apply.c is rather simple. For builtin/am.c, we might
> worry that the lock state is actually meant to be inherited across
> calls. But the lock is only taken as `apply_all_patches()` executes, and
> code inspection shows that it will always be released.
> 
> Alternatively, we can observe that the lock itself is never queried
> directly. When we decide whether we should lock, we check a related
> variable `newfd`. That variable is not inherited, so from the point of
> view of apply.c, the state machine really is reset with each call to
> `init_apply_state()`. (It would be a bug if `newfd` and the lock status
> were not in sync. The duplication of information in `newfd` and the lock
> will be addressed in the next patch.)

Nicely explained. I didn't re-verify your inspection of
apply_all_patches(), but I think it would be fine regardless. We don't
init_apply_state() again, so both before and after your patch we are
always looking at the "same" lock struct (and so if somebody forgot to
unlock, in either case we'd hit the "this tempfile is already active"
assertion).

-Peff

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH 07/11] apply: remove `newfd` from `struct apply_state`
  2017-10-01 14:56 ` [PATCH 07/11] apply: remove `newfd` from `struct apply_state` Martin Ågren
@ 2017-10-02  5:50   ` Jeff King
  0 siblings, 0 replies; 69+ messages in thread
From: Jeff King @ 2017-10-02  5:50 UTC (permalink / raw)
  To: Martin Ågren; +Cc: git, Christian Couder

On Sun, Oct 01, 2017 at 04:56:08PM +0200, Martin Ågren wrote:

> Similar to a previous patch, we do not need to use `newfd` to signal
> that we have a lockfile to clean up. We can just unconditionally call
> `rollback_lock_file`. If we do not hold the lock, it will be a no-op.
> 
> Where we check `newfd` to decide whether we need to take the lock, we
> can instead use `is_lock_file_locked()`.

Looks good. I was surprised you didn't have to replace any writing to
"newfd". But the writing all happens via write_locked_index(), which
directly accesses the lock's fd member.
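
For anyone following along, the two points in the commit message
translate to a pattern roughly like this (a generic illustration, not
the actual apply.c hunks):

  struct lock_file lock = LOCK_INIT;

  /* take the lock only if we do not hold it already
     (this check used to look at "newfd") */
  if (!is_lock_file_locked(&lock))
	hold_locked_index(&lock, LOCK_DIE_ON_ERROR);

  ...

  /* clean up unconditionally; this is a no-op if we never took the
     lock or have already committed it */
  rollback_lock_file(&lock);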

-Peff

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH 09/11] read-cache: require flags for `write_locked_index()`
  2017-10-01 14:56 ` [PATCH 09/11] read-cache: require flags for `write_locked_index()` Martin Ågren
  2017-10-02  3:49   ` Junio C Hamano
@ 2017-10-02  6:00   ` Jeff King
  1 sibling, 0 replies; 69+ messages in thread
From: Jeff King @ 2017-10-02  6:00 UTC (permalink / raw)
  To: Martin Ågren; +Cc: git, Nguyễn Thái Ngọc Duy

On Sun, Oct 01, 2017 at 04:56:10PM +0200, Martin Ågren wrote:

> `write_locked_index()` takes two flags: `COMMIT_LOCK` and `CLOSE_LOCK`.
> At most one is allowed. But it is also possible to use no flag, i.e.,
> `0`. But when `write_locked_index()` calls `do_write_index()`, the
> temporary file, a.k.a. the lockfile, will be closed. So passing `0` is
> effectively the same as `CLOSE_LOCK`, which seems like a bug.
> 
> We might feel tempted to restructure the code in order to close the file
> later, or conditionally. It also feels a bit unfortunate that we simply
> "happen" to close the lock by way of an implementation detail of
> lockfiles. But note that we need to close the temporary file before
> `stat`-ing it, at least on Windows. See 9f41c7a6b (read-cache: close
> index.lock in do_write_index, 2017-04-26).

Interesting. So it seems like this is potentially a bug introduced by
9f41c7a6b, as before then passing "0" for the flags would neither commit
nor close the lockfile.

At the same time, I cannot imagine why one would want that behavior. The
only use would be if you planned to append more to the index. Which
seems kind of odd to do outside of write_locked_index() itself. And
certainly there aren't any callers in the existing codebase who pass 0.
Your assert() should help catch any that are added.

-Peff

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH 10/11] read-cache: don't leave dangling pointer in `do_write_index()`
  2017-10-01 14:56 ` [PATCH 10/11] read-cache: don't leave dangling pointer in `do_write_index()` Martin Ågren
@ 2017-10-02  6:15   ` Jeff King
  2017-10-02  6:20     ` Jeff King
  0 siblings, 1 reply; 69+ messages in thread
From: Jeff King @ 2017-10-02  6:15 UTC (permalink / raw)
  To: Martin Ågren; +Cc: git, Nguyễn Thái Ngọc Duy

On Sun, Oct 01, 2017 at 04:56:11PM +0200, Martin Ågren wrote:

> If `do_write_index(..., struct tempfile *, ...)` fails to close the
> temporary file, it deletes it. This resets the pointer to NULL, but only
> the pointer which is local to `do_write_index()`, not the pointer that
> the caller holds. If the caller ever dereferences its pointer, we have
> undefined behavior (most likely a crash). One of the two callers tries
> to delete the temporary file on error, so it will dereference it.

Hrm. I think this was introduced by my lockfile series, since before
that the memory would have remained valid (but with its "active" flag
unset, which is enough for delete_tempfile() to happily return early).

Part of my reason for switching delete_tempfile() to take a
pointer-to-pointer was so that we had to investigate each such call. But
obviously I failed to analyze this case correctly. :(

> We could change the function to take a `struct tempfile **` instead. But
> we have another problem here. The other caller, `write_locked_index()`,
> passes in `lock->tempfile`. So if we close the temporary file and reset
> `lock->tempfile` to NULL, we have effectively rolled back the lock. That
> caller is `write_locked_index()` and if it is used with the
> `CLOSE_LOCK` flag, it means the lock is being rolled back against the
> wishes of the caller. (`write_locked_index()` used to call
> `close_lock_file()`, which would have rolled back on error. Commit
> 83a3069a3 (lockfile: do not rollback lock on failed close, 2017-09-05)
> changed to not rolling back.)

I'm not sure I follow here. It seems like write_locked_index() has three
outcomes:

  - if there was an error, the lock is rolled back

  - if we were successful and the caller asked for CLOSE_LOCK, we remain
    locked

  - if we were successful and the caller asked for COMMIT_LOCK, we
    commit the lock

It sounds like you are considering the first outcome a bug, but I think
it is intentional. Without that, every caller of write_locked_index()
would need to release the lock themselves. That's especially cumbersome
if they called with COMMIT_LOCK, as they otherwise can assume that
write_locked_index() has released the lock one way or another.

So I actually think that just switching to a "struct tempfile **" here
is a reasonable solution (I'd also be fine with doing this and then
having do_write_locked_index() roll back the lock itself on error).

Or am I missing something?

-Peff

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH 10/11] read-cache: don't leave dangling pointer in `do_write_index()`
  2017-10-02  6:15   ` Jeff King
@ 2017-10-02  6:20     ` Jeff King
  0 siblings, 0 replies; 69+ messages in thread
From: Jeff King @ 2017-10-02  6:20 UTC (permalink / raw)
  To: Martin Ågren; +Cc: git, Nguyễn Thái Ngọc Duy

On Mon, Oct 02, 2017 at 02:15:30AM -0400, Jeff King wrote:

> I'm not sure I follow here. It seems like write_locked_index() has three
> outcomes:
> 
>   - if there was an error, the lock is rolled back
> 
>   - if we were successful and the caller asked for CLOSE_LOCK, we remain
>     locked
> 
>   - if we were successful and the caller asked for COMMIT_LOCK, we
>     commit the lock
> 
> It sounds like you are considering the first outcome a bug, but I think
> it is intentional. Without that, every caller of write_locked_index()
> would need to release the lock themselves. That's especially cumbersome
> if they called with COMMIT_LOCK, as they otherwise can assume that
> write_locked_index() has released the lock one way or another.
> 
> So I actually think that just switching to a "struct tempfile **" here
> is a reasonable solution (I'd also be fine with doing this and then
> having do_write_locked_index() roll back the lock itself on error).
> 
> Or am I missing something?

Well, one thing I was certainly missing was reading your patch 11. :)

That fixes the COMMIT_LOCK case. We are still changing the behavior of
CLOSE_LOCK on error, though.  The existing callers all seem to die
immediately so it wouldn't matter either way, but there could in theory
be new ones in topics-in-flight.

The other thing I was missing is that we are absolutely inconsistent
about this "close on error". It only happens for _one_ of the error
returns of do_write_index(), and the others would have left the file
open and not rolled back.

So in retrospect, I think your approach is the right direction.

-Peff

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH 00/11] various lockfile-leaks and -fixes
  2017-10-01 14:56 [PATCH 00/11] various lockfile-leaks and -fixes Martin Ågren
                   ` (11 preceding siblings ...)
  2017-10-02  2:37 ` [PATCH 00/11] various lockfile-leaks and -fixes Junio C Hamano
@ 2017-10-02  6:22 ` Jeff King
  2017-10-02  6:30   ` Junio C Hamano
  12 siblings, 1 reply; 69+ messages in thread
From: Jeff King @ 2017-10-02  6:22 UTC (permalink / raw)
  To: Martin Ågren
  Cc: git, Michael Haggerty, Paul Tan, Christian Couder,
	Nguyễn Thái Ngọc Duy

On Sun, Oct 01, 2017 at 04:56:01PM +0200, Martin Ågren wrote:

> A recent series allowed `struct lock_file`s to be freed [1], so I wanted
> to get rid of the "simple" leaks of this kind. I found a couple of
> lock-related cleanups along the way and it resulted in this series. It
> feels a bit scary at eleven patches -- especially as this is about
> locking -- but I hope I've managed to put this into understandable and
> reviewable patches. Reviews, thoughts, opinions welcome, as always.

Thanks for working on this (and cleaning up several messes that I left).

I've read through each and with the exception of patches 10 and 11, they
all look good to me.

For 10 and 11, I think you've convinced me that the current behavior is
buggy. I do wonder if the subtleties might be easier to understand as a
single patch, but I'm OK either way.

-Peff

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH 00/11] various lockfile-leaks and -fixes
  2017-10-02  6:22 ` Jeff King
@ 2017-10-02  6:30   ` Junio C Hamano
  2017-10-02 10:19     ` Martin Ågren
  0 siblings, 1 reply; 69+ messages in thread
From: Junio C Hamano @ 2017-10-02  6:30 UTC (permalink / raw)
  To: Jeff King
  Cc: Martin Ågren, git, Michael Haggerty, Paul Tan,
	Christian Couder, Nguyễn Thái Ngọc Duy

Jeff King <peff@peff.net> writes:

> I've read through each and with the exception of patches 10 and 11, they
> all look good to me.
>
> For 10 and 11, I think you've convinced me that the current behavior is
> buggy. I do wonder if the subtleties might be easier to understand as a
> single patch, but I'm OK either way.

I found it was not too hard to understand what is going on with
10/11 as separate patches, and I suspect that it would also be OK if
they were a single patch (but I cannot easily "unsee" them, so this
is merely a suspicion).

Thanks, both.  Let's merge this to 'next' soonish.



^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH 01/11] sha1_file: do not leak `lock_file`
  2017-10-02  5:26   ` Jeff King
@ 2017-10-02 10:15     ` Martin Ågren
  0 siblings, 0 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-02 10:15 UTC (permalink / raw)
  To: Jeff King; +Cc: Git Mailing List

On 2 October 2017 at 07:26, Jeff King <peff@peff.net> wrote:
> On Sun, Oct 01, 2017 at 04:56:02PM +0200, Martin Ågren wrote:
>
> The original code is actually a bit confusing/unsafe, as we set the
> "found" flag early and roll back here:
>
>> @@ -481,17 +481,15 @@ void add_to_alternates_file(const char *reference)
>>               strbuf_release(&line);
>>               fclose(in);
>>
>> -             if (found) {
>> -                     rollback_lock_file(lock);
>> -                     lock = NULL;
>> -             }
>> +             if (found)
>> +                     rollback_lock_file(&lock);
>
> and that leaves the "out" filehandle dangling. It's correct because of
> the later "do we still have the lock" check:
>
>> -     if (lock) {
>> +     if (is_lock_file_locked(&lock)) {
>>               fprintf_or_die(out, "%s\n", reference);
>
> but the two spots must remain in sync. If I were writing it from scratch
> I might have bumped "found" to the scope of the whole function, and then
> replaced this final "do we still have the lock" with:
>
>   if (found)
>         rollback_lock_file(&lock);
>   else {
>         fprintf_or_die(out, "%s\n", reference);
>         if (commit_lock_file(&lock))
>         ...etc...
>   }
>
> I don't know if it's worth changing now or not.

I'd like to address the feedback on other commits in a re-roll. I will
use that opportunity to try your approach above instead of this patch.

Thanks.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH 09/11] read-cache: require flags for `write_locked_index()`
  2017-10-02  4:14     ` Martin Ågren
@ 2017-10-02 10:16       ` Martin Ågren
  0 siblings, 0 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-02 10:16 UTC (permalink / raw)
  To: Junio C Hamano
  Cc: Git Mailing List, Nguyễn Thái Ngọc Duy,
	Jeff King

On 2 October 2017 at 06:14, Martin Ågren <martin.agren@gmail.com> wrote:
> On 2 October 2017 at 05:49, Junio C Hamano <gitster@pobox.com> wrote:
>> Martin Ågren <martin.agren@gmail.com> writes:
>>
>>> ... Instead, require that one of the
>>> flags is set. Adjust documentation and the assert we already have for
>>> checking that we don't have too many flags. Add a macro `HAS_SINGLE_BIT`
>>> (inspired by `HAS_MULTI_BITS`) to simplify this check and similar checks
>>> in the future.
>>
>> I do not have a strong opinion against this approach, but if
>> something can take only one of two values, wouldn't it make more
>> sense to express it as a single boolean, I wonder.  Then there is no
>> need to invent a cute HAS_SINGLE_BIT() macro, either.
>>
>> "commit and leave it open" cannot be expressed with such a scheme,
>> but with the HAS_SINGLE_BIT() scheme it can't anyway, so...
>
> I did briefly consider renaming `flags` to `commit` and re-#defining the
> two flags to 0 and 1 (or even updating all the callers to use literal
> zeros and ones). It felt a bit awkward to downgrade `flags` to a bool
> -- normally we'd do the reverse change. But maybe I shouldn't have
> rejected that so easily. If we have a feeling we won't need other flags
> (or the "don't even close the file" behavior) any time soon, maybe it'd
> be good to tighten things up a bit. Thanks for looking at these.

Of course it wouldn't have to be as invasive. It could be "the lock will
always be closed and with COMMIT_LOCK, it will also be committed".
CLOSE_LOCK would be removed and the few current users of CLOSE_LOCK
would be converted to use 0.
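
That is, callers would hypothetically look like this (these are not
actual callers, just the calling convention I have in mind):

  write_locked_index(&the_index, &lock, 0);           /* write and close, but keep holding the lock */
  write_locked_index(&the_index, &lock, COMMIT_LOCK); /* write, then commit (i.e. release) the lock */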

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH 00/11] various lockfile-leaks and -fixes
  2017-10-02  6:30   ` Junio C Hamano
@ 2017-10-02 10:19     ` Martin Ågren
  2017-10-03  6:21       ` Junio C Hamano
  0 siblings, 1 reply; 69+ messages in thread
From: Martin Ågren @ 2017-10-02 10:19 UTC (permalink / raw)
  To: Junio C Hamano
  Cc: Jeff King, Git Mailing List, Michael Haggerty, Paul Tan,
	Christian Couder, Nguyễn Thái Ngọc Duy

On 2 October 2017 at 08:30, Junio C Hamano <gitster@pobox.com> wrote:
> Jeff King <peff@peff.net> writes:
>
>> I've read through each and with the exception of patches 10 and 11, they
>> all look good to me.
>>
>> For 10 and 11, I think you've convinced me that the current behavior is
>> buggy. I do wonder if the subtleties might be easier to understand as a
>> single patch, but I'm OK either way.
>
> I found it was not too hard to understand what is going on with
> 10/11 as separate patches, and I suspect that it would also be OK if
> they were a single patch (but I cannot easily "unsee" them, so this
> is merely a suspicion).
>
> Thanks, both.  Let's merge this to 'next' soonish.

Thanks both of you for your comments. Based on them, I have made the
following notes:

Patch 1: Make the logic a bit more future-proof/safe.

Patch 9: Instead of the two flags and the `HAS_SINGLE_BIT`-"cuteness",
just drop `CLOSE_LOCK` altogether. That should simplify things a bit
further. It might also help me come up with better explanations for
patches 10-11.

Patches 10-11: I'm sorry that I didn't do a better job of explaining
this. They have a couple of different things going on, but they are
obviously related and even a bit entangled. Let me try and see what I
can do; I'll also try squashing them. Maybe with `CLOSE_LOCK` gone, I'll
do better.

Especially 9-11 make me feel like wanting to re-roll this, for future
readers if nothing else. I expect to be able to do so in the middle of
this week (I don't know how this interferes with Junio's definition of
"soonish").

As fallout that could be taken care of now or later, I've noted this:

Patch 2: A follow-up could teach `update_index_if_able()` to always
commit or roll back.

Patch 5: A follow-up could clean up a similar construct in
`cmd_checkout_index()`.

Thanks again for your input. It's much appreciated.

Martin

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH 00/11] various lockfile-leaks and -fixes
  2017-10-02 10:19     ` Martin Ågren
@ 2017-10-03  6:21       ` Junio C Hamano
  2017-10-05 20:32         ` [PATCH v2 00/12] " Martin Ågren
  0 siblings, 1 reply; 69+ messages in thread
From: Junio C Hamano @ 2017-10-03  6:21 UTC (permalink / raw)
  To: Martin Ågren
  Cc: Jeff King, Git Mailing List, Michael Haggerty, Paul Tan,
	Christian Couder, Nguyễn Thái Ngọc Duy

Martin Ågren <martin.agren@gmail.com> writes:

> On 2 October 2017 at 08:30, Junio C Hamano <gitster@pobox.com> wrote:
> ...
>> Thanks, both.  Let's merge this to 'next' soonish.
>
> Thanks both of you for your comments. Based on them, I have made the
> following notes:
> ...
> Especially 9-11 make me feel like wanting to re-roll this, for future
> readers if nothing else. I expect to be able to do so in the middle of
> this week (I don't know how this interferes with Junio's definition of
> "soonish").

OK, then I'll hold off and expect such a reroll.

Thanks.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* [PATCH v2 00/12] Re: various lockfile-leaks and -fixes
  2017-10-03  6:21       ` Junio C Hamano
@ 2017-10-05 20:32         ` Martin Ågren
  2017-10-05 20:32           ` [PATCH v2 01/12] sha1_file: do not leak `lock_file` Martin Ågren
                             ` (11 more replies)
  0 siblings, 12 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-05 20:32 UTC (permalink / raw)
  To: git
  Cc: Michael Haggerty, Jeff King, Paul Tan, Christian Couder,
	Nguyễn Thái Ngọc Duy, Junio C Hamano

On 3 October 2017 at 08:21, Junio C Hamano <gitster@pobox.com> wrote:
> Martin Ågren <martin.agren@gmail.com> writes:
>> On 2 October 2017 at 08:30, Junio C Hamano <gitster@pobox.com> wrote:
>>
>> Thanks both of you for your comments. Based on them, I have made the
>> following notes:
>> ...
>> Especially 9-11 make me feel like wanting to re-roll this, for future
>> readers if nothing else. I expect to be able to do so in the middle of
>> this week (I don't know how this interferes with Junio's definition of
>> "soonish").
>
> OK, then I'll hold off and expect such a reroll.

Here it is. Several patches are unchanged (fetched from Junio's tree and
reposted). There are two new patches (5, 12) to address some similar or
related issues noticed by Peff and Junio.

I think this got better by getting rid of `CLOSE_LOCK` entirely instead
of playing with `HAS_SINGLE_BIT()`. Patch 11 also improved IMHO by
squashing patches 10-11 of v1 and presenting the issues a bit
differently in the commit message. Thanks for your feedback, both of
you.

Martin

Martin Ågren (12):
  sha1_file: do not leak `lock_file`
  treewide: prefer lockfiles on the stack
  lockfile: fix documentation on `close_lock_file_gently()`
  tempfile: fix documentation on `delete_tempfile()`
  checkout-index: simplify locking logic
  cache-tree: simplify locking logic
  apply: move lockfile into `apply_state`
  apply: remove `newfd` from `struct apply_state`
  cache.h: document `write_locked_index()`
  read-cache: drop explicit `CLOSE_LOCK`-flag
  read-cache: leave lock in right state in `write_locked_index()`
  read_cache: roll back lock in `update_index_if_able()`

 apply.c                  | 25 ++++++++-----------------
 apply.h                  |  8 +++-----
 builtin/am.c             | 27 ++++++++++++---------------
 builtin/apply.c          |  4 +---
 builtin/checkout-index.c |  8 +++-----
 builtin/checkout.c       | 14 ++++++--------
 builtin/clone.c          |  7 +++----
 builtin/commit.c         | 10 +++++-----
 builtin/diff.c           |  7 +++----
 builtin/difftool.c       |  1 -
 cache-tree.c             | 12 ++++--------
 cache.h                  | 25 ++++++++++++++++++++++++-
 config.c                 | 17 ++++++++---------
 lockfile.h               |  4 ++--
 merge-recursive.c        |  6 +++---
 merge.c                  |  8 +++-----
 read-cache.c             | 28 ++++++++++++++--------------
 sequencer.c              |  1 -
 sha1_file.c              | 19 ++++++++-----------
 tempfile.h               |  8 ++++----
 wt-status.c              |  8 ++++----
 21 files changed, 118 insertions(+), 129 deletions(-)

-- 
2.14.2.666.gea220ee40


^ permalink raw reply	[flat|nested] 69+ messages in thread

* [PATCH v2 01/12] sha1_file: do not leak `lock_file`
  2017-10-05 20:32         ` [PATCH v2 00/12] " Martin Ågren
@ 2017-10-05 20:32           ` Martin Ågren
  2017-10-06  1:17             ` Junio C Hamano
  2017-10-05 20:32           ` [PATCH v2 02/12] treewide: prefer lockfiles on the stack Martin Ågren
                             ` (10 subsequent siblings)
  11 siblings, 1 reply; 69+ messages in thread
From: Martin Ågren @ 2017-10-05 20:32 UTC (permalink / raw)
  To: git; +Cc: Jeff King, Junio C Hamano

There is no longer any need to allocate and leak a `struct lock_file`.
Initialize it on the stack instead.

Before this patch, we set `lock = NULL` to signal that we have already
rolled back, and that we should not do any more work. We need to take
another approach now that we cannot assign NULL. We could, e.g., use
`is_lock_file_locked()`. But we already have another variable that we
could use instead, `found` -- its only problem is that its scope is
too small.

Bump `found` to the scope of the whole function and rearrange the "roll
back or write?"-checks to a straightforward if-else on `found`. This
also future-proofs the code by making it obvious that we intend to take
exactly one of these paths.

Improved-by: Jeff King <peff@peff.net>
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
---
v2: Moved the rollback to the end to have an obvious if-else instead of
retaining the original logic. Thanks Peff.

 sha1_file.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/sha1_file.c b/sha1_file.c
index 5a2014811..1d1747099 100644
--- a/sha1_file.c
+++ b/sha1_file.c
@@ -456,19 +456,19 @@ struct alternate_object_database *alloc_alt_odb(const char *dir)
 
 void add_to_alternates_file(const char *reference)
 {
-	struct lock_file *lock = xcalloc(1, sizeof(struct lock_file));
+	struct lock_file lock = LOCK_INIT;
 	char *alts = git_pathdup("objects/info/alternates");
 	FILE *in, *out;
+	int found = 0;
 
-	hold_lock_file_for_update(lock, alts, LOCK_DIE_ON_ERROR);
-	out = fdopen_lock_file(lock, "w");
+	hold_lock_file_for_update(&lock, alts, LOCK_DIE_ON_ERROR);
+	out = fdopen_lock_file(&lock, "w");
 	if (!out)
 		die_errno("unable to fdopen alternates lockfile");
 
 	in = fopen(alts, "r");
 	if (in) {
 		struct strbuf line = STRBUF_INIT;
-		int found = 0;
 
 		while (strbuf_getline(&line, in) != EOF) {
 			if (!strcmp(reference, line.buf)) {
@@ -480,18 +480,15 @@ void add_to_alternates_file(const char *reference)
 
 		strbuf_release(&line);
 		fclose(in);
-
-		if (found) {
-			rollback_lock_file(lock);
-			lock = NULL;
-		}
 	}
 	else if (errno != ENOENT)
 		die_errno("unable to read alternates file");
 
-	if (lock) {
+	if (found) {
+		rollback_lock_file(&lock);
+	} else {
 		fprintf_or_die(out, "%s\n", reference);
-		if (commit_lock_file(lock))
+		if (commit_lock_file(&lock))
 			die_errno("unable to move new alternates file into place");
 		if (alt_odb_tail)
 			link_alt_odb_entries(reference, '\n', NULL, 0);
-- 
2.14.2.666.gea220ee40


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v2 02/12] treewide: prefer lockfiles on the stack
  2017-10-05 20:32         ` [PATCH v2 00/12] " Martin Ågren
  2017-10-05 20:32           ` [PATCH v2 01/12] sha1_file: do not leak `lock_file` Martin Ågren
@ 2017-10-05 20:32           ` Martin Ågren
  2017-10-05 20:32           ` [PATCH v2 03/12] lockfile: fix documentation on `close_lock_file_gently()` Martin Ågren
                             ` (9 subsequent siblings)
  11 siblings, 0 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-05 20:32 UTC (permalink / raw)
  To: git; +Cc: Jeff King, Junio C Hamano

There is no longer any need to allocate and leak a `struct lock_file`.
The previous patch addressed an instance where we needed a minor tweak
alongside the trivial changes.

Deal with the remaining instances where we allocate and leak a struct
within a single function. Change them to have the `struct lock_file` on
the stack instead.

These instances were identified by running `git grep "^\s*struct
lock_file\s*\*"`.

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
---
v2: Identical.

 builtin/am.c       | 24 +++++++++++-------------
 builtin/checkout.c | 14 ++++++--------
 builtin/clone.c    |  7 +++----
 builtin/diff.c     |  7 +++----
 config.c           | 17 ++++++++---------
 merge-recursive.c  |  6 +++---
 merge.c            |  8 ++++----
 wt-status.c        |  8 ++++----
 8 files changed, 42 insertions(+), 49 deletions(-)

diff --git a/builtin/am.c b/builtin/am.c
index d7513f537..4e16fd428 100644
--- a/builtin/am.c
+++ b/builtin/am.c
@@ -1134,11 +1134,11 @@ static const char *msgnum(const struct am_state *state)
  */
 static void refresh_and_write_cache(void)
 {
-	struct lock_file *lock_file = xcalloc(1, sizeof(struct lock_file));
+	struct lock_file lock_file = LOCK_INIT;
 
-	hold_locked_index(lock_file, LOCK_DIE_ON_ERROR);
+	hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
 	refresh_cache(REFRESH_QUIET);
-	if (write_locked_index(&the_index, lock_file, COMMIT_LOCK))
+	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
 		die(_("unable to write index file"));
 }
 
@@ -1946,15 +1946,14 @@ static void am_resolve(struct am_state *state)
  */
 static int fast_forward_to(struct tree *head, struct tree *remote, int reset)
 {
-	struct lock_file *lock_file;
+	struct lock_file lock_file = LOCK_INIT;
 	struct unpack_trees_options opts;
 	struct tree_desc t[2];
 
 	if (parse_tree(head) || parse_tree(remote))
 		return -1;
 
-	lock_file = xcalloc(1, sizeof(struct lock_file));
-	hold_locked_index(lock_file, LOCK_DIE_ON_ERROR);
+	hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
 
 	refresh_cache(REFRESH_QUIET);
 
@@ -1970,11 +1969,11 @@ static int fast_forward_to(struct tree *head, struct tree *remote, int reset)
 	init_tree_desc(&t[1], remote->buffer, remote->size);
 
 	if (unpack_trees(2, t, &opts)) {
-		rollback_lock_file(lock_file);
+		rollback_lock_file(&lock_file);
 		return -1;
 	}
 
-	if (write_locked_index(&the_index, lock_file, COMMIT_LOCK))
+	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
 		die(_("unable to write new index file"));
 
 	return 0;
@@ -1986,15 +1985,14 @@ static int fast_forward_to(struct tree *head, struct tree *remote, int reset)
  */
 static int merge_tree(struct tree *tree)
 {
-	struct lock_file *lock_file;
+	struct lock_file lock_file = LOCK_INIT;
 	struct unpack_trees_options opts;
 	struct tree_desc t[1];
 
 	if (parse_tree(tree))
 		return -1;
 
-	lock_file = xcalloc(1, sizeof(struct lock_file));
-	hold_locked_index(lock_file, LOCK_DIE_ON_ERROR);
+	hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
 
 	memset(&opts, 0, sizeof(opts));
 	opts.head_idx = 1;
@@ -2005,11 +2003,11 @@ static int merge_tree(struct tree *tree)
 	init_tree_desc(&t[0], tree->buffer, tree->size);
 
 	if (unpack_trees(1, t, &opts)) {
-		rollback_lock_file(lock_file);
+		rollback_lock_file(&lock_file);
 		return -1;
 	}
 
-	if (write_locked_index(&the_index, lock_file, COMMIT_LOCK))
+	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
 		die(_("unable to write new index file"));
 
 	return 0;
diff --git a/builtin/checkout.c b/builtin/checkout.c
index 3345a0d16..34c3c5b61 100644
--- a/builtin/checkout.c
+++ b/builtin/checkout.c
@@ -247,7 +247,7 @@ static int checkout_paths(const struct checkout_opts *opts,
 	struct object_id rev;
 	struct commit *head;
 	int errs = 0;
-	struct lock_file *lock_file;
+	struct lock_file lock_file = LOCK_INIT;
 
 	if (opts->track != BRANCH_TRACK_UNSPECIFIED)
 		die(_("'%s' cannot be used with updating paths"), "--track");
@@ -275,9 +275,7 @@ static int checkout_paths(const struct checkout_opts *opts,
 		return run_add_interactive(revision, "--patch=checkout",
 					   &opts->pathspec);
 
-	lock_file = xcalloc(1, sizeof(struct lock_file));
-
-	hold_locked_index(lock_file, LOCK_DIE_ON_ERROR);
+	hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
 	if (read_cache_preload(&opts->pathspec) < 0)
 		return error(_("index file corrupt"));
 
@@ -376,7 +374,7 @@ static int checkout_paths(const struct checkout_opts *opts,
 	}
 	errs |= finish_delayed_checkout(&state);
 
-	if (write_locked_index(&the_index, lock_file, COMMIT_LOCK))
+	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
 		die(_("unable to write new index file"));
 
 	read_ref_full("HEAD", 0, rev.hash, NULL);
@@ -472,9 +470,9 @@ static int merge_working_tree(const struct checkout_opts *opts,
 			      int *writeout_error)
 {
 	int ret;
-	struct lock_file *lock_file = xcalloc(1, sizeof(struct lock_file));
+	struct lock_file lock_file = LOCK_INIT;
 
-	hold_locked_index(lock_file, LOCK_DIE_ON_ERROR);
+	hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
 	if (read_cache_preload(NULL) < 0)
 		return error(_("index file corrupt"));
 
@@ -591,7 +589,7 @@ static int merge_working_tree(const struct checkout_opts *opts,
 	if (!cache_tree_fully_valid(active_cache_tree))
 		cache_tree_update(&the_index, WRITE_TREE_SILENT | WRITE_TREE_REPAIR);
 
-	if (write_locked_index(&the_index, lock_file, COMMIT_LOCK))
+	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
 		die(_("unable to write new index file"));
 
 	if (!opts->force && !opts->quiet)
diff --git a/builtin/clone.c b/builtin/clone.c
index dbddd98f8..96a3aaaa1 100644
--- a/builtin/clone.c
+++ b/builtin/clone.c
@@ -706,7 +706,7 @@ static int checkout(int submodule_progress)
 {
 	struct object_id oid;
 	char *head;
-	struct lock_file *lock_file;
+	struct lock_file lock_file = LOCK_INIT;
 	struct unpack_trees_options opts;
 	struct tree *tree;
 	struct tree_desc t;
@@ -733,8 +733,7 @@ static int checkout(int submodule_progress)
 	/* We need to be in the new work tree for the checkout */
 	setup_work_tree();
 
-	lock_file = xcalloc(1, sizeof(struct lock_file));
-	hold_locked_index(lock_file, LOCK_DIE_ON_ERROR);
+	hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
 
 	memset(&opts, 0, sizeof opts);
 	opts.update = 1;
@@ -750,7 +749,7 @@ static int checkout(int submodule_progress)
 	if (unpack_trees(1, &t, &opts) < 0)
 		die(_("unable to checkout working tree"));
 
-	if (write_locked_index(&the_index, lock_file, COMMIT_LOCK))
+	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
 		die(_("unable to write new index file"));
 
 	err |= run_hook_le(NULL, "post-checkout", sha1_to_hex(null_sha1),
diff --git a/builtin/diff.c b/builtin/diff.c
index 7e3ebcea3..91995965d 100644
--- a/builtin/diff.c
+++ b/builtin/diff.c
@@ -203,17 +203,16 @@ static int builtin_diff_combined(struct rev_info *revs,
 
 static void refresh_index_quietly(void)
 {
-	struct lock_file *lock_file;
+	struct lock_file lock_file = LOCK_INIT;
 	int fd;
 
-	lock_file = xcalloc(1, sizeof(struct lock_file));
-	fd = hold_locked_index(lock_file, 0);
+	fd = hold_locked_index(&lock_file, 0);
 	if (fd < 0)
 		return;
 	discard_cache();
 	read_cache();
 	refresh_cache(REFRESH_QUIET|REFRESH_UNMERGED);
-	update_index_if_able(&the_index, lock_file);
+	update_index_if_able(&the_index, &lock_file);
 }
 
 static int builtin_diff_files(struct rev_info *revs, int argc, const char **argv)
diff --git a/config.c b/config.c
index 345d78c2b..0e61e863d 100644
--- a/config.c
+++ b/config.c
@@ -2748,7 +2748,7 @@ int git_config_rename_section_in_file(const char *config_filename,
 {
 	int ret = 0, remove = 0;
 	char *filename_buf = NULL;
-	struct lock_file *lock;
+	struct lock_file lock = LOCK_INIT;
 	int out_fd;
 	char buf[1024];
 	FILE *config_file = NULL;
@@ -2762,8 +2762,7 @@ int git_config_rename_section_in_file(const char *config_filename,
 	if (!config_filename)
 		config_filename = filename_buf = git_pathdup("config");
 
-	lock = xcalloc(1, sizeof(struct lock_file));
-	out_fd = hold_lock_file_for_update(lock, config_filename, 0);
+	out_fd = hold_lock_file_for_update(&lock, config_filename, 0);
 	if (out_fd < 0) {
 		ret = error("could not lock config file %s", config_filename);
 		goto out;
@@ -2782,9 +2781,9 @@ int git_config_rename_section_in_file(const char *config_filename,
 		goto out;
 	}
 
-	if (chmod(get_lock_file_path(lock), st.st_mode & 07777) < 0) {
+	if (chmod(get_lock_file_path(&lock), st.st_mode & 07777) < 0) {
 		ret = error_errno("chmod on %s failed",
-				  get_lock_file_path(lock));
+				  get_lock_file_path(&lock));
 		goto out;
 	}
 
@@ -2805,7 +2804,7 @@ int git_config_rename_section_in_file(const char *config_filename,
 				}
 				store.baselen = strlen(new_name);
 				if (write_section(out_fd, new_name) < 0) {
-					ret = write_error(get_lock_file_path(lock));
+					ret = write_error(get_lock_file_path(&lock));
 					goto out;
 				}
 				/*
@@ -2831,20 +2830,20 @@ int git_config_rename_section_in_file(const char *config_filename,
 			continue;
 		length = strlen(output);
 		if (write_in_full(out_fd, output, length) < 0) {
-			ret = write_error(get_lock_file_path(lock));
+			ret = write_error(get_lock_file_path(&lock));
 			goto out;
 		}
 	}
 	fclose(config_file);
 	config_file = NULL;
 commit_and_out:
-	if (commit_lock_file(lock) < 0)
+	if (commit_lock_file(&lock) < 0)
 		ret = error_errno("could not write config file %s",
 				  config_filename);
 out:
 	if (config_file)
 		fclose(config_file);
-	rollback_lock_file(lock);
+	rollback_lock_file(&lock);
 out_no_rollback:
 	free(filename_buf);
 	return ret;
diff --git a/merge-recursive.c b/merge-recursive.c
index 1d3f8f0d2..24c5c26a6 100644
--- a/merge-recursive.c
+++ b/merge-recursive.c
@@ -2162,7 +2162,7 @@ int merge_recursive_generic(struct merge_options *o,
 			    struct commit **result)
 {
 	int clean;
-	struct lock_file *lock = xcalloc(1, sizeof(struct lock_file));
+	struct lock_file lock = LOCK_INIT;
 	struct commit *head_commit = get_ref(head, o->branch1);
 	struct commit *next_commit = get_ref(merge, o->branch2);
 	struct commit_list *ca = NULL;
@@ -2178,14 +2178,14 @@ int merge_recursive_generic(struct merge_options *o,
 		}
 	}
 
-	hold_locked_index(lock, LOCK_DIE_ON_ERROR);
+	hold_locked_index(&lock, LOCK_DIE_ON_ERROR);
 	clean = merge_recursive(o, head_commit, next_commit, ca,
 			result);
 	if (clean < 0)
 		return clean;
 
 	if (active_cache_changed &&
-	    write_locked_index(&the_index, lock, COMMIT_LOCK))
+	    write_locked_index(&the_index, &lock, COMMIT_LOCK))
 		return err(o, _("Unable to write index."));
 
 	return clean ? 0 : 1;
diff --git a/merge.c b/merge.c
index 1d441ad94..a18a452b5 100644
--- a/merge.c
+++ b/merge.c
@@ -53,11 +53,11 @@ int checkout_fast_forward(const struct object_id *head,
 	struct tree_desc t[MAX_UNPACK_TREES];
 	int i, nr_trees = 0;
 	struct dir_struct dir;
-	struct lock_file *lock_file = xcalloc(1, sizeof(struct lock_file));
+	struct lock_file lock_file = LOCK_INIT;
 
 	refresh_cache(REFRESH_QUIET);
 
-	if (hold_locked_index(lock_file, LOCK_REPORT_ON_ERROR) < 0)
+	if (hold_locked_index(&lock_file, LOCK_REPORT_ON_ERROR) < 0)
 		return -1;
 
 	memset(&trees, 0, sizeof(trees));
@@ -91,8 +91,8 @@ int checkout_fast_forward(const struct object_id *head,
 	}
 	if (unpack_trees(nr_trees, t, &opts))
 		return -1;
-	if (write_locked_index(&the_index, lock_file, COMMIT_LOCK)) {
-		rollback_lock_file(lock_file);
+	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK)) {
+		rollback_lock_file(&lock_file);
 		return error(_("unable to write new index file"));
 	}
 	return 0;
diff --git a/wt-status.c b/wt-status.c
index 6f730ee8f..7d88e7ca8 100644
--- a/wt-status.c
+++ b/wt-status.c
@@ -2299,14 +2299,14 @@ int has_uncommitted_changes(int ignore_submodules)
  */
 int require_clean_work_tree(const char *action, const char *hint, int ignore_submodules, int gently)
 {
-	struct lock_file *lock_file = xcalloc(1, sizeof(*lock_file));
+	struct lock_file lock_file = LOCK_INIT;
 	int err = 0, fd;
 
-	fd = hold_locked_index(lock_file, 0);
+	fd = hold_locked_index(&lock_file, 0);
 	refresh_cache(REFRESH_QUIET);
 	if (0 <= fd)
-		update_index_if_able(&the_index, lock_file);
-	rollback_lock_file(lock_file);
+		update_index_if_able(&the_index, &lock_file);
+	rollback_lock_file(&lock_file);
 
 	if (has_unstaged_changes(ignore_submodules)) {
 		/* TRANSLATORS: the action is e.g. "pull with rebase" */
-- 
2.14.2.666.gea220ee40


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v2 03/12] lockfile: fix documentation on `close_lock_file_gently()`
  2017-10-05 20:32         ` [PATCH v2 00/12] " Martin Ågren
  2017-10-05 20:32           ` [PATCH v2 01/12] sha1_file: do not leak `lock_file` Martin Ågren
  2017-10-05 20:32           ` [PATCH v2 02/12] treewide: prefer lockfiles on the stack Martin Ågren
@ 2017-10-05 20:32           ` Martin Ågren
  2017-10-05 20:32           ` [PATCH v2 04/12] tempfile: fix documentation on `delete_tempfile()` Martin Ågren
                             ` (8 subsequent siblings)
  11 siblings, 0 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-05 20:32 UTC (permalink / raw)
  To: git; +Cc: Jeff King, Junio C Hamano

Commit 83a3069a3 (lockfile: do not rollback lock on failed close,
2017-09-05) forgot to update the documentation by the function definition
to reflect that the lock is not rolled back in case closing fails.

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
---
v2: Identical.

 lockfile.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/lockfile.h b/lockfile.h
index 7c1c484d7..f401c979f 100644
--- a/lockfile.h
+++ b/lockfile.h
@@ -240,8 +240,8 @@ extern char *get_locked_file_path(struct lock_file *lk);
  * If the lockfile is still open, close it (and the file pointer if it
  * has been opened using `fdopen_lock_file()`) without renaming the
  * lockfile over the file being locked. Return 0 upon success. On
- * failure to `close(2)`, return a negative value and roll back the
- * lock file. Usually `commit_lock_file()`, `commit_lock_file_to()`,
+ * failure to `close(2)`, return a negative value (the lockfile is not
+ * rolled back). Usually `commit_lock_file()`, `commit_lock_file_to()`,
  * or `rollback_lock_file()` should eventually be called.
  */
 static inline int close_lock_file_gently(struct lock_file *lk)
-- 
2.14.2.666.gea220ee40


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v2 04/12] tempfile: fix documentation on `delete_tempfile()`
  2017-10-05 20:32         ` [PATCH v2 00/12] " Martin Ågren
                             ` (2 preceding siblings ...)
  2017-10-05 20:32           ` [PATCH v2 03/12] lockfile: fix documentation on `close_lock_file_gently()` Martin Ågren
@ 2017-10-05 20:32           ` Martin Ågren
  2017-10-05 20:32           ` [PATCH v2 05/12] checkout-index: simplify locking logic Martin Ågren
                             ` (7 subsequent siblings)
  11 siblings, 0 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-05 20:32 UTC (permalink / raw)
  To: git; +Cc: Michael Haggerty, Jeff King, Junio C Hamano

The function has always been documented as returning 0 or -1. It is in
fact `void`. Correct that. As part of the rearrangements we lose the
mention that `delete_tempfile()` might set `errno`. Because there is
no return value, the user can't really know whether it did anyway.

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
---
v2: Identical.

 tempfile.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/tempfile.h b/tempfile.h
index b8f4b5e14..450908b2e 100644
--- a/tempfile.h
+++ b/tempfile.h
@@ -68,10 +68,10 @@
  * `create_tempfile()` returns an allocated tempfile on success or NULL
  * on failure. On errors, `errno` describes the reason for failure.
  *
- * `delete_tempfile()`, `rename_tempfile()`, and `close_tempfile_gently()`
- * return 0 on success. On failure they set `errno` appropriately and return
- * -1. `delete` and `rename` (but not `close`) do their best to delete the
- * temporary file before returning.
+ * `rename_tempfile()` and `close_tempfile_gently()` return 0 on success.
+ * On failure they set `errno` appropriately and return -1.
+ * `delete_tempfile()` and `rename` (but not `close`) do their best to
+ * delete the temporary file before returning.
  */
 
 struct tempfile {
-- 
2.14.2.666.gea220ee40


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v2 05/12] checkout-index: simplify locking logic
  2017-10-05 20:32         ` [PATCH v2 00/12] " Martin Ågren
                             ` (3 preceding siblings ...)
  2017-10-05 20:32           ` [PATCH v2 04/12] tempfile: fix documentation on `delete_tempfile()` Martin Ågren
@ 2017-10-05 20:32           ` Martin Ågren
  2017-10-06  1:21             ` Junio C Hamano
  2017-10-05 20:32           ` [PATCH v2 06/12] cache-tree: " Martin Ågren
                             ` (6 subsequent siblings)
  11 siblings, 1 reply; 69+ messages in thread
From: Martin Ågren @ 2017-10-05 20:32 UTC (permalink / raw)
  To: git; +Cc: Jeff King, Junio C Hamano

`newfd` starts out negative. If we then take the lock, `newfd` will
become non-negative. We later check for exactly that property before
calling `write_locked_index()`. That is, we are simply using `newfd` as
a boolean to keep track of whether we took the lock or not. (We always
use `newfd` and `lock_file` together, so they really are mirroring each
other.)

Drop `newfd` and check with `is_lock_file_locked()` instead. While at
it, move the `static struct lock_file` into `cmd_checkout_index()` and
make it non-static. It is only used in this function, and after
076aa2cbd (tempfile: auto-allocate tempfiles on heap, 2017-09-05), we
can have lockfiles on the stack.
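
In other words, the resulting pattern is roughly this (a simplified
sketch against the lockfile API, with made-up names; not the exact
code):

#include "cache.h"
#include "lockfile.h"

static void maybe_write_index(int need_lock)
{
        struct lock_file lock_file = LOCK_INIT;

        if (need_lock)
                hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);

        /* ... work that may update the_index ... */

        /* The lock itself now tells us whether we took it. */
        if (is_lock_file_locked(&lock_file) &&
            write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
                die("Unable to write new index file");
}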

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
---
v2: New patch.

 builtin/checkout-index.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/builtin/checkout-index.c b/builtin/checkout-index.c
index 39c8be05d..b0e78b819 100644
--- a/builtin/checkout-index.c
+++ b/builtin/checkout-index.c
@@ -129,8 +129,6 @@ static const char * const builtin_checkout_index_usage[] = {
 	NULL
 };
 
-static struct lock_file lock_file;
-
 static int option_parse_stage(const struct option *opt,
 			      const char *arg, int unset)
 {
@@ -150,7 +148,7 @@ static int option_parse_stage(const struct option *opt,
 int cmd_checkout_index(int argc, const char **argv, const char *prefix)
 {
 	int i;
-	int newfd = -1;
+	struct lock_file lock_file = LOCK_INIT;
 	int all = 0;
 	int read_from_stdin = 0;
 	int prefix_length;
@@ -206,7 +204,7 @@ int cmd_checkout_index(int argc, const char **argv, const char *prefix)
 	if (index_opt && !state.base_dir_len && !to_tempfile) {
 		state.refresh_cache = 1;
 		state.istate = &the_index;
-		newfd = hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
+		hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
 	}
 
 	/* Check out named files first */
@@ -251,7 +249,7 @@ int cmd_checkout_index(int argc, const char **argv, const char *prefix)
 	if (all)
 		checkout_all(prefix, prefix_length);
 
-	if (0 <= newfd &&
+	if (is_lock_file_locked(&lock_file) &&
 	    write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
 		die("Unable to write new index file");
 	return 0;
-- 
2.14.2.666.gea220ee40


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v2 06/12] cache-tree: simplify locking logic
  2017-10-05 20:32         ` [PATCH v2 00/12] " Martin Ågren
                             ` (4 preceding siblings ...)
  2017-10-05 20:32           ` [PATCH v2 05/12] checkout-index: simplify locking logic Martin Ågren
@ 2017-10-05 20:32           ` Martin Ågren
  2017-10-05 20:32           ` [PATCH v2 07/12] apply: move lockfile into `apply_state` Martin Ågren
                             ` (5 subsequent siblings)
  11 siblings, 0 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-05 20:32 UTC (permalink / raw)
  To: git; +Cc: Jeff King, Paul Tan, Junio C Hamano

After we have taken the lock using `LOCK_DIE_ON_ERROR`, we know that
`newfd` is non-negative. So when we check for exactly that property
before calling `write_locked_index()`, the outcome is guaranteed.

If we write and commit successfully, we set `newfd = -1`, so that we can
later avoid calling `rollback_lock_file` on an already-committed lock.
But we might just as well unconditionally call `rollback_lock_file()` --
it will be a no-op if we have already committed.

All in all, we use `newfd` as a bool and the only benefit we get from it
is that we can avoid calling a no-op. Remove `newfd` so that we have one
variable less to reason about.
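
The simplified flow then looks something like this (a sketch, not the
actual function):

#include "cache.h"
#include "lockfile.h"

static void write_tree_sketch(struct index_state *istate, const char *index_path)
{
        struct lock_file lock_file = LOCK_INIT;

        hold_lock_file_for_update(&lock_file, index_path, LOCK_DIE_ON_ERROR);

        /* ... read the index and update the cache-tree ... */

        /* Failing to write is fine; the next caller simply recomputes. */
        write_locked_index(istate, &lock_file, COMMIT_LOCK);

        /* A no-op if write_locked_index() already committed the lock. */
        rollback_lock_file(&lock_file);
}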

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
---
v2: Identical.

 cache-tree.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/cache-tree.c b/cache-tree.c
index 71d092ed5..f646f5673 100644
--- a/cache-tree.c
+++ b/cache-tree.c
@@ -602,11 +602,11 @@ static struct cache_tree *cache_tree_find(struct cache_tree *it, const char *pat
 
 int write_index_as_tree(unsigned char *sha1, struct index_state *index_state, const char *index_path, int flags, const char *prefix)
 {
-	int entries, was_valid, newfd;
+	int entries, was_valid;
 	struct lock_file lock_file = LOCK_INIT;
 	int ret = 0;
 
-	newfd = hold_lock_file_for_update(&lock_file, index_path, LOCK_DIE_ON_ERROR);
+	hold_lock_file_for_update(&lock_file, index_path, LOCK_DIE_ON_ERROR);
 
 	entries = read_index_from(index_state, index_path);
 	if (entries < 0) {
@@ -625,10 +625,7 @@ int write_index_as_tree(unsigned char *sha1, struct index_state *index_state, co
 			ret = WRITE_TREE_UNMERGED_INDEX;
 			goto out;
 		}
-		if (0 <= newfd) {
-			if (!write_locked_index(index_state, &lock_file, COMMIT_LOCK))
-				newfd = -1;
-		}
+		write_locked_index(index_state, &lock_file, COMMIT_LOCK);
 		/* Not being able to write is fine -- we are only interested
 		 * in updating the cache-tree part, and if the next caller
 		 * ends up using the old index with unupdated cache-tree part
@@ -650,8 +647,7 @@ int write_index_as_tree(unsigned char *sha1, struct index_state *index_state, co
 		hashcpy(sha1, index_state->cache_tree->oid.hash);
 
 out:
-	if (0 <= newfd)
-		rollback_lock_file(&lock_file);
+	rollback_lock_file(&lock_file);
 	return ret;
 }
 
-- 
2.14.2.666.gea220ee40


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v2 07/12] apply: move lockfile into `apply_state`
  2017-10-05 20:32         ` [PATCH v2 00/12] " Martin Ågren
                             ` (5 preceding siblings ...)
  2017-10-05 20:32           ` [PATCH v2 06/12] cache-tree: " Martin Ågren
@ 2017-10-05 20:32           ` Martin Ågren
  2017-10-05 20:32           ` [PATCH v2 08/12] apply: remove `newfd` from `struct apply_state` Martin Ågren
                             ` (4 subsequent siblings)
  11 siblings, 0 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-05 20:32 UTC (permalink / raw)
  To: git; +Cc: Jeff King, Christian Couder, Junio C Hamano

We have two users of `struct apply_state` and the related functionality
in apply.c. Each user sets up its `apply_state` by handing over a
pointer to its static `lock_file`. (Before 076aa2cbd (tempfile:
auto-allocate tempfiles on heap, 2017-09-05), we could never free
lockfiles, so making them static was a reasonable approach.)

Other than that, they never directly access their `lock_file`s, which
are instead handled by the functionality in apply.c.

To make life easier for the caller and to make it less tempting for a
future caller to mess with the lock, make apply.c fully responsible for
setting up the `lock_file`. As mentioned above, it is now safe to free a
`lock_file`, so we can make the `struct apply_state` contain an actual
`struct lock_file` instead of a pointer to one.

The user in builtin/apply.c is rather simple. For builtin/am.c, we might
worry that the lock state is actually meant to be inherited across
calls. But the lock is only taken as `apply_all_patches()` executes, and
code inspection shows that it will always be released.

Alternatively, we can observe that the lock itself is never queried
directly. When we decide whether we should lock, we check a related
variable `newfd`. That variable is not inherited, so from the point of
view of apply.c, the state machine really is reset with each call to
`init_apply_state()`. (It would be a bug if `newfd` and the lock status
were not in sync. The duplication of information in `newfd` and the lock
will be addressed in the next patch.)
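
Schematically, the ownership change looks like this (an illustration
with made-up struct names, not the actual `struct apply_state`):

#include "cache.h"
#include "lockfile.h"

/* Before: the caller owned a static lock and handed in a pointer. */
struct old_state {
        struct lock_file *lock_file;    /* points at someone else's lock */
};

/* After: the state owns its lock and can be freed together with it. */
struct new_state {
        struct lock_file lock_file;     /* embedded */
};

static void init_new_state(struct new_state *state)
{
        memset(state, 0, sizeof(*state));       /* the lock starts out unlocked */
}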

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
---
v2: Identical.

 apply.c         | 14 +++++---------
 apply.h         |  5 ++---
 builtin/am.c    |  3 +--
 builtin/apply.c |  4 +---
 4 files changed, 9 insertions(+), 17 deletions(-)

diff --git a/apply.c b/apply.c
index c022af53a..5a6ca10a7 100644
--- a/apply.c
+++ b/apply.c
@@ -75,12 +75,10 @@ static int parse_ignorewhitespace_option(struct apply_state *state,
 }
 
 int init_apply_state(struct apply_state *state,
-		     const char *prefix,
-		     struct lock_file *lock_file)
+		     const char *prefix)
 {
 	memset(state, 0, sizeof(*state));
 	state->prefix = prefix;
-	state->lock_file = lock_file;
 	state->newfd = -1;
 	state->apply = 1;
 	state->line_termination = '\n';
@@ -146,8 +144,6 @@ int check_apply_state(struct apply_state *state, int force_apply)
 	}
 	if (state->check_index)
 		state->unsafe_paths = 0;
-	if (!state->lock_file)
-		return error("BUG: state->lock_file should not be NULL");
 
 	if (state->apply_verbosity <= verbosity_silent) {
 		state->saved_error_routine = get_error_routine();
@@ -4711,11 +4707,11 @@ static int apply_patch(struct apply_state *state,
 	state->update_index = state->check_index && state->apply;
 	if (state->update_index && state->newfd < 0) {
 		if (state->index_file)
-			state->newfd = hold_lock_file_for_update(state->lock_file,
+			state->newfd = hold_lock_file_for_update(&state->lock_file,
 								 state->index_file,
 								 LOCK_DIE_ON_ERROR);
 		else
-			state->newfd = hold_locked_index(state->lock_file, LOCK_DIE_ON_ERROR);
+			state->newfd = hold_locked_index(&state->lock_file, LOCK_DIE_ON_ERROR);
 	}
 
 	if (state->check_index && read_apply_cache(state) < 0) {
@@ -4911,7 +4907,7 @@ int apply_all_patches(struct apply_state *state,
 	}
 
 	if (state->update_index) {
-		res = write_locked_index(&the_index, state->lock_file, COMMIT_LOCK);
+		res = write_locked_index(&the_index, &state->lock_file, COMMIT_LOCK);
 		if (res) {
 			error(_("Unable to write new index file"));
 			res = -128;
@@ -4924,7 +4920,7 @@ int apply_all_patches(struct apply_state *state,
 
 end:
 	if (state->newfd >= 0) {
-		rollback_lock_file(state->lock_file);
+		rollback_lock_file(&state->lock_file);
 		state->newfd = -1;
 	}
 
diff --git a/apply.h b/apply.h
index d9b395770..cf00cda17 100644
--- a/apply.h
+++ b/apply.h
@@ -37,7 +37,7 @@ struct apply_state {
 	const char *prefix;
 
 	/* These are lock_file related */
-	struct lock_file *lock_file;
+	struct lock_file lock_file;
 	int newfd;
 
 	/* These control what gets looked at and modified */
@@ -116,8 +116,7 @@ extern int apply_parse_options(int argc, const char **argv,
 			       int *force_apply, int *options,
 			       const char * const *apply_usage);
 extern int init_apply_state(struct apply_state *state,
-			    const char *prefix,
-			    struct lock_file *lock_file);
+			    const char *prefix);
 extern void clear_apply_state(struct apply_state *state);
 extern int check_apply_state(struct apply_state *state, int force_apply);
 
diff --git a/builtin/am.c b/builtin/am.c
index 4e16fd428..40968428d 100644
--- a/builtin/am.c
+++ b/builtin/am.c
@@ -1488,11 +1488,10 @@ static int run_apply(const struct am_state *state, const char *index_file)
 	struct argv_array apply_opts = ARGV_ARRAY_INIT;
 	struct apply_state apply_state;
 	int res, opts_left;
-	static struct lock_file lock_file;
 	int force_apply = 0;
 	int options = 0;
 
-	if (init_apply_state(&apply_state, NULL, &lock_file))
+	if (init_apply_state(&apply_state, NULL))
 		die("BUG: init_apply_state() failed");
 
 	argv_array_push(&apply_opts, "apply");
diff --git a/builtin/apply.c b/builtin/apply.c
index 81b9a61c3..48d398933 100644
--- a/builtin/apply.c
+++ b/builtin/apply.c
@@ -9,8 +9,6 @@ static const char * const apply_usage[] = {
 	NULL
 };
 
-static struct lock_file lock_file;
-
 int cmd_apply(int argc, const char **argv, const char *prefix)
 {
 	int force_apply = 0;
@@ -18,7 +16,7 @@ int cmd_apply(int argc, const char **argv, const char *prefix)
 	int ret;
 	struct apply_state state;
 
-	if (init_apply_state(&state, prefix, &lock_file))
+	if (init_apply_state(&state, prefix))
 		exit(128);
 
 	argc = apply_parse_options(argc, argv,
-- 
2.14.2.666.gea220ee40


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v2 08/12] apply: remove `newfd` from `struct apply_state`
  2017-10-05 20:32         ` [PATCH v2 00/12] " Martin Ågren
                             ` (6 preceding siblings ...)
  2017-10-05 20:32           ` [PATCH v2 07/12] apply: move lockfile into `apply_state` Martin Ågren
@ 2017-10-05 20:32           ` Martin Ågren
  2017-10-05 20:32           ` [PATCH v2 09/12] cache.h: document `write_locked_index()` Martin Ågren
                             ` (3 subsequent siblings)
  11 siblings, 0 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-05 20:32 UTC (permalink / raw)
  To: git; +Cc: Jeff King, Christian Couder, Junio C Hamano

Similar to a previous patch, we do not need to use `newfd` to signal
that we have a lockfile to clean up. We can just unconditionally call
`rollback_lock_file`. If we do not hold the lock, it will be a no-op.

Where we check `newfd` to decide whether we need to take the lock, we
can instead use `is_lock_file_locked()`.

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
---
v2: Identical.

 apply.c | 17 ++++++-----------
 apply.h |  3 +--
 2 files changed, 7 insertions(+), 13 deletions(-)

diff --git a/apply.c b/apply.c
index 5a6ca10a7..d676debd5 100644
--- a/apply.c
+++ b/apply.c
@@ -79,7 +79,6 @@ int init_apply_state(struct apply_state *state,
 {
 	memset(state, 0, sizeof(*state));
 	state->prefix = prefix;
-	state->newfd = -1;
 	state->apply = 1;
 	state->line_termination = '\n';
 	state->p_value = 1;
@@ -4705,13 +4704,13 @@ static int apply_patch(struct apply_state *state,
 		state->apply = 0;
 
 	state->update_index = state->check_index && state->apply;
-	if (state->update_index && state->newfd < 0) {
+	if (state->update_index && !is_lock_file_locked(&state->lock_file)) {
 		if (state->index_file)
-			state->newfd = hold_lock_file_for_update(&state->lock_file,
-								 state->index_file,
-								 LOCK_DIE_ON_ERROR);
+			hold_lock_file_for_update(&state->lock_file,
+						  state->index_file,
+						  LOCK_DIE_ON_ERROR);
 		else
-			state->newfd = hold_locked_index(&state->lock_file, LOCK_DIE_ON_ERROR);
+			hold_locked_index(&state->lock_file, LOCK_DIE_ON_ERROR);
 	}
 
 	if (state->check_index && read_apply_cache(state) < 0) {
@@ -4913,16 +4912,12 @@ int apply_all_patches(struct apply_state *state,
 			res = -128;
 			goto end;
 		}
-		state->newfd = -1;
 	}
 
 	res = !!errs;
 
 end:
-	if (state->newfd >= 0) {
-		rollback_lock_file(&state->lock_file);
-		state->newfd = -1;
-	}
+	rollback_lock_file(&state->lock_file);
 
 	if (state->apply_verbosity <= verbosity_silent) {
 		set_error_routine(state->saved_error_routine);
diff --git a/apply.h b/apply.h
index cf00cda17..dc4a01905 100644
--- a/apply.h
+++ b/apply.h
@@ -36,9 +36,8 @@ enum apply_verbosity {
 struct apply_state {
 	const char *prefix;
 
-	/* These are lock_file related */
+	/* Lock file */
 	struct lock_file lock_file;
-	int newfd;
 
 	/* These control what gets looked at and modified */
 	int apply; /* this is not a dry-run */
-- 
2.14.2.666.gea220ee40


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v2 09/12] cache.h: document `write_locked_index()`
  2017-10-05 20:32         ` [PATCH v2 00/12] " Martin Ågren
                             ` (7 preceding siblings ...)
  2017-10-05 20:32           ` [PATCH v2 08/12] apply: remove `newfd` from `struct apply_state` Martin Ågren
@ 2017-10-05 20:32           ` Martin Ågren
  2017-10-05 20:32           ` [PATCH v2 10/12] read-cache: drop explicit `CLOSE_LOCK`-flag Martin Ågren
                             ` (2 subsequent siblings)
  11 siblings, 0 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-05 20:32 UTC (permalink / raw)
  To: git; +Cc: Jeff King, Nguyễn Thái Ngọc Duy, Junio C Hamano

The next patches will tweak the behavior of this function. Document it
in order to establish a basis for those patches.

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
---
v2: Tweaked to go better with the changed approach in the later patches.

 cache.h | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/cache.h b/cache.h
index ea6c236e0..e9d9556e3 100644
--- a/cache.h
+++ b/cache.h
@@ -601,9 +601,25 @@ extern int do_read_index(struct index_state *istate, const char *path,
 extern int read_index_from(struct index_state *, const char *path);
 extern int is_index_unborn(struct index_state *);
 extern int read_index_unmerged(struct index_state *);
+
+/* For use with `write_locked_index()`. */
 #define COMMIT_LOCK		(1 << 0)
 #define CLOSE_LOCK		(1 << 1)
+
+/*
+ * Write the index while holding an already-taken lock. The flags may
+ * contain at most one of `COMMIT_LOCK` and `CLOSE_LOCK`.
+ *
+ * Unless a split index is in use, write the index into the lockfile.
+ *
+ * With a split index, write the shared index to a temporary file,
+ * adjust its permissions and rename it into place, then write the
+ * split index to the lockfile. If the temporary file for the shared
+ * index cannot be created, fall back to the behavior described in
+ * the previous paragraph.
+ */
 extern int write_locked_index(struct index_state *, struct lock_file *lock, unsigned flags);
+
 extern int discard_index(struct index_state *);
 extern void move_index_extensions(struct index_state *dst, struct index_state *src);
 extern int unmerged_index(const struct index_state *);
-- 
2.14.2.666.gea220ee40


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v2 10/12] read-cache: drop explicit `CLOSE_LOCK`-flag
  2017-10-05 20:32         ` [PATCH v2 00/12] " Martin Ågren
                             ` (8 preceding siblings ...)
  2017-10-05 20:32           ` [PATCH v2 09/12] cache.h: document `write_locked_index()` Martin Ågren
@ 2017-10-05 20:32           ` Martin Ågren
  2017-10-06  1:39             ` Junio C Hamano
  2017-10-05 20:32           ` [PATCH v2 11/12] read-cache: leave lock in right state in `write_locked_index()` Martin Ågren
  2017-10-05 20:32           ` [PATCH v2 " Martin Ågren
  11 siblings, 1 reply; 69+ messages in thread
From: Martin Ågren @ 2017-10-05 20:32 UTC (permalink / raw)
  To: git; +Cc: Jeff King, Nguyễn Thái Ngọc Duy, Junio C Hamano

`write_locked_index()` takes two flags: `COMMIT_LOCK` and `CLOSE_LOCK`.
At most one is allowed, but it is also possible to use no flag, i.e.,
`0`. However, when `write_locked_index()` calls `do_write_index()`, the
temporary file, a.k.a. the lockfile, will be closed. So passing `0` is
effectively the same as `CLOSE_LOCK`, which seems like a bug.

We might feel tempted to restructure the code in order to close the file
later, or conditionally. It also feels a bit unfortunate that we simply
"happen" to close the lock by way of an implementation detail of
lockfiles. But note that we need to close the temporary file before
`stat`-ing it, at least on Windows. See 9f41c7a6b (read-cache: close
index.lock in do_write_index, 2017-04-26).

Drop `CLOSE_LOCK` and make it explicit that `write_locked_index()`
always closes the lock. Whether it is also committed is governed by the
remaining flag, `COMMIT_LOCK`.

This means we neither have nor suggest that we have a mode to write the
index and leave the file open. Whatever extra contents we might
eventually want to write, we should probably write them from within
`write_locked_index()` itself anyway.
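
For callers, the two remaining calling conventions look roughly like
this (a sketch with a made-up helper, not code from the patch):

#include "cache.h"
#include "lockfile.h"

static void write_index_sketch(struct lock_file *lock, int publish)
{
        if (publish) {
                /* Write, close and commit the lockfile in one step. */
                if (write_locked_index(&the_index, lock, COMMIT_LOCK))
                        die(_("unable to write new index file"));
        } else {
                /*
                 * Write and close the lockfile but keep it uncommitted;
                 * the caller commits or rolls it back later.
                 */
                if (write_locked_index(&the_index, lock, 0))
                        die(_("unable to create temporary index"));
        }
}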

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
---
v2: Drop `CLOSE_LOCK` altogether instead of requiring precisely one of
the flags to be set.

 builtin/commit.c | 10 +++++-----
 cache.h          |  5 ++---
 read-cache.c     | 11 +++++------
 3 files changed, 12 insertions(+), 14 deletions(-)

diff --git a/builtin/commit.c b/builtin/commit.c
index 0f8ddb686..32dc2101f 100644
--- a/builtin/commit.c
+++ b/builtin/commit.c
@@ -355,7 +355,7 @@ static const char *prepare_index(int argc, const char **argv, const char *prefix
 
 		refresh_cache_or_die(refresh_flags);
 
-		if (write_locked_index(&the_index, &index_lock, CLOSE_LOCK))
+		if (write_locked_index(&the_index, &index_lock, 0))
 			die(_("unable to create temporary index"));
 
 		old_index_env = getenv(INDEX_ENVIRONMENT);
@@ -374,7 +374,7 @@ static const char *prepare_index(int argc, const char **argv, const char *prefix
 		if (update_main_cache_tree(WRITE_TREE_SILENT) == 0) {
 			if (reopen_lock_file(&index_lock) < 0)
 				die(_("unable to write index file"));
-			if (write_locked_index(&the_index, &index_lock, CLOSE_LOCK))
+			if (write_locked_index(&the_index, &index_lock, 0))
 				die(_("unable to update temporary index"));
 		} else
 			warning(_("Failed to update main cache tree"));
@@ -401,7 +401,7 @@ static const char *prepare_index(int argc, const char **argv, const char *prefix
 		add_files_to_cache(also ? prefix : NULL, &pathspec, 0);
 		refresh_cache_or_die(refresh_flags);
 		update_main_cache_tree(WRITE_TREE_SILENT);
-		if (write_locked_index(&the_index, &index_lock, CLOSE_LOCK))
+		if (write_locked_index(&the_index, &index_lock, 0))
 			die(_("unable to write new_index file"));
 		commit_style = COMMIT_NORMAL;
 		ret = get_lock_file_path(&index_lock);
@@ -474,7 +474,7 @@ static const char *prepare_index(int argc, const char **argv, const char *prefix
 	add_remove_files(&partial);
 	refresh_cache(REFRESH_QUIET);
 	update_main_cache_tree(WRITE_TREE_SILENT);
-	if (write_locked_index(&the_index, &index_lock, CLOSE_LOCK))
+	if (write_locked_index(&the_index, &index_lock, 0))
 		die(_("unable to write new_index file"));
 
 	hold_lock_file_for_update(&false_lock,
@@ -486,7 +486,7 @@ static const char *prepare_index(int argc, const char **argv, const char *prefix
 	add_remove_files(&partial);
 	refresh_cache(REFRESH_QUIET);
 
-	if (write_locked_index(&the_index, &false_lock, CLOSE_LOCK))
+	if (write_locked_index(&the_index, &false_lock, 0))
 		die(_("unable to write temporary index file"));
 
 	discard_cache();
diff --git a/cache.h b/cache.h
index e9d9556e3..21a6856c5 100644
--- a/cache.h
+++ b/cache.h
@@ -604,11 +604,10 @@ extern int read_index_unmerged(struct index_state *);
 
 /* For use with `write_locked_index()`. */
 #define COMMIT_LOCK		(1 << 0)
-#define CLOSE_LOCK		(1 << 1)
 
 /*
- * Write the index while holding an already-taken lock. The flags may
- * contain at most one of `COMMIT_LOCK` and `CLOSE_LOCK`.
+ * Write the index while holding an already-taken lock. Close the lock,
+ * and if `COMMIT_LOCK` is given, commit it.
  *
  * Unless a split index is in use, write the index into the lockfile.
  *
diff --git a/read-cache.c b/read-cache.c
index 65f4fe837..1c917eba9 100644
--- a/read-cache.c
+++ b/read-cache.c
@@ -2343,14 +2343,13 @@ static int do_write_locked_index(struct index_state *istate, struct lock_file *l
 	int ret = do_write_index(istate, lock->tempfile, 0);
 	if (ret)
 		return ret;
-	assert((flags & (COMMIT_LOCK | CLOSE_LOCK)) !=
-	       (COMMIT_LOCK | CLOSE_LOCK));
 	if (flags & COMMIT_LOCK)
 		return commit_locked_index(lock);
-	else if (flags & CLOSE_LOCK)
-		return close_lock_file_gently(lock);
-	else
-		return ret;
+	/*
+	 * The lockfile already happens to have
+	 * been closed, but let's be specific.
+	 */
+	return close_lock_file_gently(lock);
 }
 
 static int write_split_index(struct index_state *istate,
-- 
2.14.2.666.gea220ee40


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v2 11/12] read-cache: leave lock in right state in `write_locked_index()`
  2017-10-05 20:32         ` [PATCH v2 00/12] " Martin Ågren
                             ` (9 preceding siblings ...)
  2017-10-05 20:32           ` [PATCH v2 10/12] read-cache: drop explicit `CLOSE_LOCK`-flag Martin Ågren
@ 2017-10-05 20:32           ` Martin Ågren
  2017-10-06  2:01             ` Junio C Hamano
  2017-10-05 20:32           ` [PATCH v2 " Martin Ågren
  11 siblings, 1 reply; 69+ messages in thread
From: Martin Ågren @ 2017-10-05 20:32 UTC (permalink / raw)
  To: git; +Cc: Jeff King, Nguyễn Thái Ngọc Duy, Junio C Hamano

If the original version of `write_locked_index()` returned with an
error, it didn't roll back the lockfile unless the error occurred at the
very end, during closing/committing. See commit 03b866477 (read-cache:
new API write_locked_index instead of write_index/write_cache,
2014-06-13).

In commit 9f41c7a6b (read-cache: close index.lock in do_write_index,
2017-04-26), we learned to roll back the lock slightly earlier in the
callstack, but that was mostly a side-effect of lockfiles being
implemented using temporary files.

At that point, the behavior was still mostly the same as originally,
except 1) the file was closed (and possibly rolled back) a few
CPU-instructions earlier, and 2) the file was closed even if the caller
didn't ask us to close it. Case 2) is not very interesting since we
never had any such caller and the commit before this one removed the
possibility of asking to leave the lockfile open.

Recently, commit 076aa2cbd (tempfile: auto-allocate tempfiles on heap,
2017-09-05) introduced a subtle bug. If the lockfile is rolled back
(i.e., the temporary file is deleted), the tempfile-pointer in the
`struct lock_file` will be left dangling. Thus, an attempt to reuse the
lockfile, or even just to roll it back, will induce undefined behavior
-- most likely a crash.

Besides not crashing, we clearly want to make things consistent. But
should we always roll back on error, or never? The semantics which the
lockfile machinery itself provides are A) if we ask to commit and it
fails, roll back, and B) if we ask to close and it fails, do _not_ roll
back.

We should note that commit 83a3069a3 (lockfile: do not rollback lock on
failed close, 2017-09-05) recently changed the behavior for B -- we used
to roll back. We might worry that our callers rely on us rolling back in
case of B. But we only did so for some errors, we never documented
anything, and all our in-tree callers (they are not many) `die()` in
case of an error. This is our opportunity for establishing a consistent
and predictable behavior going forward, so let's enforce the same
semantics that 83a3069a3 introduced to the lockfile machinery itself.

Similarly, let's ensure that when we are asked to commit, we always
either commit or roll back. Right now, we have some early return paths
which fail to roll back the lock.

So: Do not delete the temporary file in `do_write_index()`. One of its
callers, `write_locked_index()`, will thereby avoid rolling back the
lock. The other caller, `write_shared_index()`, will delete its
temporary file anyway. Both of these callers will avoid undefined
behavior (crashing).

Teach `write_locked_index(..., COMMIT_LOCK)` to roll back the lock
before returning. If we have already succeeded and committed, it will be
a no-op. Simplify the existing callers where we now have a superfluous
call to `rollback_lock_file()`. This should keep future readers from
wondering why the callers are inconsistent.

We still close the lock as we close the temporary file. This is what is
referred to as "1)" above. It does feel a bit unfortunate that we simply
"happen" to close the lock by way of an implementation-detail of
lockfiles. But note that we need to close the temporary file before
`stat`-ing it, at least on Windows. See 9f41c7a6b (read-cache: close
index.lock in do_write_index, 2017-04-26).
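
For a caller that passes `COMMIT_LOCK`, the simplification amounts to
this (a sketch; compare the merge.c hunk below):

#include "cache.h"
#include "lockfile.h"

/* Before: a failed commit left the lock for the caller to roll back. */
static int update_index_old(struct lock_file *lock)
{
        if (write_locked_index(&the_index, lock, COMMIT_LOCK)) {
                rollback_lock_file(lock);
                return error(_("unable to write new index file"));
        }
        return 0;
}

/* After: with COMMIT_LOCK, the lock is committed or rolled back for us. */
static int update_index_new(struct lock_file *lock)
{
        if (write_locked_index(&the_index, lock, COMMIT_LOCK))
                return error(_("unable to write new index file"));
        return 0;
}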

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
---
v2: Except for the slightly different documentation in cache.h, this is
a squash of the last two patches of v1. I hope the commit message is
better.

 builtin/difftool.c |  1 -
 cache.h            |  4 ++++
 merge.c            |  4 +---
 read-cache.c       | 14 ++++++++------
 sequencer.c        |  1 -
 5 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/builtin/difftool.c b/builtin/difftool.c
index b2d3ba753..bcc79d188 100644
--- a/builtin/difftool.c
+++ b/builtin/difftool.c
@@ -616,7 +616,6 @@ static int run_dir_diff(const char *extcmd, int symlinks, const char *prefix,
 			if (hold_lock_file_for_update(&lock, buf.buf, 0) < 0 ||
 			    write_locked_index(&wtindex, &lock, COMMIT_LOCK)) {
 				ret = error("could not write %s", buf.buf);
-				rollback_lock_file(&lock);
 				goto finish;
 			}
 			changed_files(&wt_modified, buf.buf, workdir);
diff --git a/cache.h b/cache.h
index 21a6856c5..0e26224b9 100644
--- a/cache.h
+++ b/cache.h
@@ -616,6 +616,10 @@ extern int read_index_unmerged(struct index_state *);
  * split index to the lockfile. If the temporary file for the shared
  * index cannot be created, fall back to the behavior described in
  * the previous paragraph.
+ *
+ * With `COMMIT_LOCK`, the lock is always committed or rolled back.
+ * Without it, the lock is closed, but neither committed nor rolled
+ * back.
  */
 extern int write_locked_index(struct index_state *, struct lock_file *lock, unsigned flags);
 
diff --git a/merge.c b/merge.c
index a18a452b5..e5d796c9f 100644
--- a/merge.c
+++ b/merge.c
@@ -91,9 +91,7 @@ int checkout_fast_forward(const struct object_id *head,
 	}
 	if (unpack_trees(nr_trees, t, &opts))
 		return -1;
-	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK)) {
-		rollback_lock_file(&lock_file);
+	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
 		return error(_("unable to write new index file"));
-	}
 	return 0;
 }
diff --git a/read-cache.c b/read-cache.c
index 1c917eba9..0090ab886 100644
--- a/read-cache.c
+++ b/read-cache.c
@@ -2182,9 +2182,8 @@ static int has_racy_timestamp(struct index_state *istate)
 void update_index_if_able(struct index_state *istate, struct lock_file *lockfile)
 {
 	if ((istate->cache_changed || has_racy_timestamp(istate)) &&
-	    verify_index(istate) &&
-	    write_locked_index(istate, lockfile, COMMIT_LOCK))
-		rollback_lock_file(lockfile);
+	    verify_index(istate))
+		write_locked_index(istate, lockfile, COMMIT_LOCK);
 }
 
 static int do_write_index(struct index_state *istate, struct tempfile *tempfile,
@@ -2314,7 +2313,6 @@ static int do_write_index(struct index_state *istate, struct tempfile *tempfile,
 		return -1;
 	if (close_tempfile_gently(tempfile)) {
 		error(_("could not close '%s'"), tempfile->filename.buf);
-		delete_tempfile(&tempfile);
 		return -1;
 	}
 	if (stat(tempfile->filename.buf, &st))
@@ -2498,7 +2496,8 @@ int write_locked_index(struct index_state *istate, struct lock_file *lock,
 	    (istate->cache_changed & ~EXTMASK)) {
 		if (si)
 			hashclr(si->base_sha1);
-		return do_write_locked_index(istate, lock, flags);
+		ret = do_write_locked_index(istate, lock, flags);
+		goto out;
 	}
 
 	if (getenv("GIT_TEST_SPLIT_INDEX")) {
@@ -2514,7 +2513,7 @@ int write_locked_index(struct index_state *istate, struct lock_file *lock,
 	if (new_shared_index) {
 		ret = write_shared_index(istate, lock, flags);
 		if (ret)
-			return ret;
+			goto out;
 	}
 
 	ret = write_split_index(istate, lock, flags);
@@ -2523,6 +2522,9 @@ int write_locked_index(struct index_state *istate, struct lock_file *lock,
 	if (!ret && !new_shared_index)
 		freshen_shared_index(sha1_to_hex(si->base_sha1), 1);
 
+out:
+	if (flags & COMMIT_LOCK)
+		rollback_lock_file(lock);
 	return ret;
 }
 
diff --git a/sequencer.c b/sequencer.c
index 60636ce54..d56c38081 100644
--- a/sequencer.c
+++ b/sequencer.c
@@ -1183,7 +1183,6 @@ static int read_and_refresh_cache(struct replay_opts *opts)
 	refresh_index(&the_index, REFRESH_QUIET|REFRESH_UNMERGED, NULL, NULL, NULL);
 	if (the_index.cache_changed && index_fd >= 0) {
 		if (write_locked_index(&the_index, &index_lock, COMMIT_LOCK)) {
-			rollback_lock_file(&index_lock);
 			return error(_("git %s: failed to refresh the index"),
 				_(action_name(opts)));
 		}
-- 
2.14.2.666.gea220ee40


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v2 12/12] read_cache: roll back lock in `update_index_if_able()`
  2017-10-05 20:32         ` [PATCH v2 00/12] " Martin Ågren
                             ` (10 preceding siblings ...)
  2017-10-05 20:32           ` [PATCH v2 11/12] read-cache: leave lock in right state in `write_locked_index()` Martin Ågren
@ 2017-10-05 20:32           ` Martin Ågren
  11 siblings, 0 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-05 20:32 UTC (permalink / raw)
  To: git; +Cc: Jeff King, Nguyễn Thái Ngọc Duy, Junio C Hamano

`update_index_if_able()` used to always commit the lock or roll it back.
Commit 03b866477 (read-cache: new API write_locked_index instead of
write_index/write_cache, 2014-06-13) stopped rolling it back in case a
write was not even attempted. This change in behavior is not motivated
in the commit message and appears to be accidental: the `else`-path was
removed, although that changed the behavior whenever the `if` condition
short-circuits.

Reintroduce the rollback and document this behavior. While at it, move
the documentation on this function from the function definition to the
function declaration in cache.h.

If `write_locked_index(..., COMMIT_LOCK)` fails, it will roll back the
lock for us (see the previous commit).
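
A caller-side sketch of the intended contract (the function name is
made up for illustration):

#include "cache.h"
#include "lockfile.h"

static void refresh_index_opportunistically(void)
{
        struct lock_file lock_file = LOCK_INIT;

        /* Try to take the lock, but do not insist on it. */
        if (hold_locked_index(&lock_file, 0) < 0)
                return;

        refresh_index(&the_index, REFRESH_QUIET, NULL, NULL, NULL);

        /* Commits on a successful write, rolls the lock back otherwise. */
        update_index_if_able(&the_index, &lock_file);
}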

Noticed-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
---
v2: New patch.

 cache.h      | 4 ++++
 read-cache.c | 5 ++---
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/cache.h b/cache.h
index 0e26224b9..8c2493766 100644
--- a/cache.h
+++ b/cache.h
@@ -734,6 +734,10 @@ extern void fill_stat_cache_info(struct cache_entry *ce, struct stat *st);
 extern int refresh_index(struct index_state *, unsigned int flags, const struct pathspec *pathspec, char *seen, const char *header_msg);
 extern struct cache_entry *refresh_cache_entry(struct cache_entry *, unsigned int);
 
+/*
+ * Opportunistically update the index but do not complain if we can't.
+ * The lockfile is always committed or rolled back.
+ */
 extern void update_index_if_able(struct index_state *, struct lock_file *);
 
 extern int hold_locked_index(struct lock_file *, int);
diff --git a/read-cache.c b/read-cache.c
index 0090ab886..92ea5df96 100644
--- a/read-cache.c
+++ b/read-cache.c
@@ -2176,14 +2176,13 @@ static int has_racy_timestamp(struct index_state *istate)
 	return 0;
 }
 
-/*
- * Opportunistically update the index but do not complain if we can't
- */
 void update_index_if_able(struct index_state *istate, struct lock_file *lockfile)
 {
 	if ((istate->cache_changed || has_racy_timestamp(istate)) &&
 	    verify_index(istate))
 		write_locked_index(istate, lockfile, COMMIT_LOCK);
+	else
+		rollback_lock_file(lockfile);
 }
 
 static int do_write_index(struct index_state *istate, struct tempfile *tempfile,
-- 
2.14.2.666.gea220ee40


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 01/12] sha1_file: do not leak `lock_file`
  2017-10-05 20:32           ` [PATCH v2 01/12] sha1_file: do not leak `lock_file` Martin Ågren
@ 2017-10-06  1:17             ` Junio C Hamano
  0 siblings, 0 replies; 69+ messages in thread
From: Junio C Hamano @ 2017-10-06  1:17 UTC (permalink / raw)
  To: Martin Ågren; +Cc: git, Jeff King

Martin Ågren <martin.agren@gmail.com> writes:

> Bump `found` to the scope of the whole function and rearrange the "roll
> back or write?"-checks to a straightforward if-else on `found`. This
> also future-proves the code by making it obvious that we intend to take
> exactly one of these paths.
>
> Improved-by: Jeff King <peff@peff.net>
> Signed-off-by: Martin Ågren <martin.agren@gmail.com>
> ---
> v2: Moved the rollback to the end to have an obvious if-else instead of
> retaining the original logic. Thanks Peff.

Yup, both are correct but this version is 10x easier to follow.
Thanks, both.


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 05/12] checkout-index: simplify locking logic
  2017-10-05 20:32           ` [PATCH v2 05/12] checkout-index: simplify locking logic Martin Ågren
@ 2017-10-06  1:21             ` Junio C Hamano
  0 siblings, 0 replies; 69+ messages in thread
From: Junio C Hamano @ 2017-10-06  1:21 UTC (permalink / raw)
  To: Martin Ågren; +Cc: git, Jeff King

Martin Ågren <martin.agren@gmail.com> writes:

> `newfd` starts out negative. If we then take the lock, `newfd` will
> become non-negative. We later check for exactly that property before
> calling `write_locked_index()`. That is, we are simply using `newfd` as
> a boolean to keep track of whether we took the lock or not. (We always
> use `newfd` and `lock_file` together, so they really are mirroring each
> other.)
>
> Drop `newfd` and check with `is_lock_file_locked()` instead. While at
> it, move the `static struct lock_file` into `cmd_checkout_index()` and
> make it non-static. It is only used in this function, and after
> 076aa2cbd (tempfile: auto-allocate tempfiles on heap, 2017-09-05), we
> can have lockfiles on the stack.
>
> Signed-off-by: Martin Ågren <martin.agren@gmail.com>
> ---
> v2: New patch.

Well reasoned and explained.  Thanks.

>
>  builtin/checkout-index.c | 8 +++-----
>  1 file changed, 3 insertions(+), 5 deletions(-)
>
> diff --git a/builtin/checkout-index.c b/builtin/checkout-index.c
> index 39c8be05d..b0e78b819 100644
> --- a/builtin/checkout-index.c
> +++ b/builtin/checkout-index.c
> @@ -129,8 +129,6 @@ static const char * const builtin_checkout_index_usage[] = {
>  	NULL
>  };
>  
> -static struct lock_file lock_file;
> -
>  static int option_parse_stage(const struct option *opt,
>  			      const char *arg, int unset)
>  {
> @@ -150,7 +148,7 @@ static int option_parse_stage(const struct option *opt,
>  int cmd_checkout_index(int argc, const char **argv, const char *prefix)
>  {
>  	int i;
> -	int newfd = -1;
> +	struct lock_file lock_file = LOCK_INIT;
>  	int all = 0;
>  	int read_from_stdin = 0;
>  	int prefix_length;
> @@ -206,7 +204,7 @@ int cmd_checkout_index(int argc, const char **argv, const char *prefix)
>  	if (index_opt && !state.base_dir_len && !to_tempfile) {
>  		state.refresh_cache = 1;
>  		state.istate = &the_index;
> -		newfd = hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
> +		hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
>  	}
>  
>  	/* Check out named files first */
> @@ -251,7 +249,7 @@ int cmd_checkout_index(int argc, const char **argv, const char *prefix)
>  	if (all)
>  		checkout_all(prefix, prefix_length);
>  
> -	if (0 <= newfd &&
> +	if (is_lock_file_locked(&lock_file) &&
>  	    write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
>  		die("Unable to write new index file");
>  	return 0;

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 10/12] read-cache: drop explicit `CLOSE_LOCK`-flag
  2017-10-05 20:32           ` [PATCH v2 10/12] read-cache: drop explicit `CLOSE_LOCK`-flag Martin Ågren
@ 2017-10-06  1:39             ` Junio C Hamano
  2017-10-06 11:02               ` Martin Ågren
  0 siblings, 1 reply; 69+ messages in thread
From: Junio C Hamano @ 2017-10-06  1:39 UTC (permalink / raw)
  To: Martin Ågren; +Cc: git, Jeff King, Nguyễn Thái Ngọc Duy

Martin Ågren <martin.agren@gmail.com> writes:

> diff --git a/read-cache.c b/read-cache.c
> index 65f4fe837..1c917eba9 100644
> --- a/read-cache.c
> +++ b/read-cache.c
> @@ -2343,14 +2343,13 @@ static int do_write_locked_index(struct index_state *istate, struct lock_file *l
>  	int ret = do_write_index(istate, lock->tempfile, 0);
>  	if (ret)
>  		return ret;
> -	assert((flags & (COMMIT_LOCK | CLOSE_LOCK)) !=
> -	       (COMMIT_LOCK | CLOSE_LOCK));
>  	if (flags & COMMIT_LOCK)
>  		return commit_locked_index(lock);
> -	else if (flags & CLOSE_LOCK)
> -		return close_lock_file_gently(lock);
> -	else
> -		return ret;
> +	/*
> +	 * The lockfile already happens to have
> +	 * been closed, but let's be specific.
> +	 */
> +	return close_lock_file_gently(lock);

"already happens to have been" is quite a mouthful, and is not quite
truthful, as we do not foresee ever wanting to change that (because
of that stat(2) issue you mentioned).  It might be better to declare
that do_write_index() closes the lockfile after successfully writing
the data out to it.  I dunno if that reasoning is strong enough to
remove this (extra) close, though.

When any of the ce_write() calls in do_write_index() fails, the
function returns -1 without hitting the close/stat (obviously).
Somebody very high in the callchain (e.g. write_locked_index())
would clean it up by calling rollback_lock_file() eventually, so
that would not be a problem ;-)




^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 11/12] read-cache: leave lock in right state in `write_locked_index()`
  2017-10-05 20:32           ` [PATCH v2 11/12] read-cache: leave lock in right state in `write_locked_index()` Martin Ågren
@ 2017-10-06  2:01             ` Junio C Hamano
  2017-10-06 11:04               ` Martin Ågren
  0 siblings, 1 reply; 69+ messages in thread
From: Junio C Hamano @ 2017-10-06  2:01 UTC (permalink / raw)
  To: Martin Ågren; +Cc: git, Jeff King, Nguyễn Thái Ngọc Duy

Martin Ågren <martin.agren@gmail.com> writes:

> v2: Except for the slightly different documentation in cache.h, this is
> a squash of the last two patches of v1. I hope the commit message is
> better.

Yeah, it is long ;-) but readable.

Thanks.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 10/12] read-cache: drop explicit `CLOSE_LOCK`-flag
  2017-10-06  1:39             ` Junio C Hamano
@ 2017-10-06 11:02               ` Martin Ågren
  0 siblings, 0 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-06 11:02 UTC (permalink / raw)
  To: Junio C Hamano
  Cc: Git Mailing List, Jeff King,
	Nguyễn Thái Ngọc Duy

On 6 October 2017 at 03:39, Junio C Hamano <gitster@pobox.com> wrote:
> Martin Ågren <martin.agren@gmail.com> writes:
>
>> diff --git a/read-cache.c b/read-cache.c
>> index 65f4fe837..1c917eba9 100644
>> --- a/read-cache.c
>> +++ b/read-cache.c
>> @@ -2343,14 +2343,13 @@ static int do_write_locked_index(struct index_state *istate, struct lock_file *l
>>       int ret = do_write_index(istate, lock->tempfile, 0);
>>       if (ret)
>>               return ret;
>> -     assert((flags & (COMMIT_LOCK | CLOSE_LOCK)) !=
>> -            (COMMIT_LOCK | CLOSE_LOCK));
>>       if (flags & COMMIT_LOCK)
>>               return commit_locked_index(lock);
>> -     else if (flags & CLOSE_LOCK)
>> -             return close_lock_file_gently(lock);
>> -     else
>> -             return ret;
>> +     /*
>> +      * The lockfile already happens to have
>> +      * been closed, but let's be specific.
>> +      */
>> +     return close_lock_file_gently(lock);
>
> "already happens to have been" is quite a mouthful, and is not quite
> truthful, as we do not foresee ever wanting to change that (because
> of that stat(2) issue you mentioned).  It might be better to declare
> that do_write_index() closes the lockfile after successfully writing
> the data out to it.  I dunno if that reasoning is strong enough to
> remove this (extra) close, though.
>
> When any of the ce_write() calls in do_write_index() fails, the
> function returns -1 without hitting the close/stat (obviously).
> Somebody very high in the callchain (e.g. write_locked_index())
> would clean it up by calling rollback_lock_file() eventually, so
> that would not be a problem ;-)

When I wrote this, I was too stuck in the "it gets closed accidentally"
world view. It would indeed be cleaner to specify that the close happens
in `do_write_index()`. As you say, because of the stat-ing, we simply
have to close.

It's still an implementation detail that closing the temporary file is
the same as closing the lock. We might want to refactor to hand over the
lock instead of its tempfile. Except the other caller has no suitable
lock, only a temporary file. I guess that caller could use a lock
instead, but it feels like the wrong solution to the wrong problem.

I'm sure that something could be done here to improve the cleanliness.
For this series, I think I'll document better that `do_write_index()`
closes the temporary file on success, that this might mean that it
actually closes a *lock*file, but that the latter should not be relied
upon. I'll get to this later today.

Thanks.

Martin

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 11/12] read-cache: leave lock in right state in `write_locked_index()`
  2017-10-06  2:01             ` Junio C Hamano
@ 2017-10-06 11:04               ` Martin Ågren
  2017-10-06 12:02                 ` Junio C Hamano
  0 siblings, 1 reply; 69+ messages in thread
From: Martin Ågren @ 2017-10-06 11:04 UTC (permalink / raw)
  To: Junio C Hamano
  Cc: Git Mailing List, Jeff King,
	Nguyễn Thái Ngọc Duy

On 6 October 2017 at 04:01, Junio C Hamano <gitster@pobox.com> wrote:
> Martin Ågren <martin.agren@gmail.com> writes:
>
>> v2: Except for the slightly different documentation in cache.h, this is
>> a squash of the last two patches of v1. I hope the commit message is
>> better.
>
> Yeah, it is long ;-) but readable.

"Long but readable"... Yeah. When I rework the previous patch (document
the closing-behavior of `do_write_index()`) I could address this. I
think there are several interesting details here and I'm not sure which
I'd want to leave out, but yeah, they add up...

Martin

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 11/12] read-cache: leave lock in right state in `write_locked_index()`
  2017-10-06 11:04               ` Martin Ågren
@ 2017-10-06 12:02                 ` Junio C Hamano
  2017-10-06 19:44                   ` Martin Ågren
  0 siblings, 1 reply; 69+ messages in thread
From: Junio C Hamano @ 2017-10-06 12:02 UTC (permalink / raw)
  To: Martin Ågren
  Cc: Git Mailing List, Jeff King,
	Nguyễn Thái Ngọc Duy

Martin Ågren <martin.agren@gmail.com> writes:

> On 6 October 2017 at 04:01, Junio C Hamano <gitster@pobox.com> wrote:
>> Martin Ågren <martin.agren@gmail.com> writes:
>>
>>> v2: Except for the slightly different documentation in cache.h, this is
>>> a squash of the last two patches of v1. I hope the commit message is
>>> better.
>>
>> Yeah, it is long ;-) but readable.
>
> "Long but readable"... Yeah. When I rework the previous patch (document
> the closing-behavior of `do_write_index()`) I could address this. I
> think there are several interesting details here and I'm not sure which
> I'd want to leave out, but yeah, they add up...

I didn't mean "long is bad" at all in this case.  

Certainly, from time to time we find commits with overlong
explanation that only states obvious, and they are "long and bad".
But I do not think this one falls into the same category as those.



^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 11/12] read-cache: leave lock in right state in `write_locked_index()`
  2017-10-06 12:02                 ` Junio C Hamano
@ 2017-10-06 19:44                   ` Martin Ågren
  2017-10-06 20:12                     ` [PATCH v3 00/12] Re: various lockfile-leaks and -fixes Martin Ågren
  0 siblings, 1 reply; 69+ messages in thread
From: Martin Ågren @ 2017-10-06 19:44 UTC (permalink / raw)
  To: Junio C Hamano
  Cc: Git Mailing List, Jeff King,
	Nguyễn Thái Ngọc Duy

On 6 October 2017 at 14:02, Junio C Hamano <gitster@pobox.com> wrote:
> Martin Ågren <martin.agren@gmail.com> writes:
>
>> On 6 October 2017 at 04:01, Junio C Hamano <gitster@pobox.com> wrote:
>>> Martin Ågren <martin.agren@gmail.com> writes:
>>>
>>>> v2: Except for the slightly different documentation in cache.h, this is
>>>> a squash of the last two patches of v1. I hope the commit message is
>>>> better.
>>>
>>> Yeah, it is long ;-) but readable.
>>
>> "Long but readable"... Yeah. When I rework the previous patch (document
>> the closing-behavior of `do_write_index()`) I could address this. I
>> think there are several interesting details here and I'm not sure which
>> I'd want to leave out, but yeah, they add up...
>
> I didn't mean "long is bad" at all in this case.
>
> Certainly, from time to time we find commits with overlong
> explanation that only states obvious, and they are "long and bad".
> But I do not think this one falls into the same category as those.

Ok, thanks. I've got a rerolled series running through the final checks
right now. I did end up making this log message a bit more succinct.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* [PATCH v3 00/12] Re: various lockfile-leaks and -fixes
  2017-10-06 19:44                   ` Martin Ågren
@ 2017-10-06 20:12                     ` Martin Ågren
  2017-10-06 20:12                       ` [PATCH v3 01/12] sha1_file: do not leak `lock_file` Martin Ågren
                                         ` (11 more replies)
  0 siblings, 12 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-06 20:12 UTC (permalink / raw)
  To: git
  Cc: Michael Haggerty, Jeff King, Paul Tan, Christian Couder,
	Nguyễn Thái Ngọc Duy, Junio C Hamano

On 6 October 2017 at 21:44, Martin Ågren <martin.agren@gmail.com> wrote:
> Ok, thanks. I've got a rerolled series running through the final checks
> right now. I did end up making this log message a bit more succinct.

This is v3 of this series. All patches are identical to
ma/lockfile-fixes in Junio's tree, except 10 and 11.

Martin Ågren (12):
  sha1_file: do not leak `lock_file`
  treewide: prefer lockfiles on the stack
  lockfile: fix documentation on `close_lock_file_gently()`
  tempfile: fix documentation on `delete_tempfile()`
  checkout-index: simplify locking logic
  cache-tree: simplify locking logic
  apply: move lockfile into `apply_state`
  apply: remove `newfd` from `struct apply_state`
  cache.h: document `write_locked_index()`
  read-cache: drop explicit `CLOSE_LOCK`-flag
  read-cache: leave lock in right state in `write_locked_index()`
  read_cache: roll back lock in `update_index_if_able()`

 apply.c                  | 25 ++++++++-----------------
 apply.h                  |  8 +++-----
 builtin/am.c             | 27 ++++++++++++---------------
 builtin/apply.c          |  4 +---
 builtin/checkout-index.c |  8 +++-----
 builtin/checkout.c       | 14 ++++++--------
 builtin/clone.c          |  7 +++----
 builtin/commit.c         | 10 +++++-----
 builtin/diff.c           |  7 +++----
 builtin/difftool.c       |  1 -
 cache-tree.c             | 12 ++++--------
 cache.h                  | 25 ++++++++++++++++++++++++-
 config.c                 | 17 ++++++++---------
 lockfile.h               |  4 ++--
 merge-recursive.c        |  6 +++---
 merge.c                  |  8 +++-----
 read-cache.c             | 31 +++++++++++++++++--------------
 sequencer.c              |  1 -
 sha1_file.c              | 19 ++++++++-----------
 tempfile.h               |  8 ++++----
 wt-status.c              |  8 ++++----
 21 files changed, 121 insertions(+), 129 deletions(-)

-- 
2.15.0.rc0


^ permalink raw reply	[flat|nested] 69+ messages in thread

* [PATCH v3 01/12] sha1_file: do not leak `lock_file`
  2017-10-06 20:12                     ` [PATCH v3 00/12] Re: various lockfile-leaks and -fixes Martin Ågren
@ 2017-10-06 20:12                       ` Martin Ågren
  2017-10-06 20:12                       ` [PATCH v3 02/12] treewide: prefer lockfiles on the stack Martin Ågren
                                         ` (10 subsequent siblings)
  11 siblings, 0 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-06 20:12 UTC (permalink / raw)
  To: git
  Cc: Michael Haggerty, Jeff King, Paul Tan, Christian Couder,
	Nguyễn Thái Ngọc Duy, Junio C Hamano

There is no longer any need to allocate and leak a `struct lock_file`.
Initialize it on the stack instead.

Before this patch, we set `lock = NULL` to signal that we have already
rolled back, and that we should not do any more work. We need to take
another approach now that we cannot assign NULL. We could, e.g., use
`is_lock_file_locked()`. But we already have another variable that we
could use instead, `found`. Its scope is only too small.

Bump `found` to the scope of the whole function and rearrange the "roll
back or write?"-checks to a straightforward if-else on `found`. This
also future-proofs the code by making it obvious that we intend to take
exactly one of these paths.

Improved-by: Jeff King <peff@peff.net>
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
---
 sha1_file.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/sha1_file.c b/sha1_file.c
index 5a2014811..1d1747099 100644
--- a/sha1_file.c
+++ b/sha1_file.c
@@ -456,19 +456,19 @@ struct alternate_object_database *alloc_alt_odb(const char *dir)
 
 void add_to_alternates_file(const char *reference)
 {
-	struct lock_file *lock = xcalloc(1, sizeof(struct lock_file));
+	struct lock_file lock = LOCK_INIT;
 	char *alts = git_pathdup("objects/info/alternates");
 	FILE *in, *out;
+	int found = 0;
 
-	hold_lock_file_for_update(lock, alts, LOCK_DIE_ON_ERROR);
-	out = fdopen_lock_file(lock, "w");
+	hold_lock_file_for_update(&lock, alts, LOCK_DIE_ON_ERROR);
+	out = fdopen_lock_file(&lock, "w");
 	if (!out)
 		die_errno("unable to fdopen alternates lockfile");
 
 	in = fopen(alts, "r");
 	if (in) {
 		struct strbuf line = STRBUF_INIT;
-		int found = 0;
 
 		while (strbuf_getline(&line, in) != EOF) {
 			if (!strcmp(reference, line.buf)) {
@@ -480,18 +480,15 @@ void add_to_alternates_file(const char *reference)
 
 		strbuf_release(&line);
 		fclose(in);
-
-		if (found) {
-			rollback_lock_file(lock);
-			lock = NULL;
-		}
 	}
 	else if (errno != ENOENT)
 		die_errno("unable to read alternates file");
 
-	if (lock) {
+	if (found) {
+		rollback_lock_file(&lock);
+	} else {
 		fprintf_or_die(out, "%s\n", reference);
-		if (commit_lock_file(lock))
+		if (commit_lock_file(&lock))
 			die_errno("unable to move new alternates file into place");
 		if (alt_odb_tail)
 			link_alt_odb_entries(reference, '\n', NULL, 0);
-- 
2.15.0.rc0


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v3 02/12] treewide: prefer lockfiles on the stack
  2017-10-06 20:12                     ` [PATCH v3 00/12] Re: various lockfile-leaks and -fixes Martin Ågren
  2017-10-06 20:12                       ` [PATCH v3 01/12] sha1_file: do not leak `lock_file` Martin Ågren
@ 2017-10-06 20:12                       ` Martin Ågren
  2017-10-06 20:12                       ` [PATCH v3 03/12] lockfile: fix documentation on `close_lock_file_gently()` Martin Ågren
                                         ` (9 subsequent siblings)
  11 siblings, 0 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-06 20:12 UTC (permalink / raw)
  To: git
  Cc: Michael Haggerty, Jeff King, Paul Tan, Christian Couder,
	Nguyễn Thái Ngọc Duy, Junio C Hamano

There is no longer any need to allocate and leak a `struct lock_file`.
The previous patch addressed an instance where we needed a minor tweak
alongside the trivial changes.

Deal with the remaining instances where we allocate and leak a struct
within a single function. Change them to have the `struct lock_file` on
the stack instead.

These instances were identified by running `git grep "^\s*struct
lock_file\s*\*"`.
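
The conversion itself is mechanical; schematically (a sketch, compare
the hunks below):

#include "cache.h"
#include "lockfile.h"

/* Before: allocated once and intentionally leaked. */
static void write_index_heap_lock(void)
{
        struct lock_file *lock_file = xcalloc(1, sizeof(struct lock_file));

        hold_locked_index(lock_file, LOCK_DIE_ON_ERROR);
        if (write_locked_index(&the_index, lock_file, COMMIT_LOCK))
                die(_("unable to write index file"));
}

/* After: an ordinary automatic variable, initialized with LOCK_INIT. */
static void write_index_stack_lock(void)
{
        struct lock_file lock_file = LOCK_INIT;

        hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
        if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
                die(_("unable to write index file"));
}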

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
---
 builtin/am.c       | 24 +++++++++++-------------
 builtin/checkout.c | 14 ++++++--------
 builtin/clone.c    |  7 +++----
 builtin/diff.c     |  7 +++----
 config.c           | 17 ++++++++---------
 merge-recursive.c  |  6 +++---
 merge.c            |  8 ++++----
 wt-status.c        |  8 ++++----
 8 files changed, 42 insertions(+), 49 deletions(-)

diff --git a/builtin/am.c b/builtin/am.c
index d7513f537..4e16fd428 100644
--- a/builtin/am.c
+++ b/builtin/am.c
@@ -1134,11 +1134,11 @@ static const char *msgnum(const struct am_state *state)
  */
 static void refresh_and_write_cache(void)
 {
-	struct lock_file *lock_file = xcalloc(1, sizeof(struct lock_file));
+	struct lock_file lock_file = LOCK_INIT;
 
-	hold_locked_index(lock_file, LOCK_DIE_ON_ERROR);
+	hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
 	refresh_cache(REFRESH_QUIET);
-	if (write_locked_index(&the_index, lock_file, COMMIT_LOCK))
+	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
 		die(_("unable to write index file"));
 }
 
@@ -1946,15 +1946,14 @@ static void am_resolve(struct am_state *state)
  */
 static int fast_forward_to(struct tree *head, struct tree *remote, int reset)
 {
-	struct lock_file *lock_file;
+	struct lock_file lock_file = LOCK_INIT;
 	struct unpack_trees_options opts;
 	struct tree_desc t[2];
 
 	if (parse_tree(head) || parse_tree(remote))
 		return -1;
 
-	lock_file = xcalloc(1, sizeof(struct lock_file));
-	hold_locked_index(lock_file, LOCK_DIE_ON_ERROR);
+	hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
 
 	refresh_cache(REFRESH_QUIET);
 
@@ -1970,11 +1969,11 @@ static int fast_forward_to(struct tree *head, struct tree *remote, int reset)
 	init_tree_desc(&t[1], remote->buffer, remote->size);
 
 	if (unpack_trees(2, t, &opts)) {
-		rollback_lock_file(lock_file);
+		rollback_lock_file(&lock_file);
 		return -1;
 	}
 
-	if (write_locked_index(&the_index, lock_file, COMMIT_LOCK))
+	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
 		die(_("unable to write new index file"));
 
 	return 0;
@@ -1986,15 +1985,14 @@ static int fast_forward_to(struct tree *head, struct tree *remote, int reset)
  */
 static int merge_tree(struct tree *tree)
 {
-	struct lock_file *lock_file;
+	struct lock_file lock_file = LOCK_INIT;
 	struct unpack_trees_options opts;
 	struct tree_desc t[1];
 
 	if (parse_tree(tree))
 		return -1;
 
-	lock_file = xcalloc(1, sizeof(struct lock_file));
-	hold_locked_index(lock_file, LOCK_DIE_ON_ERROR);
+	hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
 
 	memset(&opts, 0, sizeof(opts));
 	opts.head_idx = 1;
@@ -2005,11 +2003,11 @@ static int merge_tree(struct tree *tree)
 	init_tree_desc(&t[0], tree->buffer, tree->size);
 
 	if (unpack_trees(1, t, &opts)) {
-		rollback_lock_file(lock_file);
+		rollback_lock_file(&lock_file);
 		return -1;
 	}
 
-	if (write_locked_index(&the_index, lock_file, COMMIT_LOCK))
+	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
 		die(_("unable to write new index file"));
 
 	return 0;
diff --git a/builtin/checkout.c b/builtin/checkout.c
index 3345a0d16..34c3c5b61 100644
--- a/builtin/checkout.c
+++ b/builtin/checkout.c
@@ -247,7 +247,7 @@ static int checkout_paths(const struct checkout_opts *opts,
 	struct object_id rev;
 	struct commit *head;
 	int errs = 0;
-	struct lock_file *lock_file;
+	struct lock_file lock_file = LOCK_INIT;
 
 	if (opts->track != BRANCH_TRACK_UNSPECIFIED)
 		die(_("'%s' cannot be used with updating paths"), "--track");
@@ -275,9 +275,7 @@ static int checkout_paths(const struct checkout_opts *opts,
 		return run_add_interactive(revision, "--patch=checkout",
 					   &opts->pathspec);
 
-	lock_file = xcalloc(1, sizeof(struct lock_file));
-
-	hold_locked_index(lock_file, LOCK_DIE_ON_ERROR);
+	hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
 	if (read_cache_preload(&opts->pathspec) < 0)
 		return error(_("index file corrupt"));
 
@@ -376,7 +374,7 @@ static int checkout_paths(const struct checkout_opts *opts,
 	}
 	errs |= finish_delayed_checkout(&state);
 
-	if (write_locked_index(&the_index, lock_file, COMMIT_LOCK))
+	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
 		die(_("unable to write new index file"));
 
 	read_ref_full("HEAD", 0, rev.hash, NULL);
@@ -472,9 +470,9 @@ static int merge_working_tree(const struct checkout_opts *opts,
 			      int *writeout_error)
 {
 	int ret;
-	struct lock_file *lock_file = xcalloc(1, sizeof(struct lock_file));
+	struct lock_file lock_file = LOCK_INIT;
 
-	hold_locked_index(lock_file, LOCK_DIE_ON_ERROR);
+	hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
 	if (read_cache_preload(NULL) < 0)
 		return error(_("index file corrupt"));
 
@@ -591,7 +589,7 @@ static int merge_working_tree(const struct checkout_opts *opts,
 	if (!cache_tree_fully_valid(active_cache_tree))
 		cache_tree_update(&the_index, WRITE_TREE_SILENT | WRITE_TREE_REPAIR);
 
-	if (write_locked_index(&the_index, lock_file, COMMIT_LOCK))
+	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
 		die(_("unable to write new index file"));
 
 	if (!opts->force && !opts->quiet)
diff --git a/builtin/clone.c b/builtin/clone.c
index dbddd98f8..96a3aaaa1 100644
--- a/builtin/clone.c
+++ b/builtin/clone.c
@@ -706,7 +706,7 @@ static int checkout(int submodule_progress)
 {
 	struct object_id oid;
 	char *head;
-	struct lock_file *lock_file;
+	struct lock_file lock_file = LOCK_INIT;
 	struct unpack_trees_options opts;
 	struct tree *tree;
 	struct tree_desc t;
@@ -733,8 +733,7 @@ static int checkout(int submodule_progress)
 	/* We need to be in the new work tree for the checkout */
 	setup_work_tree();
 
-	lock_file = xcalloc(1, sizeof(struct lock_file));
-	hold_locked_index(lock_file, LOCK_DIE_ON_ERROR);
+	hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
 
 	memset(&opts, 0, sizeof opts);
 	opts.update = 1;
@@ -750,7 +749,7 @@ static int checkout(int submodule_progress)
 	if (unpack_trees(1, &t, &opts) < 0)
 		die(_("unable to checkout working tree"));
 
-	if (write_locked_index(&the_index, lock_file, COMMIT_LOCK))
+	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
 		die(_("unable to write new index file"));
 
 	err |= run_hook_le(NULL, "post-checkout", sha1_to_hex(null_sha1),
diff --git a/builtin/diff.c b/builtin/diff.c
index 7e3ebcea3..91995965d 100644
--- a/builtin/diff.c
+++ b/builtin/diff.c
@@ -203,17 +203,16 @@ static int builtin_diff_combined(struct rev_info *revs,
 
 static void refresh_index_quietly(void)
 {
-	struct lock_file *lock_file;
+	struct lock_file lock_file = LOCK_INIT;
 	int fd;
 
-	lock_file = xcalloc(1, sizeof(struct lock_file));
-	fd = hold_locked_index(lock_file, 0);
+	fd = hold_locked_index(&lock_file, 0);
 	if (fd < 0)
 		return;
 	discard_cache();
 	read_cache();
 	refresh_cache(REFRESH_QUIET|REFRESH_UNMERGED);
-	update_index_if_able(&the_index, lock_file);
+	update_index_if_able(&the_index, &lock_file);
 }
 
 static int builtin_diff_files(struct rev_info *revs, int argc, const char **argv)
diff --git a/config.c b/config.c
index 345d78c2b..0e61e863d 100644
--- a/config.c
+++ b/config.c
@@ -2748,7 +2748,7 @@ int git_config_rename_section_in_file(const char *config_filename,
 {
 	int ret = 0, remove = 0;
 	char *filename_buf = NULL;
-	struct lock_file *lock;
+	struct lock_file lock = LOCK_INIT;
 	int out_fd;
 	char buf[1024];
 	FILE *config_file = NULL;
@@ -2762,8 +2762,7 @@ int git_config_rename_section_in_file(const char *config_filename,
 	if (!config_filename)
 		config_filename = filename_buf = git_pathdup("config");
 
-	lock = xcalloc(1, sizeof(struct lock_file));
-	out_fd = hold_lock_file_for_update(lock, config_filename, 0);
+	out_fd = hold_lock_file_for_update(&lock, config_filename, 0);
 	if (out_fd < 0) {
 		ret = error("could not lock config file %s", config_filename);
 		goto out;
@@ -2782,9 +2781,9 @@ int git_config_rename_section_in_file(const char *config_filename,
 		goto out;
 	}
 
-	if (chmod(get_lock_file_path(lock), st.st_mode & 07777) < 0) {
+	if (chmod(get_lock_file_path(&lock), st.st_mode & 07777) < 0) {
 		ret = error_errno("chmod on %s failed",
-				  get_lock_file_path(lock));
+				  get_lock_file_path(&lock));
 		goto out;
 	}
 
@@ -2805,7 +2804,7 @@ int git_config_rename_section_in_file(const char *config_filename,
 				}
 				store.baselen = strlen(new_name);
 				if (write_section(out_fd, new_name) < 0) {
-					ret = write_error(get_lock_file_path(lock));
+					ret = write_error(get_lock_file_path(&lock));
 					goto out;
 				}
 				/*
@@ -2831,20 +2830,20 @@ int git_config_rename_section_in_file(const char *config_filename,
 			continue;
 		length = strlen(output);
 		if (write_in_full(out_fd, output, length) < 0) {
-			ret = write_error(get_lock_file_path(lock));
+			ret = write_error(get_lock_file_path(&lock));
 			goto out;
 		}
 	}
 	fclose(config_file);
 	config_file = NULL;
 commit_and_out:
-	if (commit_lock_file(lock) < 0)
+	if (commit_lock_file(&lock) < 0)
 		ret = error_errno("could not write config file %s",
 				  config_filename);
 out:
 	if (config_file)
 		fclose(config_file);
-	rollback_lock_file(lock);
+	rollback_lock_file(&lock);
 out_no_rollback:
 	free(filename_buf);
 	return ret;
diff --git a/merge-recursive.c b/merge-recursive.c
index 1d3f8f0d2..24c5c26a6 100644
--- a/merge-recursive.c
+++ b/merge-recursive.c
@@ -2162,7 +2162,7 @@ int merge_recursive_generic(struct merge_options *o,
 			    struct commit **result)
 {
 	int clean;
-	struct lock_file *lock = xcalloc(1, sizeof(struct lock_file));
+	struct lock_file lock = LOCK_INIT;
 	struct commit *head_commit = get_ref(head, o->branch1);
 	struct commit *next_commit = get_ref(merge, o->branch2);
 	struct commit_list *ca = NULL;
@@ -2178,14 +2178,14 @@ int merge_recursive_generic(struct merge_options *o,
 		}
 	}
 
-	hold_locked_index(lock, LOCK_DIE_ON_ERROR);
+	hold_locked_index(&lock, LOCK_DIE_ON_ERROR);
 	clean = merge_recursive(o, head_commit, next_commit, ca,
 			result);
 	if (clean < 0)
 		return clean;
 
 	if (active_cache_changed &&
-	    write_locked_index(&the_index, lock, COMMIT_LOCK))
+	    write_locked_index(&the_index, &lock, COMMIT_LOCK))
 		return err(o, _("Unable to write index."));
 
 	return clean ? 0 : 1;
diff --git a/merge.c b/merge.c
index 1d441ad94..a18a452b5 100644
--- a/merge.c
+++ b/merge.c
@@ -53,11 +53,11 @@ int checkout_fast_forward(const struct object_id *head,
 	struct tree_desc t[MAX_UNPACK_TREES];
 	int i, nr_trees = 0;
 	struct dir_struct dir;
-	struct lock_file *lock_file = xcalloc(1, sizeof(struct lock_file));
+	struct lock_file lock_file = LOCK_INIT;
 
 	refresh_cache(REFRESH_QUIET);
 
-	if (hold_locked_index(lock_file, LOCK_REPORT_ON_ERROR) < 0)
+	if (hold_locked_index(&lock_file, LOCK_REPORT_ON_ERROR) < 0)
 		return -1;
 
 	memset(&trees, 0, sizeof(trees));
@@ -91,8 +91,8 @@ int checkout_fast_forward(const struct object_id *head,
 	}
 	if (unpack_trees(nr_trees, t, &opts))
 		return -1;
-	if (write_locked_index(&the_index, lock_file, COMMIT_LOCK)) {
-		rollback_lock_file(lock_file);
+	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK)) {
+		rollback_lock_file(&lock_file);
 		return error(_("unable to write new index file"));
 	}
 	return 0;
diff --git a/wt-status.c b/wt-status.c
index 6f730ee8f..7d88e7ca8 100644
--- a/wt-status.c
+++ b/wt-status.c
@@ -2299,14 +2299,14 @@ int has_uncommitted_changes(int ignore_submodules)
  */
 int require_clean_work_tree(const char *action, const char *hint, int ignore_submodules, int gently)
 {
-	struct lock_file *lock_file = xcalloc(1, sizeof(*lock_file));
+	struct lock_file lock_file = LOCK_INIT;
 	int err = 0, fd;
 
-	fd = hold_locked_index(lock_file, 0);
+	fd = hold_locked_index(&lock_file, 0);
 	refresh_cache(REFRESH_QUIET);
 	if (0 <= fd)
-		update_index_if_able(&the_index, lock_file);
-	rollback_lock_file(lock_file);
+		update_index_if_able(&the_index, &lock_file);
+	rollback_lock_file(&lock_file);
 
 	if (has_unstaged_changes(ignore_submodules)) {
 		/* TRANSLATORS: the action is e.g. "pull with rebase" */
-- 
2.15.0.rc0


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v3 03/12] lockfile: fix documentation on `close_lock_file_gently()`
  2017-10-06 20:12                     ` [PATCH v3 00/12] Re: various lockfile-leaks and -fixes Martin Ågren
  2017-10-06 20:12                       ` [PATCH v3 01/12] sha1_file: do not leak `lock_file` Martin Ågren
  2017-10-06 20:12                       ` [PATCH v3 02/12] treewide: prefer lockfiles on the stack Martin Ågren
@ 2017-10-06 20:12                       ` Martin Ågren
  2017-10-06 20:12                       ` [PATCH v3 04/12] tempfile: fix documentation on `delete_tempfile()` Martin Ågren
                                         ` (8 subsequent siblings)
  11 siblings, 0 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-06 20:12 UTC (permalink / raw)
  To: git
  Cc: Michael Haggerty, Jeff King, Paul Tan, Christian Couder,
	Nguyễn Thái Ngọc Duy, Junio C Hamano

Commit 83a3069a3 (lockfile: do not rollback lock on failed close,
2017-09-05) forgot to update the documentation next to the function definition
to reflect that the lock is not rolled back in case closing fails.
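
To spell out what the corrected comment means for a caller, here is a minimal
sketch (the function name, `path` and the error messages are illustrative;
the lockfile calls are the real API):

static int update_foo_file(const char *path)
{
        struct lock_file lock = LOCK_INIT;

        if (hold_lock_file_for_update(&lock, path, 0) < 0)
                return error_errno("could not lock '%s'", path);
        /* ... write the new contents into the lockfile here ... */
        if (close_lock_file_gently(&lock) < 0) {
                /* the lock is NOT rolled back for us; decide explicitly */
                rollback_lock_file(&lock);
                return error_errno("could not close '%s'", path);
        }
        if (commit_lock_file(&lock) < 0)
                return error_errno("could not write '%s'", path);
        return 0;
}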

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
---
 lockfile.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/lockfile.h b/lockfile.h
index 7c1c484d7..f401c979f 100644
--- a/lockfile.h
+++ b/lockfile.h
@@ -240,8 +240,8 @@ extern char *get_locked_file_path(struct lock_file *lk);
  * If the lockfile is still open, close it (and the file pointer if it
  * has been opened using `fdopen_lock_file()`) without renaming the
  * lockfile over the file being locked. Return 0 upon success. On
- * failure to `close(2)`, return a negative value and roll back the
- * lock file. Usually `commit_lock_file()`, `commit_lock_file_to()`,
+ * failure to `close(2)`, return a negative value (the lockfile is not
+ * rolled back). Usually `commit_lock_file()`, `commit_lock_file_to()`,
  * or `rollback_lock_file()` should eventually be called.
  */
 static inline int close_lock_file_gently(struct lock_file *lk)
-- 
2.15.0.rc0


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v3 04/12] tempfile: fix documentation on `delete_tempfile()`
  2017-10-06 20:12                     ` [PATCH v3 00/12] Re: various lockfile-leaks and -fixes Martin Ågren
                                         ` (2 preceding siblings ...)
  2017-10-06 20:12                       ` [PATCH v3 03/12] lockfile: fix documentation on `close_lock_file_gently()` Martin Ågren
@ 2017-10-06 20:12                       ` Martin Ågren
  2017-10-06 20:12                       ` [PATCH v3 05/12] checkout-index: simplify locking logic Martin Ågren
                                         ` (7 subsequent siblings)
  11 siblings, 0 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-06 20:12 UTC (permalink / raw)
  To: git
  Cc: Michael Haggerty, Jeff King, Paul Tan, Christian Couder,
	Nguyễn Thái Ngọc Duy, Junio C Hamano

The function has always been documented as returning 0 or -1. It is in
fact `void`. Correct that. As part of the rearrangements we lose the
mention that `delete_tempfile()` might set `errno`. Because there is
no return value, the user can't really know whether it did anyway.
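
A small sketch of the caller-side consequence (the path is illustrative;
`create_tempfile()` follows the documentation above and `delete_tempfile()`
is used the same way later in this series):

        struct tempfile *tempfile = create_tempfile("some-file.tmp");

        if (!tempfile)
                die_errno(_("could not create temporary file"));
        /* ... write to the temporary file ... */
        delete_tempfile(&tempfile);
        /* void: there is no return value (and no errno contract) to check */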

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
---
 tempfile.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/tempfile.h b/tempfile.h
index b8f4b5e14..450908b2e 100644
--- a/tempfile.h
+++ b/tempfile.h
@@ -68,10 +68,10 @@
  * `create_tempfile()` returns an allocated tempfile on success or NULL
  * on failure. On errors, `errno` describes the reason for failure.
  *
- * `delete_tempfile()`, `rename_tempfile()`, and `close_tempfile_gently()`
- * return 0 on success. On failure they set `errno` appropriately and return
- * -1. `delete` and `rename` (but not `close`) do their best to delete the
- * temporary file before returning.
+ * `rename_tempfile()` and `close_tempfile_gently()` return 0 on success.
+ * On failure they set `errno` appropriately and return -1.
+ * `delete_tempfile()` and `rename` (but not `close`) do their best to
+ * delete the temporary file before returning.
  */
 
 struct tempfile {
-- 
2.15.0.rc0


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v3 05/12] checkout-index: simplify locking logic
  2017-10-06 20:12                     ` [PATCH v3 00/12] Re: various lockfile-leaks and -fixes Martin Ågren
                                         ` (3 preceding siblings ...)
  2017-10-06 20:12                       ` [PATCH v3 04/12] tempfile: fix documentation on `delete_tempfile()` Martin Ågren
@ 2017-10-06 20:12                       ` Martin Ågren
  2017-10-06 20:12                       ` [PATCH v3 06/12] cache-tree: " Martin Ågren
                                         ` (6 subsequent siblings)
  11 siblings, 0 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-06 20:12 UTC (permalink / raw)
  To: git
  Cc: Michael Haggerty, Jeff King, Paul Tan, Christian Couder,
	Nguyễn Thái Ngọc Duy, Junio C Hamano

`newfd` starts out negative. If we then take the lock, `newfd` will
become non-negative. We later check for exactly that property before
calling `write_locked_index()`. That is, we are simply using `newfd` as
a boolean to keep track of whether we took the lock or not. (We always
use `newfd` and `lock_file` together, so they really are mirroring each
other.)

Drop `newfd` and check with `is_lock_file_locked()` instead. While at
it, move the `static struct lock_file` into `cmd_checkout_index()` and
make it non-static. It is only used in this function, and after
076aa2cbd (tempfile: auto-allocate tempfiles on heap, 2017-09-05), we
can have lockfiles on the stack.
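
Condensed from the hunks below, the write-out check changes like this:

        /* before: the fd doubles as "did we take the lock?" */
        if (0 <= newfd &&
            write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
                die("Unable to write new index file");

        /* after: ask the lockfile API directly */
        if (is_lock_file_locked(&lock_file) &&
            write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
                die("Unable to write new index file");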

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
---
 builtin/checkout-index.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/builtin/checkout-index.c b/builtin/checkout-index.c
index 39c8be05d..b0e78b819 100644
--- a/builtin/checkout-index.c
+++ b/builtin/checkout-index.c
@@ -129,8 +129,6 @@ static const char * const builtin_checkout_index_usage[] = {
 	NULL
 };
 
-static struct lock_file lock_file;
-
 static int option_parse_stage(const struct option *opt,
 			      const char *arg, int unset)
 {
@@ -150,7 +148,7 @@ static int option_parse_stage(const struct option *opt,
 int cmd_checkout_index(int argc, const char **argv, const char *prefix)
 {
 	int i;
-	int newfd = -1;
+	struct lock_file lock_file = LOCK_INIT;
 	int all = 0;
 	int read_from_stdin = 0;
 	int prefix_length;
@@ -206,7 +204,7 @@ int cmd_checkout_index(int argc, const char **argv, const char *prefix)
 	if (index_opt && !state.base_dir_len && !to_tempfile) {
 		state.refresh_cache = 1;
 		state.istate = &the_index;
-		newfd = hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
+		hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
 	}
 
 	/* Check out named files first */
@@ -251,7 +249,7 @@ int cmd_checkout_index(int argc, const char **argv, const char *prefix)
 	if (all)
 		checkout_all(prefix, prefix_length);
 
-	if (0 <= newfd &&
+	if (is_lock_file_locked(&lock_file) &&
 	    write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
 		die("Unable to write new index file");
 	return 0;
-- 
2.15.0.rc0


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v3 06/12] cache-tree: simplify locking logic
  2017-10-06 20:12                     ` [PATCH v3 00/12] Re: various lockfile-leaks and -fixes Martin Ågren
                                         ` (4 preceding siblings ...)
  2017-10-06 20:12                       ` [PATCH v3 05/12] checkout-index: simplify locking logic Martin Ågren
@ 2017-10-06 20:12                       ` Martin Ågren
  2017-10-06 20:12                       ` [PATCH v3 07/12] apply: move lockfile into `apply_state` Martin Ågren
                                         ` (5 subsequent siblings)
  11 siblings, 0 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-06 20:12 UTC (permalink / raw)
  To: git
  Cc: Michael Haggerty, Jeff King, Paul Tan, Christian Couder,
	Nguyễn Thái Ngọc Duy, Junio C Hamano

After we have taken the lock using `LOCK_DIE_ON_ERROR`, we know that
`newfd` is non-negative. So when we check for exactly that property
before calling `write_locked_index()`, the outcome is guaranteed.

If we write and commit successfully, we set `newfd = -1`, so that we can
later avoid calling `rollback_lock_file` on an already-committed lock.
But we might just as well unconditionally call `rollback_lock_file()` --
it will be a no-op if we have already committed.

All in all, we use `newfd` as a bool and the only benefit we get from it
is that we can avoid calling a no-op. Remove `newfd` so that we have one
variable less to reason about.
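
In other words, the tail of `write_index_as_tree()` can lean entirely on the
lockfile API's own bookkeeping. A condensed sketch of the resulting shape
(the body is abridged; the calls are the ones from the hunk below):

        struct lock_file lock_file = LOCK_INIT;
        int ret = 0;

        hold_lock_file_for_update(&lock_file, index_path, LOCK_DIE_ON_ERROR);
        /* ... read the index, update the cache-tree ... */

        /* failure to write is tolerated -- we only lose a cache */
        write_locked_index(index_state, &lock_file, COMMIT_LOCK);

        /* no-op if the write above committed the lock, cleanup otherwise */
        rollback_lock_file(&lock_file);
        return ret;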

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
---
 cache-tree.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/cache-tree.c b/cache-tree.c
index 71d092ed5..f646f5673 100644
--- a/cache-tree.c
+++ b/cache-tree.c
@@ -602,11 +602,11 @@ static struct cache_tree *cache_tree_find(struct cache_tree *it, const char *pat
 
 int write_index_as_tree(unsigned char *sha1, struct index_state *index_state, const char *index_path, int flags, const char *prefix)
 {
-	int entries, was_valid, newfd;
+	int entries, was_valid;
 	struct lock_file lock_file = LOCK_INIT;
 	int ret = 0;
 
-	newfd = hold_lock_file_for_update(&lock_file, index_path, LOCK_DIE_ON_ERROR);
+	hold_lock_file_for_update(&lock_file, index_path, LOCK_DIE_ON_ERROR);
 
 	entries = read_index_from(index_state, index_path);
 	if (entries < 0) {
@@ -625,10 +625,7 @@ int write_index_as_tree(unsigned char *sha1, struct index_state *index_state, co
 			ret = WRITE_TREE_UNMERGED_INDEX;
 			goto out;
 		}
-		if (0 <= newfd) {
-			if (!write_locked_index(index_state, &lock_file, COMMIT_LOCK))
-				newfd = -1;
-		}
+		write_locked_index(index_state, &lock_file, COMMIT_LOCK);
 		/* Not being able to write is fine -- we are only interested
 		 * in updating the cache-tree part, and if the next caller
 		 * ends up using the old index with unupdated cache-tree part
@@ -650,8 +647,7 @@ int write_index_as_tree(unsigned char *sha1, struct index_state *index_state, co
 		hashcpy(sha1, index_state->cache_tree->oid.hash);
 
 out:
-	if (0 <= newfd)
-		rollback_lock_file(&lock_file);
+	rollback_lock_file(&lock_file);
 	return ret;
 }
 
-- 
2.15.0.rc0


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v3 07/12] apply: move lockfile into `apply_state`
  2017-10-06 20:12                     ` [PATCH v3 00/12] Re: various lockfile-leaks and -fixes Martin Ågren
                                         ` (5 preceding siblings ...)
  2017-10-06 20:12                       ` [PATCH v3 06/12] cache-tree: " Martin Ågren
@ 2017-10-06 20:12                       ` Martin Ågren
  2017-10-06 20:12                       ` [PATCH v3 08/12] apply: remove `newfd` from `struct apply_state` Martin Ågren
                                         ` (4 subsequent siblings)
  11 siblings, 0 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-06 20:12 UTC (permalink / raw)
  To: git
  Cc: Michael Haggerty, Jeff King, Paul Tan, Christian Couder,
	Nguyễn Thái Ngọc Duy, Junio C Hamano

We have two users of `struct apply_state` and the related functionality
in apply.c. Each user sets up its `apply_state` by handing over a
pointer to its static `lock_file`. (Before 076aa2cbd (tempfile:
auto-allocate tempfiles on heap, 2017-09-05), we could never free
lockfiles, so making them static was a reasonable approach.)

Other than that, they never directly access their `lock_file`s, which
are instead handled by the functionality in apply.c.

To make life easier for the caller and to make it less tempting for a
future caller to mess with the lock, make apply.c fully responsible for
setting up the `lock_file`. As mentioned above, it is now safe to free a
`lock_file`, so we can make the `struct apply_state` contain an actual
`struct lock_file` instead of a pointer to one.

The user in builtin/apply.c is rather simple. For builtin/am.c, we might
worry that the lock state is actually meant to be inherited across
calls. But the lock is only taken as `apply_all_patches()` executes, and
code inspection shows that it will always be released.

Alternatively, we can observe that the lock itself is never queried
directly. When we decide whether we should lock, we check a related
variable `newfd`. That variable is not inherited, so from the point of
view of apply.c, the state machine really is reset with each call to
`init_apply_state()`. (It would be a bug if `newfd` and the lock status
were not in sync. The duplication of information in `newfd` and the lock
will be addressed in the next patch.)
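
One detail worth spelling out (an observation about the lockfile API at this
point in history, not something stated in the patch): after 076aa2cbd a
`struct lock_file` holds nothing but a pointer to its heap-allocated
tempfile, so the `memset()` that `init_apply_state()` already performs leaves
the embedded lock in the same state as an explicit `LOCK_INIT`:

        struct apply_state state;

        memset(&state, 0, sizeof(state));
        /* state.lock_file is now equivalent to `struct lock_file l = LOCK_INIT` */
        if (is_lock_file_locked(&state.lock_file))
                die("BUG: a freshly zeroed state cannot hold the lock");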

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
---
 apply.c         | 14 +++++---------
 apply.h         |  5 ++---
 builtin/am.c    |  3 +--
 builtin/apply.c |  4 +---
 4 files changed, 9 insertions(+), 17 deletions(-)

diff --git a/apply.c b/apply.c
index c022af53a..5a6ca10a7 100644
--- a/apply.c
+++ b/apply.c
@@ -75,12 +75,10 @@ static int parse_ignorewhitespace_option(struct apply_state *state,
 }
 
 int init_apply_state(struct apply_state *state,
-		     const char *prefix,
-		     struct lock_file *lock_file)
+		     const char *prefix)
 {
 	memset(state, 0, sizeof(*state));
 	state->prefix = prefix;
-	state->lock_file = lock_file;
 	state->newfd = -1;
 	state->apply = 1;
 	state->line_termination = '\n';
@@ -146,8 +144,6 @@ int check_apply_state(struct apply_state *state, int force_apply)
 	}
 	if (state->check_index)
 		state->unsafe_paths = 0;
-	if (!state->lock_file)
-		return error("BUG: state->lock_file should not be NULL");
 
 	if (state->apply_verbosity <= verbosity_silent) {
 		state->saved_error_routine = get_error_routine();
@@ -4711,11 +4707,11 @@ static int apply_patch(struct apply_state *state,
 	state->update_index = state->check_index && state->apply;
 	if (state->update_index && state->newfd < 0) {
 		if (state->index_file)
-			state->newfd = hold_lock_file_for_update(state->lock_file,
+			state->newfd = hold_lock_file_for_update(&state->lock_file,
 								 state->index_file,
 								 LOCK_DIE_ON_ERROR);
 		else
-			state->newfd = hold_locked_index(state->lock_file, LOCK_DIE_ON_ERROR);
+			state->newfd = hold_locked_index(&state->lock_file, LOCK_DIE_ON_ERROR);
 	}
 
 	if (state->check_index && read_apply_cache(state) < 0) {
@@ -4911,7 +4907,7 @@ int apply_all_patches(struct apply_state *state,
 	}
 
 	if (state->update_index) {
-		res = write_locked_index(&the_index, state->lock_file, COMMIT_LOCK);
+		res = write_locked_index(&the_index, &state->lock_file, COMMIT_LOCK);
 		if (res) {
 			error(_("Unable to write new index file"));
 			res = -128;
@@ -4924,7 +4920,7 @@ int apply_all_patches(struct apply_state *state,
 
 end:
 	if (state->newfd >= 0) {
-		rollback_lock_file(state->lock_file);
+		rollback_lock_file(&state->lock_file);
 		state->newfd = -1;
 	}
 
diff --git a/apply.h b/apply.h
index d9b395770..cf00cda17 100644
--- a/apply.h
+++ b/apply.h
@@ -37,7 +37,7 @@ struct apply_state {
 	const char *prefix;
 
 	/* These are lock_file related */
-	struct lock_file *lock_file;
+	struct lock_file lock_file;
 	int newfd;
 
 	/* These control what gets looked at and modified */
@@ -116,8 +116,7 @@ extern int apply_parse_options(int argc, const char **argv,
 			       int *force_apply, int *options,
 			       const char * const *apply_usage);
 extern int init_apply_state(struct apply_state *state,
-			    const char *prefix,
-			    struct lock_file *lock_file);
+			    const char *prefix);
 extern void clear_apply_state(struct apply_state *state);
 extern int check_apply_state(struct apply_state *state, int force_apply);
 
diff --git a/builtin/am.c b/builtin/am.c
index 4e16fd428..40968428d 100644
--- a/builtin/am.c
+++ b/builtin/am.c
@@ -1488,11 +1488,10 @@ static int run_apply(const struct am_state *state, const char *index_file)
 	struct argv_array apply_opts = ARGV_ARRAY_INIT;
 	struct apply_state apply_state;
 	int res, opts_left;
-	static struct lock_file lock_file;
 	int force_apply = 0;
 	int options = 0;
 
-	if (init_apply_state(&apply_state, NULL, &lock_file))
+	if (init_apply_state(&apply_state, NULL))
 		die("BUG: init_apply_state() failed");
 
 	argv_array_push(&apply_opts, "apply");
diff --git a/builtin/apply.c b/builtin/apply.c
index 81b9a61c3..48d398933 100644
--- a/builtin/apply.c
+++ b/builtin/apply.c
@@ -9,8 +9,6 @@ static const char * const apply_usage[] = {
 	NULL
 };
 
-static struct lock_file lock_file;
-
 int cmd_apply(int argc, const char **argv, const char *prefix)
 {
 	int force_apply = 0;
@@ -18,7 +16,7 @@ int cmd_apply(int argc, const char **argv, const char *prefix)
 	int ret;
 	struct apply_state state;
 
-	if (init_apply_state(&state, prefix, &lock_file))
+	if (init_apply_state(&state, prefix))
 		exit(128);
 
 	argc = apply_parse_options(argc, argv,
-- 
2.15.0.rc0


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v3 08/12] apply: remove `newfd` from `struct apply_state`
  2017-10-06 20:12                     ` [PATCH v3 00/12] Re: various lockfile-leaks and -fixes Martin Ågren
                                         ` (6 preceding siblings ...)
  2017-10-06 20:12                       ` [PATCH v3 07/12] apply: move lockfile into `apply_state` Martin Ågren
@ 2017-10-06 20:12                       ` Martin Ågren
  2017-10-06 20:12                       ` [PATCH v3 09/12] cache.h: document `write_locked_index()` Martin Ågren
                                         ` (3 subsequent siblings)
  11 siblings, 0 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-06 20:12 UTC (permalink / raw)
  To: git
  Cc: Michael Haggerty, Jeff King, Paul Tan, Christian Couder,
	Nguyễn Thái Ngọc Duy, Junio C Hamano

Similar to a previous patch, we do not need to use `newfd` to signal
that we have a lockfile to clean up. We can just unconditionally call
`rollback_lock_file`. If we do not hold the lock, it will be a no-op.

Where we check `newfd` to decide whether we need to take the lock, we
can instead use `is_lock_file_locked()`.

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
---
 apply.c | 17 ++++++-----------
 apply.h |  3 +--
 2 files changed, 7 insertions(+), 13 deletions(-)

diff --git a/apply.c b/apply.c
index 5a6ca10a7..d676debd5 100644
--- a/apply.c
+++ b/apply.c
@@ -79,7 +79,6 @@ int init_apply_state(struct apply_state *state,
 {
 	memset(state, 0, sizeof(*state));
 	state->prefix = prefix;
-	state->newfd = -1;
 	state->apply = 1;
 	state->line_termination = '\n';
 	state->p_value = 1;
@@ -4705,13 +4704,13 @@ static int apply_patch(struct apply_state *state,
 		state->apply = 0;
 
 	state->update_index = state->check_index && state->apply;
-	if (state->update_index && state->newfd < 0) {
+	if (state->update_index && !is_lock_file_locked(&state->lock_file)) {
 		if (state->index_file)
-			state->newfd = hold_lock_file_for_update(&state->lock_file,
-								 state->index_file,
-								 LOCK_DIE_ON_ERROR);
+			hold_lock_file_for_update(&state->lock_file,
+						  state->index_file,
+						  LOCK_DIE_ON_ERROR);
 		else
-			state->newfd = hold_locked_index(&state->lock_file, LOCK_DIE_ON_ERROR);
+			hold_locked_index(&state->lock_file, LOCK_DIE_ON_ERROR);
 	}
 
 	if (state->check_index && read_apply_cache(state) < 0) {
@@ -4913,16 +4912,12 @@ int apply_all_patches(struct apply_state *state,
 			res = -128;
 			goto end;
 		}
-		state->newfd = -1;
 	}
 
 	res = !!errs;
 
 end:
-	if (state->newfd >= 0) {
-		rollback_lock_file(&state->lock_file);
-		state->newfd = -1;
-	}
+	rollback_lock_file(&state->lock_file);
 
 	if (state->apply_verbosity <= verbosity_silent) {
 		set_error_routine(state->saved_error_routine);
diff --git a/apply.h b/apply.h
index cf00cda17..dc4a01905 100644
--- a/apply.h
+++ b/apply.h
@@ -36,9 +36,8 @@ enum apply_verbosity {
 struct apply_state {
 	const char *prefix;
 
-	/* These are lock_file related */
+	/* Lock file */
 	struct lock_file lock_file;
-	int newfd;
 
 	/* These control what gets looked at and modified */
 	int apply; /* this is not a dry-run */
-- 
2.15.0.rc0


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v3 09/12] cache.h: document `write_locked_index()`
  2017-10-06 20:12                     ` [PATCH v3 00/12] Re: various lockfile-leaks and -fixes Martin Ågren
                                         ` (7 preceding siblings ...)
  2017-10-06 20:12                       ` [PATCH v3 08/12] apply: remove `newfd` from `struct apply_state` Martin Ågren
@ 2017-10-06 20:12                       ` Martin Ågren
  2017-10-06 20:12                       ` [PATCH v3 10/12] read-cache: drop explicit `CLOSE_LOCK`-flag Martin Ågren
                                         ` (2 subsequent siblings)
  11 siblings, 0 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-06 20:12 UTC (permalink / raw)
  To: git
  Cc: Michael Haggerty, Jeff King, Paul Tan, Christian Couder,
	Nguyễn Thái Ngọc Duy, Junio C Hamano

The next patches will tweak the behavior of this function. Document it
in order to establish a basis for those patches.

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
---
 cache.h | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/cache.h b/cache.h
index ea6c236e0..e9d9556e3 100644
--- a/cache.h
+++ b/cache.h
@@ -601,9 +601,25 @@ extern int do_read_index(struct index_state *istate, const char *path,
 extern int read_index_from(struct index_state *, const char *path);
 extern int is_index_unborn(struct index_state *);
 extern int read_index_unmerged(struct index_state *);
+
+/* For use with `write_locked_index()`. */
 #define COMMIT_LOCK		(1 << 0)
 #define CLOSE_LOCK		(1 << 1)
+
+/*
+ * Write the index while holding an already-taken lock. The flags may
+ * contain at most one of `COMMIT_LOCK` and `CLOSE_LOCK`.
+ *
+ * Unless a split index is in use, write the index into the lockfile.
+ *
+ * With a split index, write the shared index to a temporary file,
+ * adjust its permissions and rename it into place, then write the
+ * split index to the lockfile. If the temporary file for the shared
+ * index cannot be created, fall back to the behavior described in
+ * the previous paragraph.
+ */
 extern int write_locked_index(struct index_state *, struct lock_file *lock, unsigned flags);
+
 extern int discard_index(struct index_state *);
 extern void move_index_extensions(struct index_state *dst, struct index_state *src);
 extern int unmerged_index(const struct index_state *);
-- 
2.15.0.rc0


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v3 10/12] read-cache: drop explicit `CLOSE_LOCK`-flag
  2017-10-06 20:12                     ` [PATCH v3 00/12] Re: various lockfile-leaks and -fixes Martin Ågren
                                         ` (8 preceding siblings ...)
  2017-10-06 20:12                       ` [PATCH v3 09/12] cache.h: document `write_locked_index()` Martin Ågren
@ 2017-10-06 20:12                       ` Martin Ågren
  2017-10-06 20:12                       ` [PATCH v3 11/12] read-cache: leave lock in right state in `write_locked_index()` Martin Ågren
  2017-10-06 20:12                       ` [PATCH v3 12/12] read_cache: roll back lock in `update_index_if_able()` Martin Ågren
  11 siblings, 0 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-06 20:12 UTC (permalink / raw)
  To: git
  Cc: Michael Haggerty, Jeff King, Paul Tan, Christian Couder,
	Nguyễn Thái Ngọc Duy, Junio C Hamano

`write_locked_index()` takes two flags: `COMMIT_LOCK` and `CLOSE_LOCK`.
At most one is allowed, but it is also possible to use no flag at all,
i.e., `0`. However, when `write_locked_index()` calls `do_write_index()`, the
temporary file, a.k.a. the lockfile, will be closed. So passing `0` is
effectively the same as `CLOSE_LOCK`, which seems like a bug.

We might feel tempted to restructure the code in order to close the file
later, or conditionally. It also feels a bit unfortunate that we simply
"happen" to close the lock by way of an implementation detail of
lockfiles. But note that we need to close the temporary file before
`stat`-ing it, at least on Windows. See 9f41c7a6b (read-cache: close
index.lock in do_write_index, 2017-04-26).

Drop `CLOSE_LOCK` and make it explicit that `write_locked_index()`
always closes the lock. Whether it is also committed is governed by the
remaining flag, `COMMIT_LOCK`.

This means we neither have nor suggest that we have a mode to write the
index and leave the file open. Whatever extra contents we might
eventually want to write, we should probably write it from within
`write_locked_index()` itself anyway.
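
The two remaining modes, condensed from call sites in this series (two
separate call sites, not a sequence; variable names and messages are the
ones used there):

        /* without COMMIT_LOCK: write a temporary index; the lockfile is
         * closed but kept, so e.g. get_lock_file_path() can hand it out */
        if (write_locked_index(&the_index, &index_lock, 0))
                die(_("unable to create temporary index"));

        /* with COMMIT_LOCK: write and publish (rename into place) */
        if (write_locked_index(&the_index, &index_lock, COMMIT_LOCK))
                die(_("unable to write index file"));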

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
---
v3: Dropped the comment in `do_write_locked_index()` and documented how
`do_write_index()` handles the temporary file instead.

 builtin/commit.c | 10 +++++-----
 cache.h          |  5 ++---
 read-cache.c     | 14 ++++++++------
 3 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/builtin/commit.c b/builtin/commit.c
index 0f8ddb686..32dc2101f 100644
--- a/builtin/commit.c
+++ b/builtin/commit.c
@@ -355,7 +355,7 @@ static const char *prepare_index(int argc, const char **argv, const char *prefix
 
 		refresh_cache_or_die(refresh_flags);
 
-		if (write_locked_index(&the_index, &index_lock, CLOSE_LOCK))
+		if (write_locked_index(&the_index, &index_lock, 0))
 			die(_("unable to create temporary index"));
 
 		old_index_env = getenv(INDEX_ENVIRONMENT);
@@ -374,7 +374,7 @@ static const char *prepare_index(int argc, const char **argv, const char *prefix
 		if (update_main_cache_tree(WRITE_TREE_SILENT) == 0) {
 			if (reopen_lock_file(&index_lock) < 0)
 				die(_("unable to write index file"));
-			if (write_locked_index(&the_index, &index_lock, CLOSE_LOCK))
+			if (write_locked_index(&the_index, &index_lock, 0))
 				die(_("unable to update temporary index"));
 		} else
 			warning(_("Failed to update main cache tree"));
@@ -401,7 +401,7 @@ static const char *prepare_index(int argc, const char **argv, const char *prefix
 		add_files_to_cache(also ? prefix : NULL, &pathspec, 0);
 		refresh_cache_or_die(refresh_flags);
 		update_main_cache_tree(WRITE_TREE_SILENT);
-		if (write_locked_index(&the_index, &index_lock, CLOSE_LOCK))
+		if (write_locked_index(&the_index, &index_lock, 0))
 			die(_("unable to write new_index file"));
 		commit_style = COMMIT_NORMAL;
 		ret = get_lock_file_path(&index_lock);
@@ -474,7 +474,7 @@ static const char *prepare_index(int argc, const char **argv, const char *prefix
 	add_remove_files(&partial);
 	refresh_cache(REFRESH_QUIET);
 	update_main_cache_tree(WRITE_TREE_SILENT);
-	if (write_locked_index(&the_index, &index_lock, CLOSE_LOCK))
+	if (write_locked_index(&the_index, &index_lock, 0))
 		die(_("unable to write new_index file"));
 
 	hold_lock_file_for_update(&false_lock,
@@ -486,7 +486,7 @@ static const char *prepare_index(int argc, const char **argv, const char *prefix
 	add_remove_files(&partial);
 	refresh_cache(REFRESH_QUIET);
 
-	if (write_locked_index(&the_index, &false_lock, CLOSE_LOCK))
+	if (write_locked_index(&the_index, &false_lock, 0))
 		die(_("unable to write temporary index file"));
 
 	discard_cache();
diff --git a/cache.h b/cache.h
index e9d9556e3..21a6856c5 100644
--- a/cache.h
+++ b/cache.h
@@ -604,11 +604,10 @@ extern int read_index_unmerged(struct index_state *);
 
 /* For use with `write_locked_index()`. */
 #define COMMIT_LOCK		(1 << 0)
-#define CLOSE_LOCK		(1 << 1)
 
 /*
- * Write the index while holding an already-taken lock. The flags may
- * contain at most one of `COMMIT_LOCK` and `CLOSE_LOCK`.
+ * Write the index while holding an already-taken lock. Close the lock,
+ * and if `COMMIT_LOCK` is given, commit it.
  *
  * Unless a split index is in use, write the index into the lockfile.
  *
diff --git a/read-cache.c b/read-cache.c
index 65f4fe837..c7aa3632a 100644
--- a/read-cache.c
+++ b/read-cache.c
@@ -2187,6 +2187,13 @@ void update_index_if_able(struct index_state *istate, struct lock_file *lockfile
 		rollback_lock_file(lockfile);
 }
 
+/*
+ * On success, `tempfile` is closed. If it is the temporary file
+ * of a `struct lock_file`, we will therefore effectively perform
+ * a 'close_lock_file_gently()`. Since that is an implementation
+ * detail of lockfiles, callers of `do_write_index()` should not
+ * rely on it.
+ */
 static int do_write_index(struct index_state *istate, struct tempfile *tempfile,
 			  int strip_extensions)
 {
@@ -2343,14 +2350,9 @@ static int do_write_locked_index(struct index_state *istate, struct lock_file *l
 	int ret = do_write_index(istate, lock->tempfile, 0);
 	if (ret)
 		return ret;
-	assert((flags & (COMMIT_LOCK | CLOSE_LOCK)) !=
-	       (COMMIT_LOCK | CLOSE_LOCK));
 	if (flags & COMMIT_LOCK)
 		return commit_locked_index(lock);
-	else if (flags & CLOSE_LOCK)
-		return close_lock_file_gently(lock);
-	else
-		return ret;
+	return close_lock_file_gently(lock);
 }
 
 static int write_split_index(struct index_state *istate,
-- 
2.15.0.rc0


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v3 11/12] read-cache: leave lock in right state in `write_locked_index()`
  2017-10-06 20:12                     ` [PATCH v3 00/12] Re: various lockfile-leaks and -fixes Martin Ågren
                                         ` (9 preceding siblings ...)
  2017-10-06 20:12                       ` [PATCH v3 10/12] read-cache: drop explicit `CLOSE_LOCK`-flag Martin Ågren
@ 2017-10-06 20:12                       ` Martin Ågren
  2017-10-06 20:12                       ` [PATCH v3 12/12] read_cache: roll back lock in `update_index_if_able()` Martin Ågren
  11 siblings, 0 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-06 20:12 UTC (permalink / raw)
  To: git
  Cc: Michael Haggerty, Jeff King, Paul Tan, Christian Couder,
	Nguyễn Thái Ngọc Duy, Junio C Hamano

If the original version of `write_locked_index()` returned with an
error, it didn't roll back the lockfile unless the error occurred at the
very end, during closing/committing. See commit 03b866477 (read-cache:
new API write_locked_index instead of write_index/write_cache,
2014-06-13).

In commit 9f41c7a6b (read-cache: close index.lock in do_write_index,
2017-04-26), we learned to close the lock slightly earlier in the
callstack. That was mostly a side-effect of lockfiles being implemented
using temporary files, but didn't cause any real harm.

Recently, commit 076aa2cbd (tempfile: auto-allocate tempfiles on heap,
2017-09-05) introduced a subtle bug. If the temporary file is deleted
(i.e., the lockfile is rolled back), the tempfile-pointer in the `struct
lock_file` will be left dangling. Thus, an attempt to reuse the
lockfile, or even just to roll it back, will induce undefined behavior
-- most likely a crash.

Besides not crashing, we clearly want to make things consistent. The
guarantees which the lockfile machinery itself provides are A) if we ask
to commit and it fails, roll back, and B) if we ask to close and it
fails, do _not_ roll back. Let's do the same for consistency.

Do not delete the temporary file in `do_write_index()`. One of its
callers, `write_locked_index()`, will thereby avoid rolling back the
lock. The other caller, `write_shared_index()`, will delete its
temporary file anyway. Both of these callers will avoid undefined
behavior (crashing).

Teach `write_locked_index(..., COMMIT_LOCK)` to roll back the lock
before returning. If we have already succeeded and committed, it will be
a no-op. Simplify the existing callers where we now have a superfluous
call to `rollback_lock_file()`. That should keep future readers from
wondering why the callers are inconsistent.
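
The resulting caller-side contract, as seen in the merge.c hunk below:

        if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
                /* no rollback_lock_file() here: with COMMIT_LOCK the lock
                 * has already been committed or rolled back for us */
                return error(_("unable to write new index file"));
        return 0;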

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
---
v3: Maybe the commit message wasn't too long, but it was loong. For
example, some of it just duplicated stuff from the previous patch.

 builtin/difftool.c |  1 -
 cache.h            |  4 ++++
 merge.c            |  4 +---
 read-cache.c       | 14 ++++++++------
 sequencer.c        |  1 -
 5 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/builtin/difftool.c b/builtin/difftool.c
index b2d3ba753..bcc79d188 100644
--- a/builtin/difftool.c
+++ b/builtin/difftool.c
@@ -616,7 +616,6 @@ static int run_dir_diff(const char *extcmd, int symlinks, const char *prefix,
 			if (hold_lock_file_for_update(&lock, buf.buf, 0) < 0 ||
 			    write_locked_index(&wtindex, &lock, COMMIT_LOCK)) {
 				ret = error("could not write %s", buf.buf);
-				rollback_lock_file(&lock);
 				goto finish;
 			}
 			changed_files(&wt_modified, buf.buf, workdir);
diff --git a/cache.h b/cache.h
index 21a6856c5..0e26224b9 100644
--- a/cache.h
+++ b/cache.h
@@ -616,6 +616,10 @@ extern int read_index_unmerged(struct index_state *);
  * split index to the lockfile. If the temporary file for the shared
  * index cannot be created, fall back to the behavior described in
  * the previous paragraph.
+ *
+ * With `COMMIT_LOCK`, the lock is always committed or rolled back.
+ * Without it, the lock is closed, but neither committed nor rolled
+ * back.
  */
 extern int write_locked_index(struct index_state *, struct lock_file *lock, unsigned flags);
 
diff --git a/merge.c b/merge.c
index a18a452b5..e5d796c9f 100644
--- a/merge.c
+++ b/merge.c
@@ -91,9 +91,7 @@ int checkout_fast_forward(const struct object_id *head,
 	}
 	if (unpack_trees(nr_trees, t, &opts))
 		return -1;
-	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK)) {
-		rollback_lock_file(&lock_file);
+	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
 		return error(_("unable to write new index file"));
-	}
 	return 0;
 }
diff --git a/read-cache.c b/read-cache.c
index c7aa3632a..0d8d2dede 100644
--- a/read-cache.c
+++ b/read-cache.c
@@ -2182,9 +2182,8 @@ static int has_racy_timestamp(struct index_state *istate)
 void update_index_if_able(struct index_state *istate, struct lock_file *lockfile)
 {
 	if ((istate->cache_changed || has_racy_timestamp(istate)) &&
-	    verify_index(istate) &&
-	    write_locked_index(istate, lockfile, COMMIT_LOCK))
-		rollback_lock_file(lockfile);
+	    verify_index(istate))
+		write_locked_index(istate, lockfile, COMMIT_LOCK);
 }
 
 /*
@@ -2321,7 +2320,6 @@ static int do_write_index(struct index_state *istate, struct tempfile *tempfile,
 		return -1;
 	if (close_tempfile_gently(tempfile)) {
 		error(_("could not close '%s'"), tempfile->filename.buf);
-		delete_tempfile(&tempfile);
 		return -1;
 	}
 	if (stat(tempfile->filename.buf, &st))
@@ -2501,7 +2499,8 @@ int write_locked_index(struct index_state *istate, struct lock_file *lock,
 	    (istate->cache_changed & ~EXTMASK)) {
 		if (si)
 			hashclr(si->base_sha1);
-		return do_write_locked_index(istate, lock, flags);
+		ret = do_write_locked_index(istate, lock, flags);
+		goto out;
 	}
 
 	if (getenv("GIT_TEST_SPLIT_INDEX")) {
@@ -2517,7 +2516,7 @@ int write_locked_index(struct index_state *istate, struct lock_file *lock,
 	if (new_shared_index) {
 		ret = write_shared_index(istate, lock, flags);
 		if (ret)
-			return ret;
+			goto out;
 	}
 
 	ret = write_split_index(istate, lock, flags);
@@ -2526,6 +2525,9 @@ int write_locked_index(struct index_state *istate, struct lock_file *lock,
 	if (!ret && !new_shared_index)
 		freshen_shared_index(sha1_to_hex(si->base_sha1), 1);
 
+out:
+	if (flags & COMMIT_LOCK)
+		rollback_lock_file(lock);
 	return ret;
 }
 
diff --git a/sequencer.c b/sequencer.c
index 60636ce54..d56c38081 100644
--- a/sequencer.c
+++ b/sequencer.c
@@ -1183,7 +1183,6 @@ static int read_and_refresh_cache(struct replay_opts *opts)
 	refresh_index(&the_index, REFRESH_QUIET|REFRESH_UNMERGED, NULL, NULL, NULL);
 	if (the_index.cache_changed && index_fd >= 0) {
 		if (write_locked_index(&the_index, &index_lock, COMMIT_LOCK)) {
-			rollback_lock_file(&index_lock);
 			return error(_("git %s: failed to refresh the index"),
 				_(action_name(opts)));
 		}
-- 
2.15.0.rc0


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v3 12/12] read_cache: roll back lock in `update_index_if_able()`
  2017-10-06 20:12                     ` [PATCH v3 00/12] Re: various lockfile-leaks and -fixes Martin Ågren
                                         ` (10 preceding siblings ...)
  2017-10-06 20:12                       ` [PATCH v3 11/12] read-cache: leave lock in right state in `write_locked_index()` Martin Ågren
@ 2017-10-06 20:12                       ` Martin Ågren
  11 siblings, 0 replies; 69+ messages in thread
From: Martin Ågren @ 2017-10-06 20:12 UTC (permalink / raw)
  To: git
  Cc: Michael Haggerty, Jeff King, Paul Tan, Christian Couder,
	Nguyễn Thái Ngọc Duy, Junio C Hamano

`update_index_if_able()` used to always commit the lock or roll it back.
Commit 03b866477 (read-cache: new API write_locked_index instead of
write_index/write_cache, 2014-06-13) stopped rolling it back in case a
write was not even attempted. This change in behavior is not motivated
in the commit message and appears to be accidental: the `else`-path was
removed, although that changed the behavior in case the `if` is not entered.

Reintroduce the rollback and document this behavior. While at it, move
the documentation on this function from the function definition to the
function declaration in cache.h.

If `write_locked_index(..., COMMIT_LOCK)` fails, it will roll back the
lock for us (see the previous commit).

Noticed-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
---
 cache.h      | 4 ++++
 read-cache.c | 5 ++---
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/cache.h b/cache.h
index 0e26224b9..8c2493766 100644
--- a/cache.h
+++ b/cache.h
@@ -734,6 +734,10 @@ extern void fill_stat_cache_info(struct cache_entry *ce, struct stat *st);
 extern int refresh_index(struct index_state *, unsigned int flags, const struct pathspec *pathspec, char *seen, const char *header_msg);
 extern struct cache_entry *refresh_cache_entry(struct cache_entry *, unsigned int);
 
+/*
+ * Opportunistically update the index but do not complain if we can't.
+ * The lockfile is always committed or rolled back.
+ */
 extern void update_index_if_able(struct index_state *, struct lock_file *);
 
 extern int hold_locked_index(struct lock_file *, int);
diff --git a/read-cache.c b/read-cache.c
index 0d8d2dede..87188d390 100644
--- a/read-cache.c
+++ b/read-cache.c
@@ -2176,14 +2176,13 @@ static int has_racy_timestamp(struct index_state *istate)
 	return 0;
 }
 
-/*
- * Opportunistically update the index but do not complain if we can't
- */
 void update_index_if_able(struct index_state *istate, struct lock_file *lockfile)
 {
 	if ((istate->cache_changed || has_racy_timestamp(istate)) &&
 	    verify_index(istate))
 		write_locked_index(istate, lockfile, COMMIT_LOCK);
+	else
+		rollback_lock_file(lockfile);
 }
 
 /*
-- 
2.15.0.rc0


^ permalink raw reply related	[flat|nested] 69+ messages in thread

end of thread

Thread overview: 69+ messages
2017-10-01 14:56 [PATCH 00/11] various lockfile-leaks and -fixes Martin Ågren
2017-10-01 14:56 ` [PATCH 01/11] sha1_file: do not leak `lock_file` Martin Ågren
2017-10-02  5:26   ` Jeff King
2017-10-02 10:15     ` Martin Ågren
2017-10-01 14:56 ` [PATCH 02/11] treewide: prefer lockfiles on the stack Martin Ågren
2017-10-02  3:37   ` Junio C Hamano
2017-10-02  4:12     ` Martin Ågren
2017-10-02  5:34   ` Jeff King
2017-10-01 14:56 ` [PATCH 03/11] lockfile: fix documentation on `close_lock_file_gently()` Martin Ågren
2017-10-02  5:35   ` Jeff King
2017-10-01 14:56 ` [PATCH 04/11] tempfile: fix documentation on `delete_tempfile()` Martin Ågren
2017-10-02  5:38   ` Jeff King
2017-10-01 14:56 ` [PATCH 05/11] cache-tree: simplify locking logic Martin Ågren
2017-10-02  3:40   ` Junio C Hamano
2017-10-02  5:41   ` Jeff King
2017-10-01 14:56 ` [PATCH 06/11] apply: move lockfile into `apply_state` Martin Ågren
2017-10-02  5:48   ` Jeff King
2017-10-01 14:56 ` [PATCH 07/11] apply: remove `newfd` from `struct apply_state` Martin Ågren
2017-10-02  5:50   ` Jeff King
2017-10-01 14:56 ` [PATCH 08/11] cache.h: document `write_locked_index()` Martin Ågren
2017-10-01 14:56 ` [PATCH 09/11] read-cache: require flags for `write_locked_index()` Martin Ågren
2017-10-02  3:49   ` Junio C Hamano
2017-10-02  4:14     ` Martin Ågren
2017-10-02 10:16       ` Martin Ågren
2017-10-02  6:00   ` Jeff King
2017-10-01 14:56 ` [PATCH 10/11] read-cache: don't leave dangling pointer in `do_write_index()` Martin Ågren
2017-10-02  6:15   ` Jeff King
2017-10-02  6:20     ` Jeff King
2017-10-01 14:56 ` [PATCH 11/11] read-cache: roll back lock on error with `COMMIT_LOCK` Martin Ågren
2017-10-02  4:01   ` Junio C Hamano
2017-10-02  2:37 ` [PATCH 00/11] various lockfile-leaks and -fixes Junio C Hamano
2017-10-02  6:22 ` Jeff King
2017-10-02  6:30   ` Junio C Hamano
2017-10-02 10:19     ` Martin Ågren
2017-10-03  6:21       ` Junio C Hamano
2017-10-05 20:32         ` [PATCH v2 00/12] " Martin Ågren
2017-10-05 20:32           ` [PATCH v2 01/12] sha1_file: do not leak `lock_file` Martin Ågren
2017-10-06  1:17             ` Junio C Hamano
2017-10-05 20:32           ` [PATCH v2 02/12] treewide: prefer lockfiles on the stack Martin Ågren
2017-10-05 20:32           ` [PATCH v2 03/12] lockfile: fix documentation on `close_lock_file_gently()` Martin Ågren
2017-10-05 20:32           ` [PATCH v2 04/12] tempfile: fix documentation on `delete_tempfile()` Martin Ågren
2017-10-05 20:32           ` [PATCH v2 05/12] checkout-index: simplify locking logic Martin Ågren
2017-10-06  1:21             ` Junio C Hamano
2017-10-05 20:32           ` [PATCH v2 06/12] cache-tree: " Martin Ågren
2017-10-05 20:32           ` [PATCH v2 07/12] apply: move lockfile into `apply_state` Martin Ågren
2017-10-05 20:32           ` [PATCH v2 08/12] apply: remove `newfd` from `struct apply_state` Martin Ågren
2017-10-05 20:32           ` [PATCH v2 09/12] cache.h: document `write_locked_index()` Martin Ågren
2017-10-05 20:32           ` [PATCH v2 10/12] read-cache: drop explicit `CLOSE_LOCK`-flag Martin Ågren
2017-10-06  1:39             ` Junio C Hamano
2017-10-06 11:02               ` Martin Ågren
2017-10-05 20:32           ` [PATCH v2 11/12] read-cache: leave lock in right state in `write_locked_index()` Martin Ågren
2017-10-06  2:01             ` Junio C Hamano
2017-10-06 11:04               ` Martin Ågren
2017-10-06 12:02                 ` Junio C Hamano
2017-10-06 19:44                   ` Martin Ågren
2017-10-06 20:12                     ` [PATCH v3 00/12] Re: various lockfile-leaks and -fixes Martin Ågren
2017-10-06 20:12                       ` [PATCH v3 01/12] sha1_file: do not leak `lock_file` Martin Ågren
2017-10-06 20:12                       ` [PATCH v3 02/12] treewide: prefer lockfiles on the stack Martin Ågren
2017-10-06 20:12                       ` [PATCH v3 03/12] lockfile: fix documentation on `close_lock_file_gently()` Martin Ågren
2017-10-06 20:12                       ` [PATCH v3 04/12] tempfile: fix documentation on `delete_tempfile()` Martin Ågren
2017-10-06 20:12                       ` [PATCH v3 05/12] checkout-index: simplify locking logic Martin Ågren
2017-10-06 20:12                       ` [PATCH v3 06/12] cache-tree: " Martin Ågren
2017-10-06 20:12                       ` [PATCH v3 07/12] apply: move lockfile into `apply_state` Martin Ågren
2017-10-06 20:12                       ` [PATCH v3 08/12] apply: remove `newfd` from `struct apply_state` Martin Ågren
2017-10-06 20:12                       ` [PATCH v3 09/12] cache.h: document `write_locked_index()` Martin Ågren
2017-10-06 20:12                       ` [PATCH v3 10/12] read-cache: drop explicit `CLOSE_LOCK`-flag Martin Ågren
2017-10-06 20:12                       ` [PATCH v3 11/12] read-cache: leave lock in right state in `write_locked_index()` Martin Ågren
2017-10-06 20:12                       ` [PATCH v3 12/12] read_cache: roll back lock in `update_index_if_able()` Martin Ågren
2017-10-05 20:32           ` [PATCH v2 " Martin Ågren
