git@vger.kernel.org mailing list mirror (one of many)
* [PATCH v6 0/6] blame: add the ability to ignore commits
@ 2019-04-10 16:24 Barret Rhoden
  2019-04-14 21:10 ` Michael Platings
  0 siblings, 1 reply; 10+ messages in thread
From: Barret Rhoden @ 2019-04-10 16:24 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, David Kastrup, Jeff King,
	Jeff Smith, Johannes Schindelin, Junio C Hamano,
	René Scharfe, Stefan Beller, Michael Platings

This patch set adds the ability to ignore a set of commits and their
changes when blaming.  This can be used to ignore a commit deemed 'not
interesting,' such as a bulk reformatting.

The last patch in the series changes the heuristic by which ignored
lines are attributed to specific lines in the parent commit.  This
increases the likelihood of blaming the 'right' commit, where 'right' is
subjective.  Perhaps we want another algorithm.  I'm using a relatively
simple one that builds on the basics of Michael's fingerprinting code,
but he has proposed a more sophisticated algorithm of his own.

v5 -> v6
v5: https://public-inbox.org/git/20190403160207.149174-1-brho@google.com/
- The "guess" heuristic can now look anywhere in the parent file for a
  matching line, instead of just looking in the parent chunk.  The
  chunks passed to blame_chunk() are smaller than you'd expect: they are
  just adjacent '-' and '+' sections.  Any diff 'context' is a chunk
  boundary.
- Fixed the parent_len calculation.  I had been basing it off of
  e->num_lines, and treating the blame entry as if it was the target
  chunk, but the individual blame entries are subsets of the chunk.  I
  just pass the parent chunk info all the way through now.
- Use Michael's newest fingerprinting code, which is a large speedup.
- Made a config option to zero the hash for an ignored line when the
  heuristic could not find a line in the parent to blame.  Previously,
  this was always 'on'.
- Moved the for loop variable declarations out of the for ().
- Rebased on master.

v4 -> v5
v4: https://public-inbox.org/git/20190226170648.211847-1-brho@google.com/
- Changed the handling of blame_entries from ignored commits so that you
  can use any algorithm you want to map lines from the diff chunk to
  different parts of the parent commit.
- fill_origin_blob() optionally can track the offsets of the start of
  every line, similar to what we do in the scoreboard for the final
  file.  This can be used by the matching algorithm.  It has no effect
  if you are not ignoring commits.
- RFC of a fuzzy/fingerprinting heuristic, based on Michael Platings's
  RFC at https://public-inbox.org/git/20190324235020.49706-2-michael@platin.gs/
- Made the tests that detect unblamable entries more resilient to
  different heuristics.
- Fixed a few bugs:
	- tests were not grepping the line number from --line-porcelain
	  correctly.
	- In the old version, when I passed the "upper" part of the
	  blame entry to the target and marked it unblamable, the
	  suspect was incorrectly marked as the parent.  The s_lno was
	  also in the parent's address space.

v3 -> v4
v3: https://public-inbox.org/git/20190212222722.240676-1-brho@google.com/
- Cleaned up the tests, especially removing usage of sed -i.
- Squashed the 'tests' commit into the other blame commits.  Let me know
  if you'd like further squashing.

v2 -> v3
v2: https://public-inbox.org/git/20190117202919.157326-1-brho@google.com/
- SHA-1 -> "object name", and fixed other comments
- Changed error string for oidset_parse_file()
- Adjusted existing fsck tests to handle those string changes
- Return hash of all zeros for lines we know we cannot identify
- Allow repeated options for blame.ignoreRevsFile and
  --ignore-revs-file.  An empty file name resets the list.  Config
  options are parsed before the command line options.
- Rebased to master
- Added regression tests

v1 -> v2
v1: https://public-inbox.org/git/20190107213013.231514-1-brho@google.com/
- extracted the skiplist from fsck to avoid duplicating code
- overhauled the interface and options
- split out markIgnoredFiles
- handled merges


Barret Rhoden (6):
  Move init_skiplist() outside of fsck
  blame: use a helper function in blame_chunk()
  blame: add the ability to ignore commits and their changes
  blame: add config options to handle output for ignored lines
  blame: optionally track line fingerprints during fill_blame_origin()
  blame: use a fingerprint heuristic to match ignored lines

 Documentation/blame-options.txt |  18 ++
 Documentation/config/blame.txt  |  16 ++
 Documentation/git-blame.txt     |   1 +
 blame.c                         | 446 ++++++++++++++++++++++++++++----
 blame.h                         |   6 +
 builtin/blame.c                 |  56 ++++
 fsck.c                          |  37 +--
 oidset.c                        |  35 +++
 oidset.h                        |   8 +
 t/t5504-fetch-receive-strict.sh |  14 +-
 t/t8013-blame-ignore-revs.sh    | 202 +++++++++++++++
 11 files changed, 740 insertions(+), 99 deletions(-)
 create mode 100755 t/t8013-blame-ignore-revs.sh

-- 
2.21.0.392.gf8f6787159e-goog


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v6 0/6] blame: add the ability to ignore commits
  2019-04-10 16:24 Barret Rhoden
@ 2019-04-14 21:10 ` Michael Platings
  2019-04-15 13:23   ` Barret Rhoden
  0 siblings, 1 reply; 10+ messages in thread
From: Michael Platings @ 2019-04-14 21:10 UTC (permalink / raw)
  To: Barret Rhoden
  Cc: Git mailing list, Ævar Arnfjörð Bjarmason,
	David Kastrup, Jeff King, Jeff Smith, Johannes Schindelin,
	Junio C Hamano, René Scharfe, Stefan Beller

Hi Barret,

This works pretty well for the typical reformatting use case now. I've
run it over every commit of every .c file in the git project root,
both forwards and backwards with every combination of -w/-M/-C and
can't get it to crash so I think it's good in that respect.

However, it can still attribute lines to the wrong parent line. See
https://pypi.org/project/autopep8/#usage for an example reformatting
that it gets a bit confused on. The patch I submitted handles this
case correctly because it uses information about the more similar
lines to decide how more ambiguous lines should be matched.
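
That "most certain line first" idea can be sketched roughly as follows.
This is illustrative only (names and the pivot rule are mine, not code
from either patch; the real patch scores certainty as best similarity
minus second-best, while this sketch just takes the highest raw score):
pick the most confidently matched line, fix it as a pivot, and recurse
on the regions on either side so that matches never cross it.

```c
/*
 * Illustrative only: sim is an n_target x n_parent similarity matrix,
 * and result[i] is the parent line matched to target line i (-1 if
 * unmatched).  The pivot is simply the highest raw similarity here.
 */
void match_recurse(const int *sim, int n_parent,
		   int t_lo, int t_hi, int p_lo, int p_hi,
		   int *result)
{
	int best = 0, best_i = -1, best_j = -1;
	int i, j;

	for (i = t_lo; i < t_hi; i++)
		for (j = p_lo; j < p_hi; j++)
			if (sim[i * n_parent + j] > best) {
				best = sim[i * n_parent + j];
				best_i = i;
				best_j = j;
			}
	if (best_i < 0)
		return;	/* nothing in this region is similar enough */
	result[best_i] = best_j;
	/* Recurse on either side of the pivot so matches stay ordered. */
	match_recurse(sim, n_parent, t_lo, best_i, p_lo, best_j, result);
	match_recurse(sim, n_parent, best_i + 1, t_hi, best_j + 1, p_hi, result);
}
```

The effect is that a fairly similar line cannot steal a parent line on
the wrong side of a pivot that an earlier, more certain match already
pinned down.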

You also gave an example of:

        commit-a 11) void new_func_1(void *x, void *y);
        commit-b 12) void new_func_2(void *x, void *y);

Being reformatted to:

        commit-a 11) void new_func_1(void *x,
        commit-b 12)                 void *y);
        commit-b 13) void new_func_2(void *x,
        commit-b 14)                 void *y);

The patch I submitted handles this case correctly, assigning line 12
to commit-a because it scales the parent line numbers according to the
relative diff chunk sizes instead of assuming a 1-1 mapping.
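
The scaling can be sketched with a hypothetical helper (not code from
either patch) that maps the i-th line of a target chunk onto the parent
chunk proportionally rather than by a fixed offset:

```c
/*
 * Illustrative sketch: map the target_idx-th line (0-based) of a
 * target diff chunk of target_len lines onto a parent chunk of
 * parent_len lines starting at parent_start, by scaling line
 * positions instead of assuming a 1-1 offset.  Using the midpoint
 * of the target line's "slot" keeps the mapping symmetric.
 */
int scaled_parent_line(int target_idx, int target_len,
		       int parent_start, int parent_len)
{
	return parent_start +
	       ((2 * target_idx + 1) * parent_len) / (2 * target_len);
}
```

In the example above, the 4-line reformatted chunk replaces a 2-line
parent chunk, so target line 12 (index 1) scales back to parent line 11,
i.e. commit-a, while lines 13 and 14 scale to parent line 12.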

So I do ask that you incorporate more of my patch, including the test
code. It is more complex but I hope this demonstrates that there are
reasons for that. Happy to provide more examples or explanation if it
would help. On the other hand if you have examples where it falls
short then I'd be interested to know.

The other major use case that I'm interested in is renaming. In this
case, the git-hyper-blame approach of mapping line numbers 1-1 works
perfectly. Here's an example. Before:

        commit-a 11) Position MyClass::location(Offset O) {
        commit-b 12)    return P + O;
        commit-c 13) }

After:

        commit-a 11) Position MyClass::location(Offset offset) {
        commit-a 12)    return position + offset;
        commit-c 13) }

With the fuzzy matching, line 12 gets incorrectly matched to parent
line 11 because the similarity of "position" and "offset" outweighs
the similarity of "return". I'm considering adding even more
complexity to my patch such that parts of a line that have already
been matched can't be matched again by other lines.

But the other possibility is that we let the user choose the
heuristic. For a commit where they know that line numbers haven't
changed they could choose 1-1 matching, while for a reformatting
commit they could use fuzzy matching. I welcome your thoughts.

-Michael


* Re: [PATCH v6 0/6] blame: add the ability to ignore commits
  2019-04-14 21:10 ` Michael Platings
@ 2019-04-15 13:23   ` Barret Rhoden
  2019-04-15 21:54     ` Michael Platings
  0 siblings, 1 reply; 10+ messages in thread
From: Barret Rhoden @ 2019-04-15 13:23 UTC (permalink / raw)
  To: Michael Platings
  Cc: Git mailing list, Ævar Arnfjörð Bjarmason,
	David Kastrup, Jeff King, Jeff Smith, Johannes Schindelin,
	Junio C Hamano, René Scharfe, Stefan Beller

Hi Michael -

On 4/14/19 5:10 PM, Michael Platings wrote:
> Hi Barret,
> 
> This works pretty well for the typical reformatting use case now. I've
> run it over every commit of every .c file in the git project root,
> both forwards and backwards with every combination of -w/-M/-C and
> can't get it to crash so I think it's good in that respect.
> 
> However, it can still attribute lines to the wrong parent line. See
> https://pypi.org/project/autopep8/#usage for an example reformatting
> that it gets a bit confused on. The patch I submitted handles this
> case correctly because it uses information about the more similar
> lines to decide how more ambiguous lines should be matched.

Yeah - I ran your tests against it and noticed a few cases weren't handled.

> You also gave an example of:
> 
>          commit-a 11) void new_func_1(void *x, void *y);
>          commit-b 12) void new_func_2(void *x, void *y);
> 
> Being reformatted to:
> 
>          commit-a 11) void new_func_1(void *x,
>          commit-b 12)                 void *y);
>          commit-b 13) void new_func_2(void *x,
>          commit-b 14)                 void *y);
> 
> The patch I submitted handles this case correctly, assigning line 12
> to commit-a because it scales the parent line numbers according to the
> relative diff chunk sizes instead of assuming a 1-1 mapping.
> 
> So I do ask that you incorporate more of my patch, including the test
> code. It is more complex but I hope this demonstrates that there are
> reasons for that. Happy to provide more examples or explanation if it
> would help. On the other hand if you have examples where it falls
> short then I'd be interested to know.

My main concerns:
- Can your version reach outside of a diff chunk, such as in my "header 
moved" case?  That was a simplified version of something that pops up in 
a major file reformatting of mine, where a "return 0;" was matched as 
context and broke a diff chunk up into two blame_chunk() calls.  I tend 
to think of this as the "split diff chunk" case.

- Complexity and possibly performance.  The recursive stuff made me 
wonder about it a bit.  It's no reason not to use it; I just need to 
check it more closely.

Is the latest version of your stuff still the one you posted last week 
or so?  If we had a patch applied on top of this one with something like 
an ifdef or a dirt-simple toggle, we could play with both of them in the 
same codebase.

Similarly, do you think the "two pass" approach I have (check the chunk, 
then check the parent file) would work with your recursive partitioning 
style?  That might make yours able to handle the "split diff chunk" case.

> The other major use case that I'm interested in is renaming. In this
> case, the git-hyper-blame approach of mapping line numbers 1-1 works
> perfectly. Here's an example. Before:
> 
>          commit-a 11) Position MyClass::location(Offset O) {
>          commit-b 12)    return P + O;
>          commit-c 13) }
> 
> After:
> 
>          commit-a 11) Position MyClass::location(Offset offset) {
>          commit-a 12)    return position + offset;
>          commit-c 13) }
> 
> With the fuzzy matching, line 12 gets incorrectly matched to parent
> line 11 because the similarity of "position" and "offset" outweighs
> the similarity of "return". I'm considering adding even more
> complexity to my patch such that parts of a line that have already
> been matched can't be matched again by other lines.
> 
> But the other possibility is that we let the user choose the
> heuristic. For a commit where they know that line numbers haven't
> changed they could choose 1-1 matching, while for a reformatting
> commit they could use fuzzy matching. I welcome your thoughts.

No algorithm will work for all cases.  The one you just gave had the 
simple heuristic working better than a complex one.  We could make it 
more complex, but then another example may be worse.  I can live with 
some inaccuracy in exchange for simplicity.

I ran into something similar with the THRESHOLD #defines.  You want it 
to be able to match certain things, but not other things.  How similar 
does something have to be?  Should it depend on how far away the 
matching line is from the source line?  I went with a "close enough is 
good enough" approach, since we're marking ignored lines with a '*' or 
something anyway, so the user should know not to trust it 100%.
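
That trade-off can be pictured with a hypothetical distance-weighted
score.  The names and constants here are illustrative, not the patch's
actual THRESHOLD values, though the distance-discount shape mirrors the
`similarity * (1000 - distance)` weighting the fuzzy-matching code uses:

```c
#include <stdlib.h>

/* Illustrative constants, not the patch's actual values. */
#define MAX_OFFSET	1000
#define THRESHOLD	10

/* Discount a raw similarity score by the candidate's distance from
 * the line a 1-1 mapping would predict. */
int weighted_score(int raw_similarity, int candidate_line, int expected_line)
{
	return raw_similarity * (MAX_OFFSET - abs(candidate_line - expected_line));
}

/* "Close enough is good enough": accept any candidate whose weighted
 * score clears the threshold. */
int is_good_enough(int raw_similarity, int candidate_line, int expected_line)
{
	return weighted_score(raw_similarity, candidate_line, expected_line) >
	       THRESHOLD * MAX_OFFSET;
}
```

A highly similar line far from its expected position can still lose to
a weaker but nearby match, which is exactly the tuning question above.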

Thanks,

Barret



* Re: [PATCH v6 0/6] blame: add the ability to ignore commits
  2019-04-15 13:23   ` Barret Rhoden
@ 2019-04-15 21:54     ` Michael Platings
  0 siblings, 0 replies; 10+ messages in thread
From: Michael Platings @ 2019-04-15 21:54 UTC (permalink / raw)
  To: Barret Rhoden
  Cc: Git mailing list, Ævar Arnfjörð Bjarmason,
	David Kastrup, Jeff King, Jeff Smith, Johannes Schindelin,
	Junio C Hamano, René Scharfe, Stefan Beller

> My main concerns:
> - Can your version reach outside of a diff chunk?

Currently no. It's optimised for reformatting and renaming, both of
which preserve ordering. I could look into allowing disordered matches
where the similarity is high, while still being biased towards ordered
matches. If you can post more examples that would be helpful.

> - Complexity and possibly performance.  The recursive stuff made me
> wonder about it a bit.  It's no reason not to use it, just need to check
> it more closely.

Complexity I can't deny; I can only mitigate it with documentation and
comments. I optimised the code pretty heavily and tested it on some
contrived worst-case scenarios, and the performance was still good, so
I'm not worried about that.

> Is the latest version of your stuff still the one you posted last week
> or so?

Yes. But reaching outside the chunk might lead to a significantly
different API in the next version...

> Similarly, do you think the "two pass" approach I have (check the chunk,
> then check the parent file) would work with your recursive partitioning
> style?  That might make yours able to handle the "split diff chunk" case.

Yes, should do. I'll see what I can come up with this week.

> No algorithm will work for all cases.  The one you just gave had the
> simple heuristic working better than a complex one.  We could make it
> more complex, but then another example may be worse.  I can live with
> some inaccuracy in exchange for simplicity.

Exactly, no algorithm will work for all cases. So what I'm suggesting
is that it might be best to let the user choose which heuristic is
appropriate for a given commit. If they know that the simple heuristic
works best then perhaps we should let them choose that rather than
only offering a one-size-fits-all option. But if we do want to go for
one-size-fits-all then I'm very keen to make sure it at least solves
the specific cases that we know about.


* RE: [PATCH v6 0/6] blame: add the ability to ignore commits
       [not found] <[PATCH v6 0/6] blame: add the ability to ignore commits>
@ 2019-04-22 22:26 ` michael
  2019-04-23 14:23   ` Barret Rhoden
                     ` (4 more replies)
  0 siblings, 5 replies; 10+ messages in thread
From: michael @ 2019-04-22 22:26 UTC (permalink / raw)
  To: git
  Cc: Jeff King, Stefan Beller, Jeff Smith, Junio C Hamano,
	René Scharfe, Ævar Arnfjörð Bjarmason,
	David Kastrup, Johannes Schindelin, Barret Rhoden,
	Michael Platings

From: Michael Platings <michael@platin.gs>

Hi Barret,

This patch is on top of your patch v6 4/6.

Previously I pointed out that my code couldn't handle this case correctly:
Before:

        commit-a 11) Position MyClass::location(Offset O) {
        commit-b 12)    return P + O;
        commit-c 13) }

After:

        commit-a 11) Position MyClass::location(Offset offset) {
        commit-a 12)    return position + offset;
        commit-c 13) }

With this patch, line 12 is now correctly matched to commit-b even though it is
more similar to line 11.

The significant change here is that when a line is matched, its fingerprint is
subtracted from the matched parent line's fingerprint. This prevents two lines
from matching the same part of a parent line.

This algorithm is now very good at matching lines *as long as line ordering is
unchanged*. When matching lines in a single diff chunk it makes sense to assume
that lines are ordered because if they're not then there's a good chance the
true match is outside the chunk. I'm very happy with how this algorithm behaves
and I'm struggling to fault it for the refactoring & renaming use cases.

To address reordered lines I suggest a combination of this algorithm and your
algorithm - in the first pass my algorithm tries to match lines within a
single chunk, and in the second pass your algorithm tries to find matches for
unblamed lines out of order and outside their chunk.

It would also be possible to adapt my algorithm to not assume that lines are
ordered, but I think that would make it O(n^2) whereas it's typically
O(n log n) right now. But I could dig into that more if you prefer.

Thanks,
-Michael
---
 Makefile |   1 +
 blame.c  |  95 +++++++-----
 blame.h  |   2 +
 fuzzy.c  | 434 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 fuzzy.h  |  18 +++
 5 files changed, 515 insertions(+), 35 deletions(-)
 create mode 100644 fuzzy.c
 create mode 100644 fuzzy.h

diff --git a/Makefile b/Makefile
index 3e03290d8f..4725060c54 100644
--- a/Makefile
+++ b/Makefile
@@ -893,6 +893,7 @@ LIB_OBJS += fetch-object.o
 LIB_OBJS += fetch-pack.o
 LIB_OBJS += fsck.o
 LIB_OBJS += fsmonitor.o
+LIB_OBJS += fuzzy.o
 LIB_OBJS += gettext.o
 LIB_OBJS += gpg-interface.o
 LIB_OBJS += graph.o
diff --git a/blame.c b/blame.c
index a98ae00e2c..43cdfcc259 100644
--- a/blame.c
+++ b/blame.c
@@ -9,6 +9,7 @@
 #include "blame.h"
 #include "alloc.h"
 #include "commit-slab.h"
+#include "fuzzy.h"
 
 define_commit_slab(blame_suspects, struct blame_origin *);
 static struct blame_suspects blame_suspects;
@@ -311,12 +312,42 @@ static int diff_hunks(mmfile_t *file_a, mmfile_t *file_b,
 	return xdi_diff(file_a, file_b, &xpp, &xecfg, &ecb);
 }
 
+static const char *get_next_line(const char *start, const char *end)
+{
+	const char *nl = memchr(start, '\n', end - start);
+
+	return nl ? nl + 1 : end;
+}
+
+static int find_line_starts(int **line_starts, const char *buf,
+			    unsigned long len)
+{
+	const char *end = buf + len;
+	const char *p;
+	int *lineno;
+	int num = 0;
+
+	for (p = buf; p < end; p = get_next_line(p, end))
+		num++;
+
+	ALLOC_ARRAY(*line_starts, num + 1);
+	lineno = *line_starts;
+
+	for (p = buf; p < end; p = get_next_line(p, end))
+		*lineno++ = p - buf;
+
+	*lineno = len;
+
+	return num;
+}
+
 /*
  * Given an origin, prepare mmfile_t structure to be used by the
  * diff machinery
  */
 static void fill_origin_blob(struct diff_options *opt,
-			     struct blame_origin *o, mmfile_t *file, int *num_read_blob)
+			     struct blame_origin *o, mmfile_t *file,
+			     int *num_read_blob, int fill_line_starts)
 {
 	if (!o->file.ptr) {
 		enum object_type type;
@@ -340,11 +371,16 @@ static void fill_origin_blob(struct diff_options *opt,
 	}
 	else
 		*file = o->file;
+	if (fill_line_starts && !o->line_starts)
+		o->num_lines = find_line_starts(&o->line_starts, o->file.ptr,
+						o->file.size);
 }
 
 static void drop_origin_blob(struct blame_origin *o)
 {
 	FREE_AND_NULL(o->file.ptr);
+	FREE_AND_NULL(o->line_starts);
+	o->num_lines = 0;
 }
 
 /*
@@ -891,19 +927,27 @@ static void guess_line_blames(struct blame_entry *e,
 			      int offset, int parent_slno, int parent_len,
 			      struct blame_line_tracker *line_blames)
 {
-	int i, parent_idx;
+	int i;
+	int *matching_lines = fuzzy_find_matching_lines(parent->file.ptr,
+							target->file.ptr,
+							parent->line_starts,
+							target->line_starts,
+							e->s_lno + offset,
+							e->s_lno,
+							parent_len,
+							e->num_lines);
 
 	for (i = 0; i < e->num_lines; i++) {
-		parent_idx = e->s_lno + i + offset;
-		if (parent_slno <= parent_idx &&
-		    parent_idx < parent_slno + parent_len) {
+		if (matching_lines[i] >= 0) {
 			line_blames[i].is_parent = 1;
-			line_blames[i].s_lno = parent_idx;
+			line_blames[i].s_lno = matching_lines[i];
 		} else {
 			line_blames[i].is_parent = 0;
 			line_blames[i].s_lno = e->s_lno + i;
 		}
 	}
+
+	free(matching_lines);
 }
 
 /*
@@ -1136,8 +1180,10 @@ static void pass_blame_to_parent(struct blame_scoreboard *sb,
 	d.ignore_diffs = ignore_diffs;
 	d.dstq = &newdest; d.srcq = &target->suspects;
 
-	fill_origin_blob(&sb->revs->diffopt, parent, &file_p, &sb->num_read_blob);
-	fill_origin_blob(&sb->revs->diffopt, target, &file_o, &sb->num_read_blob);
+	fill_origin_blob(&sb->revs->diffopt, parent, &file_p,
+			 &sb->num_read_blob, ignore_diffs);
+	fill_origin_blob(&sb->revs->diffopt, target, &file_o,
+			 &sb->num_read_blob, ignore_diffs);
 	sb->num_get_patch++;
 
 	if (diff_hunks(&file_p, &file_o, blame_chunk_cb, &d, sb->xdl_opts))
@@ -1348,7 +1394,8 @@ static void find_move_in_parent(struct blame_scoreboard *sb,
 	if (!unblamed)
 		return; /* nothing remains for this target */
 
-	fill_origin_blob(&sb->revs->diffopt, parent, &file_p, &sb->num_read_blob);
+	fill_origin_blob(&sb->revs->diffopt, parent, &file_p,
+			 &sb->num_read_blob, 0);
 	if (!file_p.ptr)
 		return;
 
@@ -1477,7 +1524,8 @@ static void find_copy_in_parent(struct blame_scoreboard *sb,
 			norigin = get_origin(parent, p->one->path);
 			oidcpy(&norigin->blob_oid, &p->one->oid);
 			norigin->mode = p->one->mode;
-			fill_origin_blob(&sb->revs->diffopt, norigin, &file_p, &sb->num_read_blob);
+			fill_origin_blob(&sb->revs->diffopt, norigin, &file_p,
+					 &sb->num_read_blob, 0);
 			if (!file_p.ptr)
 				continue;
 
@@ -1816,37 +1864,14 @@ void assign_blame(struct blame_scoreboard *sb, int opt)
 	}
 }
 
-static const char *get_next_line(const char *start, const char *end)
-{
-	const char *nl = memchr(start, '\n', end - start);
-	return nl ? nl + 1 : end;
-}
-
 /*
  * To allow quick access to the contents of nth line in the
  * final image, prepare an index in the scoreboard.
  */
 static int prepare_lines(struct blame_scoreboard *sb)
 {
-	const char *buf = sb->final_buf;
-	unsigned long len = sb->final_buf_size;
-	const char *end = buf + len;
-	const char *p;
-	int *lineno;
-	int num = 0;
-
-	for (p = buf; p < end; p = get_next_line(p, end))
-		num++;
-
-	ALLOC_ARRAY(sb->lineno, num + 1);
-	lineno = sb->lineno;
-
-	for (p = buf; p < end; p = get_next_line(p, end))
-		*lineno++ = p - buf;
-
-	*lineno = len;
-
-	sb->num_lines = num;
+	sb->num_lines = find_line_starts(&sb->lineno, sb->final_buf,
+					 sb->final_buf_size);
 	return sb->num_lines;
 }
 
diff --git a/blame.h b/blame.h
index 53df8b4c5b..f7755920c9 100644
--- a/blame.h
+++ b/blame.h
@@ -51,6 +51,8 @@ struct blame_origin {
 	 */
 	struct blame_entry *suspects;
 	mmfile_t file;
+	int num_lines;
+	int *line_starts;
 	struct object_id blob_oid;
 	unsigned mode;
 	/* guilty gets set when shipping any suspects to the final
diff --git a/fuzzy.c b/fuzzy.c
new file mode 100644
index 0000000000..c5b09a0eb7
--- /dev/null
+++ b/fuzzy.c
@@ -0,0 +1,434 @@
+#include "fuzzy.h"
+#include <ctype.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include <string.h>
+#include "git-compat-util.h"
+#include "hashmap.h"
+
+struct fingerprint_entry {
+	struct hashmap_entry entry;
+	int count;
+};
+struct fingerprint {
+	struct hashmap map;
+	struct fingerprint_entry *entries;
+};
+
+static void get_fingerprint(struct fingerprint *result,
+			    const char *line_begin,
+			    const char *line_end) {
+	unsigned hash;
+	char c0, c1;
+	int map_entry_count = line_end - line_begin - 1;
+	struct fingerprint_entry *entry = xcalloc(map_entry_count,
+		sizeof(struct fingerprint_entry));
+	struct fingerprint_entry *found_entry;
+	hashmap_init(&result->map, NULL, NULL, map_entry_count);
+	result->entries = entry;
+	for (const char *p = line_begin; p + 1 < line_end; ++p, ++entry) {
+		c0 = *p;
+		c1 = *(p + 1);
+		/* Ignore whitespace pairs */
+		if (isspace(c0) && isspace(c1))
+			continue;
+		hash = tolower(c0) | (tolower(c1) << 8);
+		hashmap_entry_init(entry, hash);
+
+		if ((found_entry = hashmap_get(&result->map, entry, NULL))) {
+			found_entry->count += 1;
+		}
+		else {
+			entry->count = 1;
+			hashmap_add(&result->map, entry);
+		}
+	}
+}
+
+static void free_fingerprint(struct fingerprint *f) {
+	hashmap_free(&f->map, 0);
+	free(f->entries);
+}
+
+static int fingerprint_similarity(struct fingerprint *a,
+				  struct fingerprint *b) {
+	int intersection = 0;
+	struct hashmap_iter iter;
+	const struct fingerprint_entry *entry_a, *entry_b;
+	hashmap_iter_init(&b->map, &iter);
+
+	while ((entry_b = hashmap_iter_next(&iter))) {
+		if ((entry_a = hashmap_get(&a->map, entry_b, NULL))) {
+			intersection += entry_a->count < entry_b->count ?
+					entry_a->count : entry_b->count;
+		}
+	}
+	return intersection;
+}
+
+static void fingerprint_subtract(struct fingerprint *a,
+				 struct fingerprint *b) {
+	struct hashmap_iter iter;
+	struct fingerprint_entry *entry_a;
+	const struct fingerprint_entry *entry_b;
+	hashmap_iter_init(&b->map, &iter);
+
+	while ((entry_b = hashmap_iter_next(&iter))) {
+		if ((entry_a = hashmap_get(&a->map, entry_b, NULL))) {
+			if (entry_a->count <= entry_b->count) {
+				hashmap_remove(&a->map, entry_b, NULL);
+			}
+			else {
+				entry_a->count -= entry_b->count;
+			}
+		}
+	}
+}
+
+static void get_line_fingerprints(struct fingerprint *fingerprints,
+				  const char *content,
+				  const int *line_starts,
+				  long chunk_start,
+				  long chunk_length) {
+	int i;
+	const char *linestart, *lineend;
+	line_starts += chunk_start;
+	for (i = 0; i != chunk_length; ++i) {
+		linestart = content + line_starts[i];
+		lineend = content + line_starts[i + 1];
+		get_fingerprint(fingerprints + i, linestart, lineend);
+	}
+}
+
+static int get_closest_local_line(int start_a,
+			    int local_line_b,
+			    int closest_line_calc_offset1,
+			    int closest_line_calc_offset2,
+			    int closest_line_calc_numerator,
+			    int closest_line_calc_denominator) {
+	return ((local_line_b + closest_line_calc_offset1) * 2 + 1) *
+		closest_line_calc_numerator /
+		(closest_line_calc_denominator * 2) +
+		closest_line_calc_offset2 - start_a;
+}
+
+static int *get_similarity(int *similarities, int max_search_distance_a,
+			   int local_line_a, int local_line_b,
+			   int closest_local_line_a) {
+	assert(abs(local_line_a - closest_local_line_a) <= max_search_distance_a);
+	return similarities + local_line_a - closest_local_line_a +
+		max_search_distance_a +
+		local_line_b * (max_search_distance_a * 2 + 1);
+}
+
+#define CERTAIN_NOTHING_MATCHES -2
+#define CERTAINTY_NOT_CALCULATED -1
+
+static void find_best_line_matches(const int max_search_distance_a,
+				   int start_a,
+				   int length_a,
+				   int local_line_b,
+				   struct fingerprint *fingerprints_a,
+				   struct fingerprint *fingerprints_b,
+				   int *similarities,
+				   int *certainties,
+				   int *second_best_result,
+				   int *result,
+				   int closest_line_calc_offset1,
+				   int closest_line_calc_offset2,
+				   int closest_line_calc_numerator,
+				   int closest_line_calc_denominator) {
+
+	int i, search_start, search_end, closest_local_line_a, *similarity,
+		best_similarity = 0, second_best_similarity = 0,
+		best_similarity_index = 0, second_best_similarity_index = 0;
+
+	if (certainties[local_line_b] != CERTAINTY_NOT_CALCULATED)
+		return;
+
+	closest_local_line_a = get_closest_local_line(start_a,
+					  local_line_b,
+					  closest_line_calc_offset1,
+					  closest_line_calc_offset2,
+					  closest_line_calc_numerator,
+					  closest_line_calc_denominator);
+
+	search_start = closest_local_line_a - max_search_distance_a;
+	if (search_start < 0)
+		search_start = 0;
+
+	search_end = closest_local_line_a + max_search_distance_a + 1;
+	if (search_end > length_a)
+		search_end = length_a;
+
+	for (i = search_start; i < search_end; ++i) {
+		similarity = get_similarity(similarities, max_search_distance_a,
+					    i, local_line_b,
+					    closest_local_line_a);
+		if (*similarity == -1) {
+			*similarity = fingerprint_similarity(
+				fingerprints_b + local_line_b,
+				fingerprints_a + i) *
+				(1000 - abs(i - closest_local_line_a));
+		}
+		if (*similarity > best_similarity) {
+			second_best_similarity = best_similarity;
+			second_best_similarity_index = best_similarity_index;
+			best_similarity = *similarity;
+			best_similarity_index = i;
+		}
+		else if (*similarity > second_best_similarity) {
+			second_best_similarity = *similarity;
+			second_best_similarity_index = i;
+		}
+	}
+
+	if (best_similarity == 0) {
+		certainties[local_line_b] = CERTAIN_NOTHING_MATCHES;
+		result[local_line_b] = -1;
+	}
+	else {
+		certainties[local_line_b] = best_similarity * 2 -
+			second_best_similarity;
+		result[local_line_b] = start_a + best_similarity_index;
+		second_best_result[local_line_b] =
+			start_a + second_best_similarity_index;
+	}
+}
+
+/*
+ * This finds the line that we can match with the most confidence, and
+ * uses it as a partition. It then calls itself on the lines on either side of
+ * that partition. In this way we avoid lines appearing out of order, and
+ * retain a sensible line ordering.
+ */
+static void fuzzy_find_matching_lines_recurse(
+	int max_search_distance_a,
+	int max_search_distance_b,
+	int start_a, int start_b,
+	int length_a, int length_b,
+	struct fingerprint *fingerprints_a,
+	struct fingerprint *fingerprints_b,
+	int *similarities,
+	int *certainties,
+	int *second_best_result,
+	int *result,
+	int closest_line_calc_offset1,
+	int closest_line_calc_offset2,
+	int closest_line_calc_numerator,
+	int closest_line_calc_denominator) {
+
+	int i, barrier, invalidate_min, invalidate_max, offset_b,
+		second_half_start_a, second_half_start_b,
+		second_half_length_a, second_half_length_b,
+		most_certain_line_a, most_certain_local_line_b = -1,
+		most_certain_line_certainty = -1,
+		closest_local_line_a;
+
+	for (i = 0; i < length_b; ++i) {
+		find_best_line_matches(max_search_distance_a,
+				       start_a,
+				       length_a,
+				       i,
+				       fingerprints_a,
+				       fingerprints_b,
+				       similarities,
+				       certainties,
+				       second_best_result,
+				       result,
+				       closest_line_calc_offset1,
+				       closest_line_calc_offset2,
+				       closest_line_calc_numerator,
+				       closest_line_calc_denominator);
+
+		if (certainties[i] > most_certain_line_certainty) {
+			most_certain_line_certainty = certainties[i];
+			most_certain_local_line_b = i;
+		}
+	}
+
+	if (most_certain_local_line_b == -1) {
+		return;
+	}
+
+	most_certain_line_a = result[most_certain_local_line_b];
+
+	/* Subtract the most certain line's fingerprint in b from the
+	 matched fingerprint in a. This means that other lines in b can't also
+	 match the same parts of the line in a. */
+	fingerprint_subtract(fingerprints_a + most_certain_line_a - start_a,
+			     fingerprints_b + most_certain_local_line_b);
+
+
+	/* Invalidate results that may be affected by the choice of pivot. */
+	invalidate_min = most_certain_local_line_b - max_search_distance_b;
+	invalidate_max = most_certain_local_line_b + max_search_distance_b + 1;
+	if (invalidate_min < 0)
+		invalidate_min = 0;
+	if (invalidate_max > length_b)
+		invalidate_max = length_b;
+
+	for (i = invalidate_min; i < invalidate_max; ++i) {
+		closest_local_line_a = get_closest_local_line(
+			start_a, i,
+			closest_line_calc_offset1,
+			closest_line_calc_offset2,
+			closest_line_calc_numerator,
+			closest_line_calc_denominator);
+		*get_similarity(similarities, max_search_distance_a,
+				most_certain_line_a - start_a, i,
+				closest_local_line_a) = -1;
+	}
+
+	barrier = most_certain_line_a;
+
+	for (i = most_certain_local_line_b - 1; i >= invalidate_min; --i) {
+		if (certainties[i] >= 0 &&
+		    (result[i] >= barrier || second_best_result[i] >= barrier)) {
+			    certainties[i] = CERTAINTY_NOT_CALCULATED;
+			    barrier = result[i];
+			    invalidate_min = i - max_search_distance_b;
+			    if (invalidate_min < 0)
+				    invalidate_min = 0;
+		    }
+	}
+
+	barrier = most_certain_line_a;
+
+	for (i = most_certain_local_line_b + 1; i < invalidate_max; ++i) {
+		if (certainties[i] >= 0 &&
+		    (result[i] <= barrier || second_best_result[i] <= barrier)) {
+			    certainties[i] = CERTAINTY_NOT_CALCULATED;
+			    barrier = result[i];
+			    invalidate_max = i + max_search_distance_b + 1;
+			    if (invalidate_max > length_b)
+				    invalidate_max = length_b;
+		    }
+	}
+
+
+	if (most_certain_local_line_b > 0) {
+		fuzzy_find_matching_lines_recurse(
+			max_search_distance_a,
+			max_search_distance_b,
+			start_a, start_b,
+			most_certain_line_a + 1 - start_a,
+			most_certain_local_line_b,
+			fingerprints_a, fingerprints_b, similarities,
+			certainties, second_best_result, result,
+			closest_line_calc_offset1, closest_line_calc_offset2,
+			closest_line_calc_numerator,
+			closest_line_calc_denominator);
+	}
+	if (most_certain_local_line_b + 1 < length_b) {
+		second_half_start_a = most_certain_line_a;
+		offset_b = most_certain_local_line_b + 1;
+		second_half_start_b = start_b + offset_b;
+		second_half_length_a =
+			length_a + start_a - second_half_start_a;
+		second_half_length_b =
+			length_b + start_b - second_half_start_b;
+		fuzzy_find_matching_lines_recurse(
+			max_search_distance_a,
+			max_search_distance_b,
+			second_half_start_a, second_half_start_b,
+			second_half_length_a, second_half_length_b,
+			fingerprints_a + second_half_start_a - start_a,
+			fingerprints_b + offset_b,
+			similarities +
+				offset_b * (max_search_distance_a * 2 + 1),
+			certainties + offset_b,
+			second_best_result + offset_b, result + offset_b,
+			closest_line_calc_offset1 + offset_b,
+			closest_line_calc_offset2,
+			closest_line_calc_numerator,
+			closest_line_calc_denominator);
+	}
+}
+
+int *fuzzy_find_matching_lines(const char *content_a,
+			       const char *content_b,
+			       const int *line_starts_a,
+			       const int *line_starts_b,
+			       int start_a,
+			       int start_b,
+			       int length_a,
+			       int length_b) {
+
+	int i, *result, *second_best_result,
+		*certainties, *similarities, similarity_count;
+	struct fingerprint *fingerprints_a, *fingerprints_b;
+
+	/* max_search_distance_a means that given a line in `b`, compare it to
+	the line in `a` that is closest to its position, and the lines in `a`
+	that are no greater than max_search_distance_a lines away from the
+	closest line in `a`.
+	max_search_distance_b is an upper bound on the greatest possible
+	distance between lines in `b` such that they will both be compared with
+	the same line in `a` according to max_search_distance_a. */
+	int max_search_distance_a = 10, max_search_distance_b;
+
+	if (max_search_distance_a >= length_a)
+		max_search_distance_a = length_a ? length_a - 1 : 0;
+
+	if (length_a == 0) {
+		max_search_distance_b = 0;
+	}
+	else {
+		max_search_distance_b = ((2 * max_search_distance_a + 1) *
+			length_b - 1) / length_a;
+	}
+
+	result = malloc(sizeof(int) * length_b);
+	second_best_result = malloc(sizeof(int) * length_b);
+	certainties = malloc(sizeof(int) * length_b);
+	similarity_count = length_b * (max_search_distance_a * 2 + 1);
+	similarities = malloc(sizeof(int) * similarity_count);
+
+	for (i = 0; i < length_b; ++i) {
+		result[i] = -1;
+		second_best_result[i] = -1;
+		certainties[i] = CERTAINTY_NOT_CALCULATED;
+	}
+
+	for (i = 0; i < similarity_count; ++i) {
+		similarities[i] = -1;
+	}
+
+	fingerprints_a = xmalloc(sizeof(struct fingerprint) * length_a);
+	fingerprints_b = xmalloc(sizeof(struct fingerprint) * length_b);
+
+	get_line_fingerprints(fingerprints_a, content_a,
+			      line_starts_a,
+			      start_a, length_a);
+	get_line_fingerprints(fingerprints_b, content_b,
+			      line_starts_b,
+			      start_b, length_b);
+
+	fuzzy_find_matching_lines_recurse(max_search_distance_a,
+					  max_search_distance_b,
+					  start_a, start_b,
+					  length_a, length_b,
+					  fingerprints_a,
+					  fingerprints_b,
+					  similarities,
+					  certainties,
+					  second_best_result,
+					  result,
+					  0, start_a, length_a, length_b);
+
+	for (i = 0; i < length_b; ++i) {
+		free_fingerprint(fingerprints_b + i);
+	}
+	for (i = 0; i < length_a; ++i) {
+		free_fingerprint(fingerprints_a + i);
+	}
+	free(fingerprints_b);
+	free(fingerprints_a);
+	free(similarities);
+	free(certainties);
+	free(second_best_result);
+
+	return result;
+}
+
diff --git a/fuzzy.h b/fuzzy.h
new file mode 100644
index 0000000000..bd6d86ae45
--- /dev/null
+++ b/fuzzy.h
@@ -0,0 +1,18 @@
+#ifndef FUZZY_H
+#define FUZZY_H
+
+/*
+ * Find line numbers in "a" that match with lines in "b"
+ * Returns an array of either line indices or -1 where no match is found.
+ * The returned array must be free()d after use.
+ */
+int *fuzzy_find_matching_lines(const char *content_a,
+			       const char *content_b,
+			       const int *line_starts_a,
+			       const int *line_starts_b,
+			       int start_a,
+			       int start_b,
+			       int length_a,
+			       int length_b);
+
+#endif
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH v6 0/6] blame: add the ability to ignore commits
  2019-04-22 22:26 ` [PATCH v6 0/6] blame: add the ability to ignore commits michael
@ 2019-04-23 14:23   ` Barret Rhoden
  2019-04-23 18:13   ` Barret Rhoden
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Barret Rhoden @ 2019-04-23 14:23 UTC (permalink / raw)
  To: michael, git
  Cc: Jeff King, Stefan Beller, Jeff Smith, Junio C Hamano,
	René Scharfe, Ævar Arnfjörð Bjarmason,
	David Kastrup, Johannes Schindelin

On 4/22/19 6:26 PM, michael@platin.gs wrote:
> From: Michael Platings <michael@platin.gs>
> 
> Hi Barret,
> 
> This patch is on top of your patch v6 4/6.

Thanks, I'll take a look.  I was working on taking your old version and 
integrating it with my v6 6/6.  That way it gets the 
origin-fingerprint-filling code and can be easily compared to my 6/6 style.

[snip]

> 
> To address reordered lines I suggest a combination of this algorithm and your
> algorithm - in the first path my algorithm tries to match lines within a
> single chunk, and in the second pass your algorithm tries to find matches for
> unblamed lines out of order and outside their chunk.

I was thinking something similar.  Yesterday I did this with your older 
patch set - applied on my 6/6.  Two passes, one with your fuzzy matcher, 
then if we didn't find anything, do a scan of the entire parent (as my 
6/6 does now).

This approach worked for the cases I had (e.g. "header reordering"). 
I ran into an issue last night where your scan was finding matches where 
it shouldn't - might have been an issue with how I hooked it up.  I'll 
try your latest code and see how it goes.

Thanks,

Barret


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v6 0/6] blame: add the ability to ignore commits
  2019-04-22 22:26 ` [PATCH v6 0/6] blame: add the ability to ignore commits michael
  2019-04-23 14:23   ` Barret Rhoden
@ 2019-04-23 18:13   ` Barret Rhoden
  2019-04-23 20:17   ` Barret Rhoden
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Barret Rhoden @ 2019-04-23 18:13 UTC (permalink / raw)
  To: michael, git
  Cc: Jeff King, Stefan Beller, Jeff Smith, Junio C Hamano,
	René Scharfe, Ævar Arnfjörð Bjarmason,
	David Kastrup, Johannes Schindelin

Hi Michael -

On 4/22/19 6:26 PM, michael@platin.gs wrote:
> +	int *matching_lines = fuzzy_find_matching_lines(parent->file.ptr,
> +							target->file.ptr,
> +							parent->line_starts,
> +							target->line_starts,
> +							e->s_lno + offset,
> +							e->s_lno,
> +							parent_len,
> +							e->num_lines);

Here was the issue I ran into, and it's due to translations between 
e->s_lno and the parent's "address space".

The short version is that "e->s_lno + offset" is not always the 
parent_slno, and parent_len is based off of parent_slno.

guess_line_blames() gives you parent_slno and parent_len, as well as 
offset.  'offset' is how you convert from the target's space to the 
parent's.  parent_slno and parent_len describe the whole chunk given to 
us from the diff engine.  However, there may be multiple blame_entries 
covering that chunk.

So e->s_lno is in the target, but it's not necessarily the beginning of 
the entire diff chunk.  This is related to that page fault you found a 
while back.

Passing e->s_lno + offset for where fuzzy() starts looking in the parent 
is fine, but then the length in the parent needs to be adjusted.  For 
instance, I have this at the top of my modified 
fuzzy_find_matching_lines() (changed to take the origins and variables 
from guess_line_blames()):

	// XXX conversions to Michael's variable names
	int start_a = e->s_lno + offset;
	//int length_a = parent_len;    // XXX this fails the test
	int length_a = (parent_slno + parent_len) - (e->s_lno + offset);

	int start_b = e->s_lno;
	int length_b = e->num_lines;

Plus we need a check for length_a <= 0.  I had to work to make it 
negative, but it's possible.  parent_slno = tlno + offset, so we're 
looking at:

	length_a = tlno + parent_len - e->s_lno;

That just requires a blame entry split such that e->s_lno > tlno, and a 
parent chunk that had 0 lines.  I found a case that did that.  Basically 
in one commit you add a bunch of lines.  In another, you change one line 
in the middle of that bunch.  That causes a split of the diff chunk into 
more than one, such that e->s_lno > tlno.  That original commit only 
added lines, so parent_len == 0.

The intuition for the "negative length_a" isn't that the parent_len is 
negative, it's that the e->s_lno chunk (when offset) is outside the 
window of the parent's change.  I have a simple test for this.

Oh, and we have to handle length_a == 0, due to this:

	max_search_distance = length_a - 1;

Anyway, I'll take what I've got and apply your latest and see what I 
come up with.  =)  Plus, I have fixes for all of the other stuff brought 
up in the v6 discussion.

Barret


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v6 0/6] blame: add the ability to ignore commits
  2019-04-22 22:26 ` [PATCH v6 0/6] blame: add the ability to ignore commits michael
  2019-04-23 14:23   ` Barret Rhoden
  2019-04-23 18:13   ` Barret Rhoden
@ 2019-04-23 20:17   ` Barret Rhoden
  2019-04-23 21:21   ` Barret Rhoden
  2019-04-24 21:07   ` Barret Rhoden
  4 siblings, 0 replies; 10+ messages in thread
From: Barret Rhoden @ 2019-04-23 20:17 UTC (permalink / raw)
  To: michael, git
  Cc: Jeff King, Stefan Beller, Jeff Smith, Junio C Hamano,
	René Scharfe, Ævar Arnfjörð Bjarmason,
	David Kastrup, Johannes Schindelin

Hi Michael -

FYI, here are a few style nits that I changed in my version, based on a 
quick scan.  Not sure if these are all Git's style, but I think it's 
mostly linux-kernel style.

I mention this mostly for future reference - specifically if we keep 
separate versions of this code.  Hopefully we won't.  =)

I also did all the fingerprint prep in fill_origin_blob, which you can 
see in an upcoming patch.

On 4/22/19 6:26 PM, michael@platin.gs wrote:
> diff --git a/fuzzy.c b/fuzzy.c
> new file mode 100644
> index 0000000000..c5b09a0eb7
> --- /dev/null
> +++ b/fuzzy.c
> @@ -0,0 +1,434 @@
> +#include "fuzzy.h"
> +#include <ctype.h>
> +#include <stdint.h>
> +#include <stdlib.h>
> +#include <string.h>
> +#include "git-compat-util.h"
> +#include "hashmap.h"
> +
> +struct fingerprint_entry {
> +	struct hashmap_entry entry;
> +	int count;
> +};
> +struct fingerprint {
> +	struct hashmap map;
> +	struct fingerprint_entry *entries;
> +};
> +
> +static void get_fingerprint(struct fingerprint *result,
> +			    const char *line_begin,
> +			    const char *line_end) {
                                                  ^newline here, so { is 
at the start of a line.  (on all of the functions)

> +	unsigned hash;
                 ^int

> +	char c0, c1;
> +	int map_entry_count = line_end - line_begin - 1;
> +	struct fingerprint_entry *entry = xcalloc(map_entry_count,
> +		sizeof(struct fingerprint_entry));
> +	struct fingerprint_entry *found_entry;

Blank line here, between declarations and code.  Did this for all of the 
functions.

> +	hashmap_init(&result->map, NULL, NULL, map_entry_count);
> +	result->entries = entry;
> +	for (const char *p = line_begin; p + 1 < line_end; ++p, ++entry) {
              ^moved this declaration outside the for loop

> +		c0 = *p;
> +		c1 = *(p + 1);
> +		/* Ignore whitespace pairs */
> +		if (isspace(c0) && isspace(c1))
> +			continue;
> +		hash = tolower(c0) | (tolower(c1) << 8);
> +		hashmap_entry_init(entry, hash);
> +
> +		if ((found_entry = hashmap_get(&result->map, entry, NULL))) {
                     ^moved this assignment outside the if ()
> +			found_entry->count += 1;
> +		}
> +		else {
                 ^ made this } else {.

Also in a couple other places below.

> +			entry->count = 1;
> +			hashmap_add(&result->map, entry);
> +		}
> +	}
> +}
> +
> +static void free_fingerprint(struct fingerprint *f) {
> +	hashmap_free(&f->map, 0);
> +	free(f->entries);
> +}
> +
> +static int fingerprint_similarity(struct fingerprint *a,
> +				  struct fingerprint *b) {

put fingerprint b on the same line as a (within 80 chars).  made similar 
changes wherever that was possible and looked nice.

> +	int intersection = 0;
> +	struct hashmap_iter iter;
> +	const struct fingerprint_entry *entry_a, *entry_b;
> +	hashmap_iter_init(&b->map, &iter);
> +
> +	while ((entry_b = hashmap_iter_next(&iter))) {
> +		if ((entry_a = hashmap_get(&a->map, entry_b, NULL))) {
> +			intersection += entry_a->count < entry_b->count ?
> +					entry_a->count : entry_b->count;
> +		}
> +	}
> +	return intersection;
> +}
> +
> +static void fingerprint_subtract(struct fingerprint *a,
> +				 struct fingerprint *b) {
> +	struct hashmap_iter iter;
> +	struct fingerprint_entry *entry_a;
> +	const struct fingerprint_entry *entry_b;
> +	hashmap_iter_init(&b->map, &iter);
> +
> +	while ((entry_b = hashmap_iter_next(&iter))) {
> +		if ((entry_a = hashmap_get(&a->map, entry_b, NULL))) {
> +			if (entry_a->count <= entry_b->count) {
> +				hashmap_remove(&a->map, entry_b, NULL);
> +			}
> +			else {
> +				entry_a->count -= entry_b->count;
> +			}
> +		}
> +	}
> +}
> +
> +static void get_line_fingerprints(struct fingerprint *fingerprints,
> +				  const char *content,
> +				  const int *line_starts,
> +				  long chunk_start,
> +				  long chunk_length) {
> +	int i;
> +	const char *linestart, *lineend;
> +	line_starts += chunk_start;
> +	for (i = 0; i != chunk_length; ++i) {
                       ^ any reason for '!=' versus '<'  ?

> +		linestart = content + line_starts[i];
> +		lineend = content + line_starts[i + 1];
> +		get_fingerprint(fingerprints + i, linestart, lineend);
> +	}
> +}
> +
> +static int get_closest_local_line(int start_a,
> +			    int local_line_b,
> +			    int closest_line_calc_offset1,
> +			    int closest_line_calc_offset2,
> +			    int closest_line_calc_numerator,
> +			    int closest_line_calc_denominator) {
> +	return ((local_line_b + closest_line_calc_offset1) * 2 + 1) *
> +		closest_line_calc_numerator /
> +		(closest_line_calc_denominator * 2) +
> +		closest_line_calc_offset2 - start_a;
                ^ I moved these three lines one space left, so that they 
don't line up with the open paren.  Otherwise it looked like they might 
be inside the '('.


> +}
> +
> +static int *get_similarity(int *similarities, int max_search_distance_a,
> +			   int local_line_a, int local_line_b,
> +			   int closest_local_line_a) {
> +	assert(abs(local_line_a - closest_local_line_a) <= max_search_distance_a);
> +	return similarities + local_line_a - closest_local_line_a +
> +		max_search_distance_a +
> +		local_line_b * (max_search_distance_a * 2 + 1);
> +}
> +
> +#define CERTAIN_NOTHING_MATCHES -2
> +#define CERTAINTY_NOT_CALCULATED -1
> +
> +static void find_best_line_matches(const int max_search_distance_a,
> +				   int start_a,
> +				   int length_a,
> +				   int local_line_b,
> +				   struct fingerprint *fingerprints_a,
> +				   struct fingerprint *fingerprints_b,
> +				   int *similarities,
> +				   int *certainties,
> +				   int *second_best_result,
> +				   int *result,
> +				   int closest_line_calc_offset1,
> +				   int closest_line_calc_offset2,
> +				   int closest_line_calc_numerator,
> +				   int closest_line_calc_denominator) {

                                    ^intense number of arguments.  =)

Not sure if there's much to do about that.  It does make some of the 
callsites busier.

> +
> +	int i, search_start, search_end, closest_local_line_a, *similarity,
> +		best_similarity = 0, second_best_similarity = 0,
> +		best_similarity_index = 0, second_best_similarity_index = 0;
> +
> +	if (certainties[local_line_b] != CERTAINTY_NOT_CALCULATED)
> +		return;
> +
> +	closest_local_line_a = get_closest_local_line(start_a,
> +					  local_line_b,
> +					  closest_line_calc_offset1,
> +					  closest_line_calc_offset2,
> +					  closest_line_calc_numerator,
> +					  closest_line_calc_denominator);
> +
> +	search_start = closest_local_line_a - max_search_distance_a;
> +	if (search_start < 0)
> +		search_start = 0;
> +
> +	search_end = closest_local_line_a + max_search_distance_a + 1;
> +	if (search_end > length_a)
> +		search_end = length_a;
> +
> +	for (i = search_start; i < search_end; ++i) {
> +		similarity = get_similarity(similarities, max_search_distance_a,
> +					    i, local_line_b,
> +					    closest_local_line_a);
> +		if (*similarity == -1) {
> +			*similarity = fingerprint_similarity(
> +				fingerprints_b + local_line_b,
> +				fingerprints_a + i) *
> +				(1000 - abs(i - closest_local_line_a));
> +		}
> +		if (*similarity > best_similarity) {
> +			second_best_similarity = best_similarity;
> +			second_best_similarity_index = best_similarity_index;
> +			best_similarity = *similarity;
> +			best_similarity_index = i;
> +		}
> +		else if (*similarity > second_best_similarity) {
> +			second_best_similarity = *similarity;
> +			second_best_similarity_index = i;
> +		}
> +	}
> +
> +	if (best_similarity == 0) {
> +		certainties[local_line_b] = CERTAIN_NOTHING_MATCHES;
> +		result[local_line_b] = -1;
> +	}
> +	else {
> +		certainties[local_line_b] = best_similarity * 2 -
> +			second_best_similarity;
> +		result[local_line_b] = start_a + best_similarity_index;
> +		second_best_result[local_line_b] =
> +			start_a + second_best_similarity_index;
> +	}
> +}
> +
> +/*
> + * This finds the line that we can match with the most confidence, and
> + * uses it as a partition. It then calls itself on the lines on either side of
> + * that partition. In this way we avoid lines appearing out of order, and
> + * retain a sensible line ordering.
> + */
> +static void fuzzy_find_matching_lines_recurse(
> +	int max_search_distance_a,
> +	int max_search_distance_b,
> +	int start_a, int start_b,
> +	int length_a, int length_b,
> +	struct fingerprint *fingerprints_a,
> +	struct fingerprint *fingerprints_b,
> +	int *similarities,
> +	int *certainties,
> +	int *second_best_result,
> +	int *result,
> +	int closest_line_calc_offset1,
> +	int closest_line_calc_offset2,
> +	int closest_line_calc_numerator,
> +	int closest_line_calc_denominator) {
> +
> +	int i, barrier, invalidate_min, invalidate_max, offset_b,
> +		second_half_start_a, second_half_start_b,
> +		second_half_length_a, second_half_length_b,
> +		most_certain_line_a, most_certain_local_line_b = -1,
> +		most_certain_line_certainty = -1,
> +		closest_local_line_a;
> +
> +	for (i = 0; i < length_b; ++i) {
> +		find_best_line_matches(max_search_distance_a,
> +				       start_a,
> +				       length_a,
> +				       i,
> +				       fingerprints_a,
> +				       fingerprints_b,
> +				       similarities,
> +				       certainties,
> +				       second_best_result,
> +				       result,
> +				       closest_line_calc_offset1,
> +				       closest_line_calc_offset2,
> +				       closest_line_calc_numerator,
> +				       closest_line_calc_denominator);
> +
> +		if (certainties[i] > most_certain_line_certainty) {
> +			most_certain_line_certainty = certainties[i];
> +			most_certain_local_line_b = i;
> +		}
> +	}
> +
> +	if (most_certain_local_line_b == -1) {
> +		return;
> +	}
         ^ removed the {} for a single-line if block.

> +
> +	most_certain_line_a = result[most_certain_local_line_b];
> +
> +	/* Subtract the most certain line's fingerprint in b from the
> +	 matched fingerprint in a. This means that other lines in b can't also
> +	 match the same parts of the line in a. */
         ^ multiline block comments as such:
         /*
          * foo
          * bar
          */
> +	fingerprint_subtract(fingerprints_a + most_certain_line_a - start_a,
> +			     fingerprints_b + most_certain_local_line_b);
> +
> +
> +	/* Invalidate results that may be affected by the choice of pivot. */
> +	invalidate_min = most_certain_local_line_b - max_search_distance_b;
> +	invalidate_max = most_certain_local_line_b + max_search_distance_b + 1;
> +	if (invalidate_min < 0)
> +		invalidate_min = 0;
> +	if (invalidate_max > length_b)
> +		invalidate_max = length_b;
> +
> +	for (i = invalidate_min; i < invalidate_max; ++i) {
> +		closest_local_line_a = get_closest_local_line(
> +			start_a, i,
> +			closest_line_calc_offset1,
> +			closest_line_calc_offset2,
> +			closest_line_calc_numerator,
> +			closest_line_calc_denominator);
> +		*get_similarity(similarities, max_search_distance_a,
> +				most_certain_line_a - start_a, i,
> +				closest_local_line_a) = -1;
> +	}
> +
> +	barrier = most_certain_line_a;
> +
> +	for (i = most_certain_local_line_b - 1; i >= invalidate_min; --i) {
> +		if (certainties[i] >= 0 &&
> +		    (result[i] >= barrier || second_best_result[i] >= barrier)) {

over 80 chars (same below)

> +			    certainties[i] = CERTAINTY_NOT_CALCULATED;
> +			    barrier = result[i];
> +			    invalidate_min = i - max_search_distance_b;
> +			    if (invalidate_min < 0)
> +				    invalidate_min = 0;
> +		    }
> +	}
> +
> +	barrier = most_certain_line_a;
> +
> +	for (i = most_certain_local_line_b + 1; i < invalidate_max; ++i) {
> +		if (certainties[i] >= 0 &&
> +		    (result[i] <= barrier || second_best_result[i] <= barrier)) {
> +			    certainties[i] = CERTAINTY_NOT_CALCULATED;
> +			    barrier = result[i];
> +			    invalidate_max = i + max_search_distance_b + 1;
> +			    if (invalidate_max > length_b)
> +				    invalidate_max = length_b;
> +		    }
> +	}
> +
> +
> +	if (most_certain_local_line_b > 0) {
> +		fuzzy_find_matching_lines_recurse(
> +			max_search_distance_a,
> +			max_search_distance_b,
> +			start_a, start_b,
> +			most_certain_line_a + 1 - start_a,
> +			most_certain_local_line_b,
> +			fingerprints_a, fingerprints_b, similarities,
> +			certainties, second_best_result, result,
> +			closest_line_calc_offset1, closest_line_calc_offset2,
> +			closest_line_calc_numerator,
> +			closest_line_calc_denominator);
> +	}
> +	if (most_certain_local_line_b + 1 < length_b) {
> +		second_half_start_a = most_certain_line_a;
> +		offset_b = most_certain_local_line_b + 1;
> +		second_half_start_b = start_b + offset_b;
> +		second_half_length_a =
> +			length_a + start_a - second_half_start_a;
> +		second_half_length_b =
> +			length_b + start_b - second_half_start_b;
> +		fuzzy_find_matching_lines_recurse(
> +			max_search_distance_a,
> +			max_search_distance_b,
> +			second_half_start_a, second_half_start_b,
> +			second_half_length_a, second_half_length_b,
> +			fingerprints_a + second_half_start_a - start_a,
> +			fingerprints_b + offset_b,
> +			similarities +
> +				offset_b * (max_search_distance_a * 2 + 1),
> +			certainties + offset_b,
> +			second_best_result + offset_b, result + offset_b,
> +			closest_line_calc_offset1 + offset_b,
> +			closest_line_calc_offset2,
> +			closest_line_calc_numerator,
> +			closest_line_calc_denominator);
> +	}
> +}
> +
> +int *fuzzy_find_matching_lines(const char *content_a,
> +			       const char *content_b,
> +			       const int *line_starts_a,
> +			       const int *line_starts_b,
> +			       int start_a,
> +			       int start_b,
> +			       int length_a,
> +			       int length_b) {
> +
> +	int i, *result, *second_best_result,
> +		*certainties, *similarities, similarity_count;
> +	struct fingerprint *fingerprints_a, *fingerprints_b;
> +
> +	/* max_search_distance_a means that given a line in `b`, compare it to
> +	the line in `a` that is closest to its position, and the lines in `a`
> +	that are no greater than max_search_distance_a lines away from the
> +	closest line in `a`.
> +	max_search_distance_b is an upper bound on the greatest possible
> +	distance between lines in `b` such that they will both be compared with
> +	the same line in `a` according to max_search_distance_a. */
> +	int max_search_distance_a = 10, max_search_distance_b;
> +
> +	if (max_search_distance_a >= length_a)
> +		max_search_distance_a = length_a ? length_a - 1 : 0;
> +
> +	if (length_a == 0) {
> +		max_search_distance_b = 0;
> +	}
> +	else {
> +		max_search_distance_b = ((2 * max_search_distance_a + 1) *
> +			length_b - 1) / length_a;
> +	}
> +
> +	result = malloc(sizeof(int) * length_b);
> +	second_best_result = malloc(sizeof(int) * length_b);
> +	certainties = malloc(sizeof(int) * length_b);
> +	similarity_count = length_b * (max_search_distance_a * 2 + 1);
> +	similarities = malloc(sizeof(int) * similarity_count);

Use xcalloc(x, y) instead of malloc(x * y), or at least xmalloc.

> +
> +	for (i = 0; i < length_b; ++i) {
> +		result[i] = -1;
> +		second_best_result[i] = -1;
> +		certainties[i] = CERTAINTY_NOT_CALCULATED;
> +	}
> +
> +	for (i = 0; i < similarity_count; ++i) {
> +		similarities[i] = -1;
> +	}
	^ removed {}, as with the 'if' statements


Barret



> +
> +	fingerprints_a = xmalloc(sizeof(struct fingerprint) * length_a);
> +	fingerprints_b = xmalloc(sizeof(struct fingerprint) * length_b);
> +
> +	get_line_fingerprints(fingerprints_a, content_a,
> +			      line_starts_a,
> +			      start_a, length_a);
> +	get_line_fingerprints(fingerprints_b, content_b,
> +			      line_starts_b,
> +			      start_b, length_b);
> +
> +	fuzzy_find_matching_lines_recurse(max_search_distance_a,
> +					  max_search_distance_b,
> +					  start_a, start_b,
> +					  length_a, length_b,
> +					  fingerprints_a,
> +					  fingerprints_b,
> +					  similarities,
> +					  certainties,
> +					  second_best_result,
> +					  result,
> +					  0, start_a, length_a, length_b);
> +
> +	for (i = 0; i < length_b; ++i) {
> +		free_fingerprint(fingerprints_b + i);
> +	}
> +	for (i = 0; i < length_a; ++i) {
> +		free_fingerprint(fingerprints_a + i);
> +	}
> +	free(fingerprints_b);
> +	free(fingerprints_a);
> +	free(similarities);
> +	free(certainties);
> +	free(second_best_result);
> +
> +	return result;
> +}
> +

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v6 0/6] blame: add the ability to ignore commits
  2019-04-22 22:26 ` [PATCH v6 0/6] blame: add the ability to ignore commits michael
                     ` (2 preceding siblings ...)
  2019-04-23 20:17   ` Barret Rhoden
@ 2019-04-23 21:21   ` Barret Rhoden
  2019-04-24 21:07   ` Barret Rhoden
  4 siblings, 0 replies; 10+ messages in thread
From: Barret Rhoden @ 2019-04-23 21:21 UTC (permalink / raw)
  To: michael, git
  Cc: Jeff King, Stefan Beller, Jeff Smith, Junio C Hamano,
	René Scharfe, Ævar Arnfjörð Bjarmason,
	David Kastrup, Johannes Schindelin

Hi Michael -

I cobbled something together that passes my old git tests and all but 
one of your tests, though an assertion fails when I use it for real. 
See below.

On 4/22/19 6:26 PM, michael@platin.gs wrote:
[snip]
> +static int *get_similarity(int *similarities, int max_search_distance_a,
> +			   int local_line_a, int local_line_b,
> +			   int closest_local_line_a) {
> +	assert(abs(local_line_a - closest_local_line_a) <= max_search_distance_a);

This assert fails.  In my example,
	local_line_a = 1
	closest_local_line_a = 12
	max_search_distance_a = 10

Backtrace tells me it was called here:

> +static void fuzzy_find_matching_lines_recurse(

[snip]

> +	for (i = invalidate_min; i < invalidate_max; ++i) {
> +		closest_local_line_a = get_closest_local_line(
> +			start_a, i,
> +			closest_line_calc_offset1,
> +			closest_line_calc_offset2,
> +			closest_line_calc_numerator,
> +			closest_line_calc_denominator);
> +		*get_similarity(similarities, max_search_distance_a,
> +				most_certain_line_a - start_a, i,
> +				closest_local_line_a) = -1;
> +	}

So it looks like that '12' came from get_closest_local_line(),  The args 
to that call were:

	start_a 258, i 3, off1 0, off2 258 num 83, denom 23

The equation reduces to 12 + off2 - start_a = 12

I don't know what those values mean, but it looks like A's values affect 
off2 and start_a, but those are only used at the very end of the 
calculation.  Does 'A' affect off1, numer, or denom?

Any thoughts?  What all is going on here?

If you want to play with it, it's at

	git@github.com:brho/git.git (master)

(Beware the push -f / forced update.  And I figured the repo would be 
preferable to spamming with unrelated patches).

If you want an example commit the assert fails on, it's this repo:
	
	git@github.com:brho/akaros.git
	master branch
	ignore-rev-file=.git-blame-ignore-revs
	file user/vmm/virtio_mmio.c

On an unrelated note:

> The significant change here is that when a line is matched, its
> fingerprint is subtracted from the matched parent line's fingerprint.
> This prevents two lines matching the same part of a parent line.

Does this limit us so that our second pass (the fallback when fuzzy 
failed) won't be able to match to a parent line that was already matched?


The test that is failing is your Expand Lines test.  The 'final' diff was:

--- a/1
+++ b/1
@@ -1,5 +1,7 @@
  aaa
-bbb
+bbbx
+bbbx
  ccc
-ddd
+dddx
+dddx
  eee

Which should be two diff chunks and two calls to guess_line_blames().

And the 'actual' was:

1
2
Final  (not 2)
3
4
Final  (not 2)
5

I didn't dig much, but your older stuff (when merged with mine) also 
didn't pass that test.  Maybe something with the offset/parent_slno stuff.

Thanks,

Barret


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v6 0/6] blame: add the ability to ignore commits
  2019-04-22 22:26 ` [PATCH v6 0/6] blame: add the ability to ignore commits michael
                     ` (3 preceding siblings ...)
  2019-04-23 21:21   ` Barret Rhoden
@ 2019-04-24 21:07   ` Barret Rhoden
  4 siblings, 0 replies; 10+ messages in thread
From: Barret Rhoden @ 2019-04-24 21:07 UTC (permalink / raw)
  To: michael, git
  Cc: Jeff King, Stefan Beller, Jeff Smith, Junio C Hamano,
	René Scharfe, Ævar Arnfjörð Bjarmason,
	David Kastrup, Johannes Schindelin


Hi Michael -

I dug into the code a bit more and have some more comments below.  Take 
anything I say with a grain of salt; I'm still trying to understand how 
it all works.  =)

If you do have updates to your patch, see if you can make them apply onto 
my branch at https://github.com/brho/git/commits/master.  I have a 
commit there called "WIP-michael-fuzzy", which is mostly all of your 
changes plus the style tweaks and whatnot I was talking about.  Though 
if you give me something else, I can make that work too.

All in all, I think we're pretty close to having something cool.  =)

On 4/22/19 6:26 PM, michael@platin.gs wrote:

> @@ -0,0 +1,434 @@
> +#include "fuzzy.h"
> +#include <ctype.h>
> +#include <stdint.h>
> +#include <stdlib.h>
> +#include <string.h>
> +#include "git-compat-util.h"
> +#include "hashmap.h"
> +
> +struct fingerprint_entry {
> +	struct hashmap_entry entry;
> +	int count;
> +};
> +struct fingerprint {
> +	struct hashmap map;
> +	struct fingerprint_entry *entries;
> +};
> +

The whole fingerprinting section could use a good comment.  What is a 
fingerprint, why/how we use them, what it means to be similar, etc.

> +static void get_fingerprint(struct fingerprint *result,
> +			    const char *line_begin,
> +			    const char *line_end) {
> +	unsigned hash;
> +	char c0, c1;
> +	int map_entry_count = line_end - line_begin - 1;
> +	struct fingerprint_entry *entry = xcalloc(map_entry_count,
> +		sizeof(struct fingerprint_entry));
> +	struct fingerprint_entry *found_entry;
> +	hashmap_init(&result->map, NULL, NULL, map_entry_count);
> +	result->entries = entry;
> +	for (const char *p = line_begin; p + 1 < line_end; ++p, ++entry) {
> +		c0 = *p;
> +		c1 = *(p + 1);
> +		/* Ignore whitespace pairs */
> +		if (isspace(c0) && isspace(c1))
> +			continue;
> +		hash = tolower(c0) | (tolower(c1) << 8);
> +		hashmap_entry_init(entry, hash);
> +
> +		if ((found_entry = hashmap_get(&result->map, entry, NULL))) {
> +			found_entry->count += 1;
> +		}
> +		else {
> +			entry->count = 1;
> +			hashmap_add(&result->map, entry);
> +		}
> +	}
> +}

[snip]





> +static int get_closest_local_line(int start_a,
> +			    int local_line_b,
> +			    int closest_line_calc_offset1,
> +			    int closest_line_calc_offset2,
> +			    int closest_line_calc_numerator,
> +			    int closest_line_calc_denominator) {
> +	return ((local_line_b + closest_line_calc_offset1) * 2 + 1) *
> +		closest_line_calc_numerator /
> +		(closest_line_calc_denominator * 2) +
> +		closest_line_calc_offset2 - start_a;
> +}

Overall, I found the final four parameters (offset1, offset2, numer, and 
denom) to be confusing, used here and passed through all of the functions.

What are they exactly?  Do they ever change after the first invocation of 
fuzzy_find_matching_lines_recurse()?  The only one I saw change was 
closest_line_calc_offset1, via 'closest_line_calc_offset1 + offset_b' in 
the second-half recursive call.

So if I'm following it correctly, closest_line_calc_offset2 is always == 
start_a, so in what 'file space' does the return value from 
get_closest_local_line() reside?  A's space?  A number relative to 
start_a?

What all is the closest_local_line?  The line in A that corresponds to 
the line in B, for some mapping function?  Maybe the corresponding 
fractional bit?  (hence the division, but there's also that *2 in there)

The assertion that trips for me is related to this - we get a 
closest_local_line_a that is farther than max_search_distance_a from 
local_line_a.

On a related note, any 1-1 offsetting between A and B seems precarious. 
It's the 'offset' variable passed in from guess_line_blames().  But if 
you're calculating it from start_b - start_a, that is a little brittle. 
start_b is the tiny window of e->s_lno.  If we ever change things to 
look into the entire diff chunk (the one passed to guess_line_blames()), 
then that will break.  Not sure if you're offsetting in this manner or 
not.  (looks like 'no').

> +
> +static int *get_similarity(int *similarities, int max_search_distance_a,
> +			   int local_line_a, int local_line_b,
> +			   int closest_local_line_a) {
> +	assert(abs(local_line_a - closest_local_line_a) <= max_search_distance_a);
> +	return similarities + local_line_a - closest_local_line_a +
> +		max_search_distance_a +
> +		local_line_b * (max_search_distance_a * 2 + 1);
> +}

This needs some sort of comment.  From its allocation below (and from 
staring at the code long enough), I gather the similarities array is a 
2D array, "each line in B" by "each line in a window (of size X) in A 
centered on some line in A mapping to B", where X is 
(max_search_distance_a * 2 + 1), and 'some line' is closest_local_line_a.

Is this something like "for line b, get sim for local_line a"?  Is it 
local_line_a or closest_local_line_a that we are getting?  It's not 
clear from the prototype alone.

Maybe the 'closest_local_line_a' ought to be decided inside this 
function, so it's a simpler-looking 'getter.'  Or better yet, maybe 
there's a lookup array mapping B -> "window in A".  (more on this later).

> +
> +#define CERTAIN_NOTHING_MATCHES -2
> +#define CERTAINTY_NOT_CALCULATED -1
> +
> +static void find_best_line_matches(const int max_search_distance_a,
> +				   int start_a,
> +				   int length_a,
> +				   int local_line_b,
> +				   struct fingerprint *fingerprints_a,
> +				   struct fingerprint *fingerprints_b,

I was a little worried about keeping track of which offset / "address 
space" we're in.  The fingerprint arrays passed in are both set up so 
that the 0th member corresponds to the fingerprint for start_a or 
start_b, with length_a or length_b members.

start_a and local_line_b are in different kinds of spaces: start_a is 
absolute in 'A' and local_line_b is relative in 'B'.  Eventually, I 
figured out that that is what 'local' means.  Seeing both start_a 
(absolute) and local_line_b (relative) passed in made me worry a little 
about whether there was a bug.  Maybe compute and pass in local_line_a 
instead?

I see later on that you use start_a in this function so you can store 
result[] and second_best_result[] in A's absolute space.

It might make sense to pick a space (absolute or relative) and use it 
throughout.

> +				   int *similarities,
> +				   int *certainties,
> +				   int *second_best_result,
> +				   int *result,
> +				   int closest_line_calc_offset1,
> +				   int closest_line_calc_offset2,
> +				   int closest_line_calc_numerator,
> +				   int closest_line_calc_denominator) {
> +
> +	int i, search_start, search_end, closest_local_line_a, *similarity,
> +		best_similarity = 0, second_best_similarity = 0,
> +		best_similarity_index = 0, second_best_similarity_index = 0;
> +
> +	if (certainties[local_line_b] != CERTAINTY_NOT_CALCULATED)
> +		return;
> +
> +	closest_local_line_a = get_closest_local_line(start_a,
> +					  local_line_b,
> +					  closest_line_calc_offset1,
> +					  closest_line_calc_offset2,
> +					  closest_line_calc_numerator,
> +					  closest_line_calc_denominator);
> +
> +	search_start = closest_local_line_a - max_search_distance_a;
> +	if (search_start < 0)
> +		search_start = 0;
> +
> +	search_end = closest_local_line_a + max_search_distance_a + 1;
> +	if (search_end > length_a)
> +		search_end = length_a;
> +
> +	for (i = search_start; i < search_end; ++i) {
> +		similarity = get_similarity(similarities, max_search_distance_a,
> +					    i, local_line_b,
> +					    closest_local_line_a);
> +		if (*similarity == -1) {
> +			*similarity = fingerprint_similarity(
> +				fingerprints_b + local_line_b,
> +				fingerprints_a + i) *
> +				(1000 - abs(i - closest_local_line_a));

Do we need some assertion that 1000 > 2 * max_search_distance_a + 1, or 
something like it?  I know that number works out to 21 today, but it's 
based on a hard-coded integer somewhere else.

> +		}
> +		if (*similarity > best_similarity) {
> +			second_best_similarity = best_similarity;
> +			second_best_similarity_index = best_similarity_index;
> +			best_similarity = *similarity;
> +			best_similarity_index = i;
> +		}
> +		else if (*similarity > second_best_similarity) {
> +			second_best_similarity = *similarity;
> +			second_best_similarity_index = i;
> +		}
> +	}
> +
> +	if (best_similarity == 0) {
> +		certainties[local_line_b] = CERTAIN_NOTHING_MATCHES;
> +		result[local_line_b] = -1;
> +	}
> +	else {
> +		certainties[local_line_b] = best_similarity * 2 -
> +			second_best_similarity;
> +		result[local_line_b] = start_a + best_similarity_index;
> +		second_best_result[local_line_b] =
> +			start_a + second_best_similarity_index;
> +	}
> +}
> +
> +/*
> + * This finds the line that we can match with the most confidence, and
> + * uses it as a partition. It then calls itself on the lines on either side of
> + * that partition. In this way we avoid lines appearing out of order, and
> + * retain a sensible line ordering.
> + */
> +static void fuzzy_find_matching_lines_recurse(
> +	int max_search_distance_a,
> +	int max_search_distance_b,
> +	int start_a, int start_b,
> +	int length_a, int length_b,
> +	struct fingerprint *fingerprints_a,
> +	struct fingerprint *fingerprints_b,
> +	int *similarities,
> +	int *certainties,
> +	int *second_best_result,
> +	int *result,
> +	int closest_line_calc_offset1,
> +	int closest_line_calc_offset2,
> +	int closest_line_calc_numerator,
> +	int closest_line_calc_denominator) {

For all of the arguments that never change, which seem to be the last 
three args and the search distances, you could consider putting them in 
a struct and passing that along.  Similarly, the only time the 
fingerprints and the four int arrays change is in the second-half 
recursive call.

You might be able to put most all of those array args into the args 
struct too, such that that struct always represents the original window 
(or even the absolute numbers?), and then pass along the offsets. 
You're somewhat doing that already when you pass second_half_start_a, 
second_half_start_b (which includes offset_b), etc, down below.

> +
> +	int i, barrier, invalidate_min, invalidate_max, offset_b,
> +		second_half_start_a, second_half_start_b,
> +		second_half_length_a, second_half_length_b,
> +		most_certain_line_a, most_certain_local_line_b = -1,
> +		most_certain_line_certainty = -1,
> +		closest_local_line_a;
> +
> +	for (i = 0; i < length_b; ++i) {
> +		find_best_line_matches(max_search_distance_a,
> +				       start_a,
> +				       length_a,
> +				       i,
> +				       fingerprints_a,
> +				       fingerprints_b,
> +				       similarities,
> +				       certainties,
> +				       second_best_result,
> +				       result,
> +				       closest_line_calc_offset1,
> +				       closest_line_calc_offset2,
> +				       closest_line_calc_numerator,
> +				       closest_line_calc_denominator);
> +
> +		if (certainties[i] > most_certain_line_certainty) {
> +			most_certain_line_certainty = certainties[i];
> +			most_certain_local_line_b = i;
> +		}
> +	}
> +
> +	if (most_certain_local_line_b == -1) {
> +		return;
> +	}
> +
> +	most_certain_line_a = result[most_certain_local_line_b];
> +
> +	/* Subtract the most certain line's fingerprint in b from the
> +	 matched fingerprint in a. This means that other lines in b can't also
> +	 match the same parts of the line in a. */
> +	fingerprint_subtract(fingerprints_a + most_certain_line_a - start_a,
> +			     fingerprints_b + most_certain_local_line_b);

FYI, if I comment out fingerprint_subtract, this will pass your test 2 
(currently failing), but fail test 5 (currently passing).

> +
> +
> +	/* Invalidate results that may be affected by the choice of pivot. */
> +	invalidate_min = most_certain_local_line_b - max_search_distance_b;
> +	invalidate_max = most_certain_local_line_b + max_search_distance_b + 1;
> +	if (invalidate_min < 0)
> +		invalidate_min = 0;
> +	if (invalidate_max > length_b)
> +		invalidate_max = length_b;
> +
> +	for (i = invalidate_min; i < invalidate_max; ++i) {
> +		closest_local_line_a = get_closest_local_line(
> +			start_a, i,
> +			closest_line_calc_offset1,
> +			closest_line_calc_offset2,
> +			closest_line_calc_numerator,
> +			closest_line_calc_denominator);
> +		*get_similarity(similarities, max_search_distance_a,
> +				most_certain_line_a - start_a, i,
> +				closest_local_line_a) = -1;

The assert in get_similarity fails on my example when called from here. 
I guess something went wrong with finding the window center.

> +	}
> +
> +	barrier = most_certain_line_a;
> +
> +	for (i = most_certain_local_line_b - 1; i >= invalidate_min; --i) {
> +		if (certainties[i] >= 0 &&
> +		    (result[i] >= barrier || second_best_result[i] >= barrier)) {
> +			    certainties[i] = CERTAINTY_NOT_CALCULATED;
> +			    barrier = result[i];
> +			    invalidate_min = i - max_search_distance_b;
> +			    if (invalidate_min < 0)
> +				    invalidate_min = 0;
> +		    }
> +	}

Is it important for this loop (above) to go in descending order, and 
the next to go in ascending order?  If both could go i = 0; i < foo; 
i++, then maybe a helper function could replace both of these loops 
(above and below this comment).

> +
> +	barrier = most_certain_line_a;
> +
> +	for (i = most_certain_local_line_b + 1; i < invalidate_max; ++i) {
> +		if (certainties[i] >= 0 &&
> +		    (result[i] <= barrier || second_best_result[i] <= barrier)) {
> +			    certainties[i] = CERTAINTY_NOT_CALCULATED;
> +			    barrier = result[i];
> +			    invalidate_max = i + max_search_distance_b + 1;
> +			    if (invalidate_max > length_b)
> +				    invalidate_max = length_b;
> +		    }
> +	}

What's the intuition behind the steps to invalidate the results?

It looks like you want to reset the similarity calculations for any 
other line in B that was compared to the matching A line, so that the A 
line is not matched again until its similarity is recomputed (due to the 
"fingerprint subtraction"?).

You limit it to invalidate_min and max, then clamp those to the real 
min/max (0, length_b).  Initially, I thought that's a performance 
optimization (vs correctness) since you want to limit recalculations. 
Though I think the assumption is that you'll never ask for a 
get_similarity(a, b) where A is outside the reachable radius of B 
(max_search_distance_b).

What about with the certainties?  It seems like you're trying to limit 
which ones get reset.  Why do you check result[i] <= barrier, instead of 
just equality?

> +
> +
> +	if (most_certain_local_line_b > 0) {
> +		fuzzy_find_matching_lines_recurse(
> +			max_search_distance_a,
> +			max_search_distance_b,
> +			start_a, start_b,
> +			most_certain_line_a + 1 - start_a,
> +			most_certain_local_line_b,
> +			fingerprints_a, fingerprints_b, similarities,
> +			certainties, second_best_result, result,
> +			closest_line_calc_offset1, closest_line_calc_offset2,
> +			closest_line_calc_numerator,
> +			closest_line_calc_denominator);
> +	}
> +	if (most_certain_local_line_b + 1 < length_b) {
> +		second_half_start_a = most_certain_line_a;
> +		offset_b = most_certain_local_line_b + 1;
> +		second_half_start_b = start_b + offset_b;
> +		second_half_length_a =
> +			length_a + start_a - second_half_start_a;
> +		second_half_length_b =
> +			length_b + start_b - second_half_start_b;
> +		fuzzy_find_matching_lines_recurse(
> +			max_search_distance_a,
> +			max_search_distance_b,
> +			second_half_start_a, second_half_start_b,
> +			second_half_length_a, second_half_length_b,
> +			fingerprints_a + second_half_start_a - start_a,
> +			fingerprints_b + offset_b,
> +			similarities +
> +				offset_b * (max_search_distance_a * 2 + 1),
> +			certainties + offset_b,
> +			second_best_result + offset_b, result + offset_b,
> +			closest_line_calc_offset1 + offset_b,
> +			closest_line_calc_offset2,
> +			closest_line_calc_numerator,
> +			closest_line_calc_denominator);
> +	}
> +}
> +
> +int *fuzzy_find_matching_lines(const char *content_a,
> +			       const char *content_b,
> +			       const int *line_starts_a,
> +			       const int *line_starts_b,
> +			       int start_a,
> +			       int start_b,
> +			       int length_a,
> +			       int length_b) {
> +
> +	int i, *result, *second_best_result,
> +		*certainties, *similarities, similarity_count;
> +	struct fingerprint *fingerprints_a, *fingerprints_b;
> +
> +	/* max_search_distance_a means that given a line in `b`, compare it to
> +	the line in `a` that is closest to its position, and the lines in `a`
> +	that are no greater than max_search_distance_a lines away from the
> +	closest line in `a`.
> +	max_search_distance_b is an upper bound on the greatest possible
> +	distance between lines in `b` such that they will both be compared with
> +	the same line in `a` according to max_search_distance_a. */
> +	int max_search_distance_a = 10, max_search_distance_b;
> +
> +	if (max_search_distance_a >= length_a)
> +		max_search_distance_a = length_a ? length_a - 1 : 0;
> +
> +	if (length_a == 0) {
> +		max_search_distance_b = 0;
> +	}
> +	else {
> +		max_search_distance_b = ((2 * max_search_distance_a + 1) *
> +			length_b - 1) / length_a;
> +	}

A few questions / comments about these search distances:

- max_search_distance_a and _b sound like different concepts.  maybe 
different names?  it's also hard to follow how you get these numbers, 
especially since there seems to be some scaling involved when converting 
from A space to B space.

- is max_search_distance_a limited to 10 for performance reasons? 
conceptually, it'd be easier if the similarities array were length_b * 
length_a.  instead, it sounds like every line in B has a (potentially) 
separate window into A.  maybe it'd be simpler to have another array 
that tracks, for each line in B, which line in A is the *start* of its 
window.  that'd also make me rest easier when worrying about odd 
numbers, division, and off-by-one issues.

- the (2 * max_search_distance_a + 1) term comes up a lot.  similarly, 
the calculation that divides by length_a comes up too.  maybe 
encapsulate those in a small set of helper functions, so it's clear 
when they are being used for the same purpose.  i'm having a hard time 
convincing myself there aren't off-by-one issues.  (not that there are 
any).

Thanks,

Barret



end of thread, other threads:[~2019-04-24 21:07 UTC | newest]

Thread overview: 10+ messages
     [not found] <[PATCH v6 0/6] blame: add the ability to ignore commits>
2019-04-22 22:26 ` [PATCH v6 0/6] blame: add the ability to ignore commits michael
2019-04-23 14:23   ` Barret Rhoden
2019-04-23 18:13   ` Barret Rhoden
2019-04-23 20:17   ` Barret Rhoden
2019-04-23 21:21   ` Barret Rhoden
2019-04-24 21:07   ` Barret Rhoden
2019-04-10 16:24 Barret Rhoden
2019-04-14 21:10 ` Michael Platings
2019-04-15 13:23   ` Barret Rhoden
2019-04-15 21:54     ` Michael Platings
