git@vger.kernel.org mailing list mirror (one of many)
* New (better?) hash map technique in limit case.
@ 2021-12-10 21:57 Philip Oakley
  2021-12-10 22:52 ` Glen Choo
  0 siblings, 1 reply; 4+ messages in thread
From: Philip Oakley @ 2021-12-10 21:57 UTC (permalink / raw)
  To: Git List; +Cc: Philip Oakley

Recently I saw a report [1] on a new theoretical result about how to
manage hash maps which get nearly 'full', which beats Knuth's limit
formula. The full paper is at [2].

As I understand it, the method adds the gravestone (tombstone) entries
early during hash collisions to avoid clumping of such collision
insertions, rather than always having to enter the collision list at the
end. This keeps the available slots relatively randomly spaced.
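
For a concrete picture, here is a toy sketch in Python of linear probing
with tombstone reuse. It illustrates only the general tombstone mechanism
the paper builds on, not its actual "graveyard hashing" schedule; all the
names are invented for illustration.

```python
# Toy linear-probing map with tombstone reuse. Sketches only the general
# tombstone mechanism, NOT the paper's "graveyard hashing" schedule.
class LinearProbeMap:
    TOMBSTONE = object()            # marker a deletion leaves behind

    def __init__(self, capacity=16):
        self.slots = [None] * capacity   # None means "never used"

    def _indices(self, key):
        start = hash(key) % len(self.slots)
        for step in range(len(self.slots)):
            yield (start + step) % len(self.slots)

    def get(self, key):
        for i in self._indices(key):
            slot = self.slots[i]
            if slot is None:                 # an empty slot ends the probe run
                return None
            if slot is not self.TOMBSTONE and slot[0] == key:
                return slot[1]
        return None

    def insert(self, key, value):
        first_tombstone = None
        for i in self._indices(key):
            slot = self.slots[i]
            if slot is self.TOMBSTONE:
                if first_tombstone is None:
                    first_tombstone = i      # remember the earliest gap
            elif slot is None or slot[0] == key:
                # A fresh key claims the earliest tombstone we passed,
                # keeping free slots spread through the run rather than
                # clumping every new entry at the end of it.
                if slot is None and first_tombstone is not None:
                    i = first_tombstone
                self.slots[i] = (key, value)
                return
        if first_tombstone is not None:      # run had no empty: reuse a gap
            self.slots[first_tombstone] = (key, value)

    def delete(self, key):
        for i in self._indices(key):
            slot = self.slots[i]
            if slot is None:
                return
            if slot is not self.TOMBSTONE and slot[0] == key:
                self.slots[i] = self.TOMBSTONE   # tombstone, not empty
                return
```

The point of interest is the insert path: by landing on the first
tombstone in the run instead of the first empty slot at its end, the
reusable gaps stay scattered through the run.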

It feels like the old random bus arrival problem, where the average wait
for the next bus is identical to the average time since the last bus,
which is the same as the average bus interval (thus 1 + 1 = 1), and the
technique maintains that advantageous property.

Given Git's use of hashes, it sounds like it could have uses, assuming
the theory pans out. I've not yet gone through the paper itself [2] but
hope springs eternal.

Philip

[1]
S. Nadis and Massachusetts Institute of Technology, “Theoretical breakthrough could boost
data storage.”
https://techxplore.com/news/2021-11-theoretical-breakthrough-boost-storage.html
(accessed Nov. 18, 2021).

[2]
M. A. Bender, B. C. Kuszmaul, and W. Kuszmaul, “Linear Probing
Revisited: Tombstones Mark the Death of Primary Clustering,”
arXiv:2107.01250 [cs, math], Jul. 2021, Accessed: Nov. 18, 2021.
[Online]. Available: http://arxiv.org/abs/2107.01250

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: New (better?) hash map technique in limit case.
  2021-12-10 21:57 New (better?) hash map technique in limit case Philip Oakley
@ 2021-12-10 22:52 ` Glen Choo
  2021-12-12 17:43   ` Philip Oakley
  0 siblings, 1 reply; 4+ messages in thread
From: Glen Choo @ 2021-12-10 22:52 UTC (permalink / raw)
  To: Philip Oakley, Git List; +Cc: Philip Oakley

Philip Oakley <philipoakley@iee.email> writes:

> Recently I saw a report [1] on a new theoretical result about how to
> manage hash maps which get nearly 'full', which beats Knuth's limit
> formula. The full paper is at [2]
>
> As I understand it, the method adds the gravestone (tombstone) entries
> early during hash collisions to avoid clumping of such collision
> insertions, rather than always having to enter the collision list at the
> end. This keeps the available slots relatively randomly spaced.
>
> It feels like the old random bus arrival problem, where the average wait
> for the next bus is identical to the average time since the last bus,
> which is the same as the average bus interval (thus 1 + 1 = 1), and the
> technique maintains that advantageous property.
>
> Given Git's use of hashes, it sounds like it could have uses, assuming
> the theory pans out. I've not yet gone through the paper itself [2] but
> hope springs eternal.
>
> Philip
>
> [1]
> S. Nadis and Massachusetts Institute of Technology, “Theoretical breakthrough could boost
> data storage.”
> https://techxplore.com/news/2021-11-theoretical-breakthrough-boost-storage.html
> (accessed Nov. 18, 2021).
>
> [2]
> M. A. Bender, B. C. Kuszmaul, and W. Kuszmaul, “Linear Probing
> Revisited: Tombstones Mark the Death of Primary Clustering,”
> arXiv:2107.01250 [cs, math], Jul. 2021, Accessed: Nov. 18, 2021.
> [Online]. Available: http://arxiv.org/abs/2107.01250

Very interesting, thanks for sharing. I haven't read the full paper
either, but this is an interesting result.

It seems that this result is limited to hashmaps with an approximately
equal number of insertions and deletions.

From [1]

  They found that for applications where the number of insertions and
  deletions stays about the same—and the amount of data added is roughly
  equal to that removed—linear-probing hash tables can operate at high
  storage capacities without sacrificing speed.

and [2]

  ...We then turn our attention to sequences of operations that contain
  deletions, and show that the tombstones left behind by those deletions
  have a primary-anti-clustering effect, that is, they have a tendency
  to speed up future insertions

Do we have any such use cases?


* Re: New (better?) hash map technique in limit case.
  2021-12-10 22:52 ` Glen Choo
@ 2021-12-12 17:43   ` Philip Oakley
  2021-12-12 20:53     ` rsbecker
  0 siblings, 1 reply; 4+ messages in thread
From: Philip Oakley @ 2021-12-12 17:43 UTC (permalink / raw)
  To: Glen Choo, Git List

Hi Glen,
On 10/12/2021 22:52, Glen Choo wrote:
> Philip Oakley <philipoakley@iee.email> writes:
>
>> Recently I saw a report [1] on a new theoretical result about how to
>> manage hash maps which get nearly 'full', which beats Knuth's limit
>> formula. The full paper is at [2]
>>
>> As I understand it, the method adds the gravestone (tombstone) entries
>> early during hash collisions to avoid clumping of such collision
>> insertions, rather than always having to enter the collision list at the
>> end. This keeps the available slots relatively randomly spaced.
>>
>> It feels like the old random bus arrival problem, where the average wait
>> for the next bus is identical to the average time since the last bus,
>> which is the same as the average bus interval (thus 1 + 1 = 1), and the
>> technique maintains that advantageous property.
>>
>> Given Git's use of hashes, it sounds like it could have uses, assuming
>> the theory pans out. I've not yet gone through the paper itself [2] but
>> hope springs eternal.
>>
>> Philip
>>
>> [1]
>> S. Nadis and Massachusetts Institute of Technology, “Theoretical breakthrough could boost
>> data storage.”
>> https://techxplore.com/news/2021-11-theoretical-breakthrough-boost-storage.html
>> (accessed Nov. 18, 2021).
>>
>> [2]
>> M. A. Bender, B. C. Kuszmaul, and W. Kuszmaul, “Linear Probing
>> Revisited: Tombstones Mark the Death of Primary Clustering,”
>> arXiv:2107.01250 [cs, math], Jul. 2021, Accessed: Nov. 18, 2021.
>> [Online]. Available: http://arxiv.org/abs/2107.01250
> Very interesting, thanks for sharing. I haven't read the full paper
> either, but this is an interesting result.
>
> It seems that this result is limited to hashmaps with an approximately
> equal number of insertions and deletions.
>
> From [1]
>
>   They found that for applications where the number of insertions and
>   deletions stays about the same—and the amount of data added is roughly
>   equal to that removed—linear-probing hash tables can operate at high
>   storage capacities without sacrificing speed.
>
> and [2]
>
>   ...We then turn our attention to sequences of operations that contain
>   deletions, and show that the tombstones left behind by those deletions
>   have a primary-anti-clustering effect, that is, they have a tendency
>   to speed up future insertions
>
> Do we have any such use cases?
I know that we use hash maps, but haven't followed their actual usage in
various optimisations.

Obviously we use hash naming of objects, but that's generally a
red herring, I think, unless we are over-abbreviating the hash so that
it's no longer unique (which could be happening somewhere).

I suspect that some of the hosting providers may be more interested from
a file-system perspective, as I think we just pass the object-store
problems to the FS. Then again, all the mono-repo and partial-checkout
corporate users are likely to be interested, especially if this unblocks
some historical misunderstanding about the limits and how to handle them.

--
Philip


* RE: New (better?) hash map technique in limit case.
  2021-12-12 17:43   ` Philip Oakley
@ 2021-12-12 20:53     ` rsbecker
  0 siblings, 0 replies; 4+ messages in thread
From: rsbecker @ 2021-12-12 20:53 UTC (permalink / raw)
  To: 'Philip Oakley', 'Glen Choo', 'Git List'

On December 12, 2021 12:44 PM, Philip Oakley:
> To: Glen Choo <chooglen@google.com>; Git List <git@vger.kernel.org>
> Subject: Re: New (better?) hash map technique in limit case.
> 
> Hi Glen,
> On 10/12/2021 22:52, Glen Choo wrote:
> > Philip Oakley <philipoakley@iee.email> writes:
> >
> >> Recently I saw a report [1] on a new theoretical result about how to
> >> manage hash maps which get nearly 'full', which beats Knuth's limit
> >> formula. The full paper is at [2]
> >>
> >> As I understand it, the method adds the gravestone (tombstone) entries
> >> early during hash collisions to avoid clumping of such collision
> >> insertions, rather than always having to enter the collision list at
> >> the end. This keeps the available slots relatively randomly spaced.
> >>
> >> It feels like the old random bus arrival problem, where the average
> >> wait for the next bus is identical to the average time since the last
> >> bus, which is the same as the average bus interval (thus 1 + 1 = 1),
> >> and the technique maintains that advantageous property.
> >>
> >> Given Git's use of hashes, it sounds like it could have uses,
> >> assuming the theory pans out. I've not yet gone through the paper
> >> itself [2] but hope springs eternal.
> >>
> >> Philip
> >>
> >> [1]
> >> S. Nadis and Massachusetts Institute of Technology, “Theoretical
> >> breakthrough could boost data storage.”
> >> https://techxplore.com/news/2021-11-theoretical-breakthrough-boost-storage.html
> >> (accessed Nov. 18, 2021).
> >>
> >> [2]
> >> M. A. Bender, B. C. Kuszmaul, and W. Kuszmaul, “Linear Probing
> >> Revisited: Tombstones Mark the Death of Primary Clustering,”
> >> arXiv:2107.01250 [cs, math], Jul. 2021, Accessed: Nov. 18, 2021.
> >> [Online]. Available: http://arxiv.org/abs/2107.01250
> > Very interesting, thanks for sharing. I haven't read the full paper
> > either, but this is an interesting result.
> >
> > It seems that this result is limited to hashmaps with an approximately
> > equal number of insertions and deletions.
> >
> > From [1]
> >
> >   They found that for applications where the number of insertions and
> >   deletions stays about the same—and the amount of data added is roughly
> >   equal to that removed—linear-probing hash tables can operate at high
> >   storage capacities without sacrificing speed.
> >
> > and [2]
> >
> >   ...We then turn our attention to sequences of operations that contain
> >   deletions, and show that the tombstones left behind by those deletions
> >   have a primary-anti-clustering effect, that is, they have a tendency
> >   to speed up future insertions
> >
> > Do we have any such use cases?
> I know that we use hash maps, but haven't followed their actual usage in
> various optimisations.
> 
> Obviously we use hash naming of objects, but that's generally a red herring, I
> think, unless we are over-abbreviating the hash so that it's no longer unique
> (which could be happening somewhere).
> 
> I suspect that some of the hosting providers may be more interested from a
> file-system perspective, as I think we just pass the object-store problems to
> the FS. Then again, all the mono-repo and partial-checkout corporate users
> are likely to be interested, especially if this unblocks some historical
> misunderstanding about the limits and how to handle them.

If we are going to change data structures for object hashes, it might be better to use a compressed trie data structure (trie is not a typo for tree). This maps easily onto the current FS structure in .git/objects - in fact, .git/objects is mappable to a two-level compressed trie whose top level is 2 characters. Extending the data-structure model, both in memory and on the file system, to become multi-level instead of just 2 levels should not be particularly onerous, and it has O(1) search speed that, at worst, matches a perfect closed hash or an open hash with single-entry buckets. Once open hashes degenerate to long chains, the compressed trie is substantially more time-efficient. A trie is O(1) for lookup and insert in all cases and does not degenerate the way hashes or trees do.

Uncompressed tries are generally far less space-efficient than hashes (making them quite bad for our purposes), but compressed tries substantially reduce the wasted space, with a trade-off of some time spent on node splits during insert and on single-level mid-key lookups.
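
To make the suggestion concrete, here is a minimal sketch of a compressed (radix) trie in Python. It is illustrative only, not Git code: edge labels play the role of path segments, the way the two-character fanout directories do under .git/objects, and the edge split on insert is the node-split cost that compression trades for space.

```python
# Minimal compressed (radix) trie over hex object names. Illustrative
# sketch only, not Git code.
class RadixNode:
    def __init__(self):
        self.children = {}      # compressed edge label -> child node
        self.terminal = False   # True if a complete key ends at this node

    def insert(self, key):
        if not key:
            self.terminal = True
            return
        for label in list(self.children):
            # length of the shared prefix between this edge and the key
            common = 0
            while (common < min(len(label), len(key))
                   and label[common] == key[common]):
                common += 1
            if common == 0:
                continue
            child = self.children[label]
            if common == len(label):        # whole edge matches: descend
                child.insert(key[common:])
            else:                           # split the edge at the mismatch
                mid = RadixNode()
                mid.children[label[common:]] = child
                del self.children[label]
                self.children[label[:common]] = mid
                mid.insert(key[common:])
            return
        leaf = RadixNode()                  # no shared prefix: new edge
        leaf.terminal = True
        self.children[key] = leaf

    def contains(self, key):
        if not key:
            return self.terminal
        for label, child in self.children.items():
            if key.startswith(label):
                return child.contains(key[len(label):])
        return False
```

Inserting, say, "0016212c" and then "0016aaaa" splits one edge into a shared "0016" prefix with two children - the same fanout shape as the object directory - and lookup walks one edge per level rather than probing.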

Note: in a FS lookup, the search is not O(1) unless the FS is itself implemented as a trie, so the benefit on disk will not be as fully realized, although if you can get the compression to fit into a directory inode block, the search speeds up significantly because of reduced I/Os. A FS existence search is O(n,m), where n is the maximum number of partitions in the trie's key and m is the maximum number of files within a directory, IF the key-partitioning compression is not known - if it IS known, then the FS existence search is O(m). This n is different from (much smaller than) the total number of entries in the hash. As an example, a sample repo has 2607 entries under objects, and the key-partitioned length (n) is always 2, representing, for example, 00/16212c76018d27bd1a39630c32d1027fbfbebd. Keeping the key partitioning unknown allows the searcher to adapt to any repository's trie compression, including existing ones, so there's that advantage.
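
As a concrete illustration of the fixed two-level key partitioning described above, a short sketch (hypothetical helper names, assuming the standard loose-object layout):

```python
import os

# Split a 40-char hex object name into the two-level fanout path used by
# loose objects: the first two hex chars name the directory, the rest the
# file, e.g. 00/16212c76018d27bd1a39630c32d1027fbfbebd.
def loose_object_path(objdir, oid):
    return os.path.join(objdir, oid[:2], oid[2:])

# With the partitioning known (always 2 levels here), an existence check
# costs one path lookup per level; the directory scan within each level
# is the m term in the O(n,m) estimate above.
def loose_object_exists(objdir, oid):
    return os.path.exists(loose_object_path(objdir, oid))
```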

-Randall



