From: <rsbecker@nexbridge.com>
To: "'Philip Oakley'" <philipoakley@iee.email>,
	"'Glen Choo'" <chooglen@google.com>,
	"'Git List'" <git@vger.kernel.org>
Subject: RE: New (better?) hash map technique in limit case.
Date: Sun, 12 Dec 2021 15:53:54 -0500	[thread overview]
Message-ID: <000001d7ef9a$6493d150$2dbb73f0$@nexbridge.com> (raw)
In-Reply-To: <7ffa36ab-da93-0fe7-8d21-f489b16a3340@iee.email>

On December 12, 2021 12:44 PM, Philip Oakley:
> To: Glen Choo <chooglen@google.com>; Git List <git@vger.kernel.org>
> Subject: Re: New (better?) hash map technique in limit case.
> 
> Hi Glen,
> On 10/12/2021 22:52, Glen Choo wrote:
> > Philip Oakley <philipoakley@iee.email> writes:
> >
> >> Recently I saw a report [1] on a new theoretical result about how to
> >> manage hash maps which get nearly 'full', beating Knuth's limit
> >> formula. The full paper is at [2].
> >>
> >> As I understand it, the method adds tombstone entries early during
> >> hash collisions to avoid clumping of such collision insertions,
> >> rather than always having to enter the collision list at the end.
> >> This keeps the available slots relatively randomly spaced.
> >>
> >> It feels like the old random bus arrival problem, where the average
> >> wait for the next bus is identical to the average time since the last
> >> bus, which is the same as the average bus interval (thus 1 + 1 = 1),
> >> and the technique maintains that advantageous perception.
> >>
> >> Given Git's use of hashes, it sounds like it could have uses,
> >> assuming the theory pans out. I've not yet gone through the paper
> >> itself [2] but hope springs eternal.
> >>
> >> Philip
> >>
> >> [1]
> >> S. Nadis and M. I. of Technology, “Theoretical breakthrough could
> >> boost data storage.”
> >> https://techxplore.com/news/2021-11-theoretical-breakthrough-boost-storage.html
> >> (accessed Nov. 18, 2021).
> >>
> >> [2]
> >> M. A. Bender, B. C. Kuszmaul, and W. Kuszmaul, “Linear Probing
> >> Revisited: Tombstones Mark the Death of Primary Clustering,”
> >> arXiv:2107.01250 [cs, math], Jul. 2021, Accessed: Nov. 18, 2021.
> >> [Online]. Available: http://arxiv.org/abs/2107.01250
> > Very interesting, thanks for sharing. I haven't read the full paper
> > either, but this is an interesting result.
> >
> > It seems that this result is limited to hashmaps with an approximately
> > equal number of insertions and deletions.
> >
> > From [1]
> >
> >   They found that for applications where the number of insertions and
> >   deletions stays about the same—and the amount of data added is roughly
> >   equal to that removed—linear-probing hash tables can operate at high
> >   storage capacities without sacrificing speed.
> >
> > and [2]
> >
> >   ...We then turn our attention to sequences of operations that contain
> >   deletions, and show that the tombstones left behind by those deletions
> >   have a primary-anti-clustering effect, that is, they have a tendency
> >   to speed up future insertions
> >
> > Do we have any such use cases?
> I know that we use hash maps, but haven't followed their actual usage in
> various optimisations.
> 
> Obviously we use hash naming of objects, but that's generally a red herring, I
> think, unless we are over-abbreviating the hash so that it's no longer unique
> (which could be happening somewhere).
> 
> I suspect that some of the hosting providers may be more interested from a
> file-system perspective, as I think we just pass the object store problems to
> the FS. Then again, all the mono-repo and partial-checkout corporate users
> are likely to be interested, especially if this unblocks some historical
> misunderstanding about the limits and how to handle them.
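
For anyone skimming the thread without the paper: the structure under
discussion is plain linear probing, where a deletion leaves a tombstone and
a later insertion may reuse it. The paper's contribution, as I read the
abstract, is that those tombstones break up probe runs, and its "graveyard"
variant goes further by planting extra tombstones on purpose (not shown
here). A rough sketch of ordinary tombstone reuse, purely to pin down the
terminology - the names and sizes are invented and this is not Git code:

#include <stdint.h>

#define TABLE_SIZE 1024          /* power of two; keep at least one EMPTY */

enum slot_state { EMPTY, OCCUPIED, TOMBSTONE };

struct slot {
	enum slot_state state;
	uint32_t key;
	uint32_t value;
};

static struct slot table[TABLE_SIZE];

static uint32_t slot_of(uint32_t key)
{
	return (key * 2654435761u) & (TABLE_SIZE - 1);
}

/* Insert/update, reusing the first tombstone seen in the probe run. */
static void put(uint32_t key, uint32_t value)
{
	uint32_t i = slot_of(key);
	int reuse = -1;

	for (;;) {
		struct slot *s = &table[i];

		if (s->state == EMPTY) {
			if (reuse >= 0)
				s = &table[reuse];
			s->state = OCCUPIED;
			s->key = key;
			s->value = value;
			return;
		}
		if (s->state == OCCUPIED && s->key == key) {
			s->value = value;
			return;
		}
		if (s->state == TOMBSTONE && reuse < 0)
			reuse = (int)i;
		i = (i + 1) & (TABLE_SIZE - 1);
	}
}

/* Lookup skips tombstones and stops only at a truly empty slot. */
static int get(uint32_t key, uint32_t *value)
{
	uint32_t i = slot_of(key);

	for (;;) {
		struct slot *s = &table[i];

		if (s->state == EMPTY)
			return 0;
		if (s->state == OCCUPIED && s->key == key) {
			*value = s->value;
			return 1;
		}
		i = (i + 1) & (TABLE_SIZE - 1);
	}
}

/* Deletion leaves a tombstone so existing probe runs stay intact. */
static void del(uint32_t key)
{
	uint32_t i = slot_of(key);

	for (;;) {
		struct slot *s = &table[i];

		if (s->state == EMPTY)
			return;
		if (s->state == OCCUPIED && s->key == key) {
			s->state = TOMBSTONE;
			return;
		}
		i = (i + 1) & (TABLE_SIZE - 1);
	}
}

A lookup has to skip tombstones and stop only at a genuinely empty slot,
which is why such tables need a spare empty slot and periodic rebuilds; as
I understand it, the paper's scheme is about keeping those probe runs
short at high load.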

If we are going to change data structures for object hashes, it might be
better to use a compressed trie (trie is not a typo for tree). It maps
easily onto the current layout of .git/objects; in fact, .git/objects is
already a two-level compressed trie with a two-character top level.
Extending that model, both in memory and on the file system, from two
levels to multiple levels should not be particularly onerous, and for
fixed-length keys like ours a trie gives O(1) lookup and insert in all
cases, which at worst matches a perfect closed hash or an open hash with
single-entry buckets. Once an open hash degenerates into long chains, the
compressed trie is substantially more time-efficient, and it does not
degenerate the way hashes or trees do. Uncompressed tries are far less
space-efficient than hashes (which would make them quite bad for our
purposes), but compressed tries recover most of that wasted space, at the
cost of some time spent on node splits during insert and on single-level
mid-key lookups.
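
To make that concrete, here is a rough sketch of the kind of compressed
(radix-style) trie I mean, keyed on 40-character lowercase hex object ids.
It is illustrative only - the node layout and names are my own choices for
the example, not a proposal for Git's actual in-memory structures - but it
shows the node-split-on-insert behaviour I mentioned:

/*
 * Rough sketch only: a compressed ("radix") trie over 40-character,
 * lowercase hex object ids.  Names and layout are invented for this
 * example; error handling is omitted for brevity.
 */
#include <stdlib.h>
#include <string.h>

struct trie_node {
	char edge[41];               /* label on the edge into this node */
	unsigned edge_len;
	int is_object;               /* a full object id terminates here */
	struct trie_node *child[16]; /* indexed by the next hex nibble */
};

static unsigned nibble(char c)
{
	return c <= '9' ? c - '0' : c - 'a' + 10;
}

static struct trie_node *new_node(const char *label, unsigned len)
{
	struct trie_node *n = calloc(1, sizeof(*n));

	memcpy(n->edge, label, len);
	n->edge_len = len;
	return n;
}

/* Insert a 40-char hex id, splitting a node where labels diverge. */
static void trie_insert(struct trie_node *node, const char *p)
{
	for (;;) {
		unsigned i = 0;

		/* consume as much of this node's edge label as matches */
		while (i < node->edge_len && p[i] == node->edge[i])
			i++;

		if (i < node->edge_len) {
			/* partial match: split the node at position i */
			struct trie_node *tail = new_node(node->edge + i,
							  node->edge_len - i);

			memcpy(tail->child, node->child, sizeof(node->child));
			tail->is_object = node->is_object;

			memset(node->child, 0, sizeof(node->child));
			node->edge_len = i;
			node->is_object = 0;
			node->child[nibble(tail->edge[0])] = tail;
		}

		p += i;
		if (!*p) {
			node->is_object = 1;      /* whole id consumed */
			return;
		}
		if (!node->child[nibble(*p)]) {
			/* no branch for the rest of the id: add a leaf */
			struct trie_node *leaf = new_node(p, strlen(p));

			leaf->is_object = 1;
			node->child[nibble(*p)] = leaf;
			return;
		}
		node = node->child[nibble(*p)];
	}
}

/* Exact-match lookup: O(key length), i.e. constant for fixed-size ids. */
static int trie_lookup(const struct trie_node *node, const char *p)
{
	for (;;) {
		if (memcmp(p, node->edge, node->edge_len))
			return 0;
		p += node->edge_len;
		if (!*p)
			return node->is_object;
		node = node->child[nibble(*p)];
		if (!node)
			return 0;
	}
}

The root is just a calloc'd node with an empty edge label; today's
.git/objects layout corresponds to forcing a split after the first two
characters and never anywhere else.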

Note: in a file-system lookup, the search is not O(1) unless the FS itself
is implemented as a trie, so the benefit on disk will not be as fully
realized, although if the compressed node fits into a single directory
inode block, the search speeds up significantly because of reduced I/O. An
FS existence search is O(n, m), where n is the maximum number of
partitions in the trie's key and m is the maximum number of files within a
directory, IF the key partitioning is not known in advance - if it IS
known, the search drops to O(m). This n is much smaller than the total
number of entries in the hash. As an example, a sample repo has 2607
entries under .git/objects, and the partitioned key depth (n) is always 2,
e.g. 00/16212c76018d27bd1a39630c32d1027fbfbebd. Leaving the partitioning
unspecified lets the searcher adapt to any repository's trie compression,
including existing layouts, which is an advantage in itself.
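
For the on-disk side, the mapping is just a choice of where to cut the
key. Something like the following (the partition widths are invented for
illustration; today's loose-object layout is the single {2} cut):

/*
 * Illustrative only: format a hex object id as a multi-level path, the
 * way a deeper loose-object fan-out might look.
 */
#include <string.h>

static void object_path(char *out, const char *id,
			const unsigned *widths, unsigned nwidths)
{
	unsigned i, w;

	/* caller supplies a buffer big enough for the id plus separators */
	for (i = 0; i < nwidths; i++) {
		for (w = 0; w < widths[i]; w++)
			*out++ = *id++;
		*out++ = '/';
	}
	strcpy(out, id);          /* the remaining, un-partitioned tail */
}

With widths {2, 2}, 0016212c76018d27bd1a39630c32d1027fbfbebd becomes
00/16/212c76018d27bd1a39630c32d1027fbfbebd; with {2} it is the existing
00/16212c76018d27bd1a39630c32d1027fbfbebd layout.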

-Randall


Thread overview: 4+ messages
2021-12-10 21:57 New (better?) hash map technique in limit case Philip Oakley
2021-12-10 22:52 ` Glen Choo
2021-12-12 17:43   ` Philip Oakley
2021-12-12 20:53     ` rsbecker [this message]
