Speed up trace mode (GSoC Project)

Clemens Lang cal at macports.org
Thu Apr 4 23:15:15 UTC 2019


Hi Mihir,

On Thu, Apr 04, 2019 at 06:25:00PM +0530, Mihir Luthra wrote:
> I have made the memory expansion idea precise and filtered out bugs
> from the one I sent previously.
> Please check the pdf in the attachment.

I see you're getting the hang of the difficulties of the project now :)

You have the right ideas to solve the problem. Do keep in mind though
that CAS will only work up to a word size, or a double word size at
most, i.e. atomically swapping more than 64 bits with CAS is probably
not going to work.
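
For illustration, a CAS on a single size_t field (as used in the
sketches below) could be a thin wrapper around the GCC/Clang builtin;
the wrapper itself is just an example, not part of the proposal:

  #include <stdbool.h>
  #include <stddef.h>

  // Example only: atomically set *ptr to desired iff it still equals
  // expected. This works for word-sized fields such as size_t, but not
  // for anything larger, which is exactly the limitation above.
  static inline bool CAS(size_t *ptr, size_t expected, size_t desired) {
    return __sync_bool_compare_and_swap(ptr, expected, desired);
  }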

I'm not sure whether we will actually need a separate process to
increase the allocation size. Consider this proposal:

  struct shared_block_mgmt {
    size_t last_known_size;
    size_t refcnt;
    int blockfd;
    struct shared_block* block;
  };

  struct shared_block_mgmt block_mgmt;

  struct shared_block {
    size_t size;
    size_t used;
    char memory[];
  };

This struct would mean that the first `used` bytes of `memory` are in
use by our data structure, and everything after that, up to `size`
bytes, is free[1].
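
Just to make the setup concrete, the initial mapping could look roughly
like this (a sketch only; the file name, INITIAL_SIZE and the missing
error handling are placeholders):

  #include <fcntl.h>
  #include <sys/mman.h>
  #include <unistd.h>

  // Sketch: create/open the backing file, size it, and map it shared.
  // Path and INITIAL_SIZE are placeholders; error handling is omitted.
  block_mgmt.blockfd = open("/tmp/shared_block", O_RDWR | O_CREAT, 0600);
  ftruncate(block_mgmt.blockfd, INITIAL_SIZE);
  block_mgmt.block = mmap(NULL, INITIAL_SIZE, PROT_READ | PROT_WRITE,
                          MAP_SHARED, block_mgmt.blockfd, 0);
  block_mgmt.last_known_size = INITIAL_SIZE;
  // Only the process that creates the file would initialize these:
  block_mgmt.block->size = INITIAL_SIZE;
  block_mgmt.block->used = 0;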

Any allocation request where used + request_size <= size would succeed
and we could change `used` using CAS to ensure that this allocation
operation is atomic.

Any allocation request where used + request_size > size would trigger
growing the memory area. Growing would calculate the size we want to
grow to, let's call this `target_size`. Then, it would attempt to grow
atomically using:

  size_t local_size;
  size_t local_used;
  size_t target_used;
  size_t target_size;
  bool needs_resize;
  do {
    local_used = block_mgmt.block->used;
    local_size = block_mgmt.block->size;
    target_used = local_used + request_size;
    needs_resize = target_used > local_size;
    if (needs_resize) {
      // growing required
      target_size = local_size + BLOCK_GROWTH;
      ftruncate(block_mgmt.blockfd, target_size);
    }
  } while (
      (needs_resize &&
        (!CAS(&block_mgmt.block->size, local_size, target_size) ||
         !CAS(&block_mgmt.block->used, local_used, target_used))) ||
      (!needs_resize &&
        !CAS(&block_mgmt.block->used, local_used, target_used)));

  // At this point, either resizing was not needed and block->used is
  // what we expect it to be (i.e. allocation succeeded), or resizing
  // was needed, and we did successfully resize, and after resizing we
  // did update block->used (i.e. allocation also succeeded).

This will opportunistically call ftruncate with the bigger size on the
mmap'd file descriptor. Calling ftruncate in multiple processes at the
same time should not be a problem unless the truncation would ever
shrink the file. We cannot completely rule that out, but we can make it
very unlikely by choosing BLOCK_GROWTH large enough that any pending
extension requests are processed before another extension becomes
necessary.
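
As an extra safeguard (again just a sketch, and possibly unnecessary),
each process could check the current file size first, so that a stale
target_size never shrinks the file:

  #include <sys/stat.h>

  // Sketch: only ever extend the file, even if our target_size is stale.
  struct stat sb;
  if (fstat(block_mgmt.blockfd, &sb) == 0 &&
      (size_t)sb.st_size < target_size) {
    ftruncate(block_mgmt.blockfd, target_size);
  }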

You correctly identified that we'd have to re-mmap(2) the file after
this operation, which is why I've included last_known_size in struct
shared_block_mgmt. If last_known_size differs from the size stored in
struct shared_block, that would trigger creation of a new struct
shared_block_mgmt with a fresh mmap of the larger size. We cannot
unmap the old mapping until all threads are no longer using it, so we'd
have to keep a reference count (field refcnt). And finally, we'd
probably have to check whether any offsets in the data structure are
larger than last_known_size, and if so, retry the current lookup
operation with a fresh mmap(2) of the shared memory area, since it must
have grown during the pending lookup.
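
A simplified sketch of that remap check (the function name is made up,
and it glosses over the reference counting of the old mapping and all
error handling):

  #include <sys/mman.h>

  // Sketch: re-mmap if the shared block grew since we last mapped it.
  // The old mapping really has to stay alive until its refcnt drops to
  // zero; that part is omitted here.
  static void remap_if_grown(struct shared_block_mgmt *mgmt) {
    size_t current_size = mgmt->block->size;
    if (current_size == mgmt->last_known_size)
      return;
    struct shared_block *fresh = mmap(NULL, current_size,
                                      PROT_READ | PROT_WRITE,
                                      MAP_SHARED, mgmt->blockfd, 0);
    if (fresh == MAP_FAILED)
      return;  // keep using the old (smaller) mapping for now
    mgmt->block = fresh;
    mgmt->last_known_size = current_size;
  }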

I believe this would allow us to avoid the separate process for growing
the shared memory, and would also not require a spinlock that blocks
all processes during the resize operation. What do you think?


[1] This memory management approach may actually not work for us later
on, and we may have to use a free-list or a buddy system with a bitmap;
but for the purpose of discussing the extension, let's keep the memory
allocation simple for now.

-- 
Clemens

