Fwd: Feedback request regarding speed optimisations in trace mode

Mihir Luthra 1999mihir.luthra at gmail.com
Sun Jun 9 07:53:55 UTC 2019


> This is certainly an improvement. How does it compare with running the
> same builds without trace mode? The ideal scenario would of course be to
> have trace mode incur only a barely noticeable performance penalty.

Please check out this link; I made the comparisons again for those ports
(unfortunately, I didn't note down the comparisons last time).
I have also made some changes to the code since that mail, for which I
opened a PR.


I will keep adding new data to these sheets as I run more tests.
For some ports, like db48 and glib2 (+deps), the optimisations didn't
work well. I will also re-test ports such as cmake and mariadb, which
I tested last time and which showed nice improvements, and I will keep
updating the data as the tests complete.

I could identify 2 points where I can make the library better:
1) Darwin trace does not always need to check the registry. Sometimes the
path falls under an always-allowed or always-denied prefix, such as
/usr/include (always allowed) or /usr/local (always denied). Despite this,
shared memory currently stores the complete paths. This could be made much
better if shared memory stored just the prefix in such cases, marking its
last character with '*' or some other character to indicate that all paths
with this prefix are allowed or denied. Since this case occurs very
frequently, the optimisations should improve after this change.
2) Since the data structure for shared memory is an extension of a trie,
each node stores a large array with a slot for every possible character.
A path may contain any Unicode character, so making the array of size 256
and traversing the path by its UTF-8 byte representation is possible. But
bytes 127-255 are rare in paths, and reserving slots for them leads to huge
wastage, so shared memory simply doesn't store such paths. The same applies
to characters 0-31, which are control characters like carriage return, ESC,
etc.; they can probably also be filtered out. Because a lot of nodes get
inserted, reducing this array size reduces insertion time to a large
extent.

Thanks again for all the help.

