lldb ...

Jeremy Huddleston Sequoia jeremyhu at macports.org
Sat Sep 10 08:45:38 PDT 2016


> On Sep 10, 2016, at 02:15, René J.V. Bertin <rjvbertin at gmail.com> wrote:
> 
> On Friday September 09 2016 13:59:50 Jeremy Huddleston Sequoia wrote:
> 
>>> As an aside, I'd be in favour of setting up MacPorts such that ${prefix} is owned by a ${macports_operator} who has admin rights (= myself), and reserving use of actual root privilege for those few ports that require setting up SETUID/SETGID executables or that need to create users or groups.
>> 
>> YES!  We should not be needing to do such things as root.  That is 100% true, and I am in full support of moving away from that and only using root for activate.  We should be able to use fakeroot (https://wiki.debian.org/FakeRoot) for destdir.
> 
> Why would we even require root for activation, except for the few exceptions that install items outside of ${prefix} or that install SETUID/SETGID executables owned by another user/group?

Exactly, so we'll need root access at activation time for such things.

Yes, in most cases we would probably not need root access at activation time, but for the cases you just listed, we would.  We could do a pass over the BOM (bill of materials) to determine whether root access is needed.  The UI for handling such a case is non-trivial: do we prompt once for the entire run, or escalate privileges for each port we encounter and then drop them again?  If 8 ports are being installed and 4 of them require root, prompting for the password four times is not a good user experience.
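A pass over the installed file list could look something like this sketch (hypothetical helper, not MacPorts code; a real implementation would read the port's BOM and also compare recorded ownership against the installing user):

```python
import os
import stat
import tempfile

def needs_root(destroot):
    """Return paths under destroot that would need root at activation
    time: files carrying the setuid or setgid bit, which are only
    meaningful when installed with the intended (root) ownership."""
    flagged = []
    for dirpath, _dirnames, filenames in os.walk(destroot):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mode = os.lstat(path).st_mode
            if mode & (stat.S_ISUID | stat.S_ISGID):
                flagged.append(path)
    return flagged

# Demo tree: one ordinary file and one setuid helper.
root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, "bin"))
open(os.path.join(root, "bin", "tool"), "w").close()
helper = os.path.join(root, "bin", "suid-helper")
open(helper, "w").close()
os.chmod(helper, 0o4755)

print(needs_root(root))  # only .../bin/suid-helper is flagged
```

With such a check, base could decide up front whether a run needs privilege escalation at all, rather than prompting per port.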

> It's already an implicit requirement that the MacPorts operator (the user who installs it and ports) be an admin user, and once ${prefix} is created there's no need for it and anything below it to require root access.
> 
> I've run MacPorts for years like that, and only moved away from it very recently because I made an error (forgot a ${destroot}) that caused pollution of my ${prefix}. Of course the protection I gained with that move is very relative, and depends on the destroot step NOT being run as root.
> 
> Would fakeroot work on OS X, including on versions that predate SIP/rootless?

There's absolutely nothing preventing something like that from working technically.  All it's doing is interposing FS syscalls like stat(), chown(), and chmod(), and storing that information in a shadow database.  The limitation I see is with executables that are restricted from using DYLD_INSERT_LIBRARIES for the interposition (ie: the same problem we have with darwintrace).  We would need to use our own versions of the cp, mv, chmod, chown, stat, install, etc. command-line utilities instead of the system binaries.
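The shadow idea can be sketched in Python (hypothetical names; actual fakeroot interposes the libc calls via LD_PRELOAD / DYLD_INSERT_LIBRARIES and keeps the shadow state in a separate daemon):

```python
import os

# Shadow table: inode -> pretended (uid, gid) that the build believes
# it has set, without any real privilege being exercised.
shadow = {}

def fake_chown(path, uid, gid):
    """Record the requested ownership instead of applying it."""
    shadow[os.stat(path).st_ino] = (uid, gid)

def fake_stat(path):
    """Report the pretended ownership, falling back to reality."""
    st = os.stat(path)
    uid, gid = shadow.get(st.st_ino, (st.st_uid, st.st_gid))
    return {"uid": uid, "gid": gid}

# A destdir-style install can now "chown root:wheel" as a normal user:
open("demo.bin", "w").close()
fake_chown("demo.bin", 0, 0)
print(fake_stat("demo.bin"))  # {'uid': 0, 'gid': 0} -- pretended, not real
```

Because the real filesystem metadata never changes, the recorded ownership only becomes real when the archive is later unpacked by an actually privileged process.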

> Funny btw, I trust Debian to have written a safe fakeroot implementation, but if you read the wiki you get the impression it's a dangerous little hacking tool, which could be misused easily e.g. to make any executable setuid root...

Such executables still need to be installed through real root access in order to become "really" setuid root.

>> It's quite a bit more complicated than that.  First off, these settings are on by default but can be configured through SIP flags, boot args, etc.  There are also many types of restrictions that have different effects.
> 
> Hah, TMI :)
> 
>> Because of the CS_HARD restriction, all libraries that are linked against require a valid code signature.
> 
> Out of curiosity, if an IDE were to use a proper lldb debugger implementation that uses liblldb rather than an existing external driver (lldb-mi, python, ...), will all of the IDE have to be signed or is that still a requirement only for the debugserver utility?

The process doing the debugging (the one that is calling ptrace(2)) is the one that has these restrictions, ie: debugserver.

>> This is because you likely already launched the executable, so the old signature for that particular inode was already cached.  If you copied debugserver somewhere else and then copied it back, it would have addressed the problem for you.
> 
> Presumably, but that's the point: it didn't. I tried that manually, but shouldn't reinstalling via MacPorts have taken care of that too? Because that does
> - delete the previous copy
> - install a new, unsigned copy
> - sign the new copy
> 
> I understand that the signature caching is coupled to the file's inode (whatever that is under HFS+), and the new copy indeed had a new inode. And yet another inode after signing it.

If you can reproduce this, please file a radar with details and let me know the number so I can follow up on it internally.  I haven't seen such cases.  The only issue I'm aware of is with overwriting an existing file (eg: using cp instead of mv).
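The inode-keyed caching can be illustrated with a toy model (purely illustrative; the real cache is kept by the kernel and is not user-visible like this):

```python
import os
import shutil

# Toy cache: (device, inode) -> cached signature verdict.
cache = {}

def inode_key(path):
    st = os.stat(path)
    return (st.st_dev, st.st_ino)

def check_signature(path, verdict):
    """Return the cached verdict for this inode; compute it only once."""
    return cache.setdefault(inode_key(path), verdict)

open("debugserver", "w").close()
check_signature("debugserver", False)        # unsigned: verdict cached

# Rewriting the file in place keeps the inode, so the stale verdict sticks:
with open("debugserver", "w") as f:
    f.write("signed")
print(check_signature("debugserver", True))  # False -- cached per inode

# Copying to a temp name and renaming it over the original yields a
# fresh inode, so the cache is consulted anew:
shutil.copy("debugserver", "debugserver.tmp")
os.replace("debugserver.tmp", "debugserver")
print(check_signature("debugserver", True))  # True -- new inode
```

This is why copy-away-and-back (a new inode) clears a stale verdict while overwriting in place (same inode) does not.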

>>> I'm concerned about every step that takes OS X away from a regular Unix (underneath a nice and truly integrated desktop) and towards a locked-in OS like iOS.
>> 
>> Well macOS is still UNIX.  We continue to verify that through continual conformance testing.  I don't expect that to ever change.
> 
> iOS used to be a Unix too.

iOS was never branded as UNIX.  It is UNIX-like, but it was never UNIX.

> I don't know whether one can still think of it as that, though. But anyway that wasn't my real point. For me, "regular Unix" carries connotations and associations of an open developers' OS of choice that date back to the late 80s. 

Being called UNIX has a very specific set of requirements which very few OSs actually pass.  Fedora, Ubuntu, FreeBSD, OpenBSD, NetBSD, et al. are not UNIX.  They do not pass the conformance tests and therefore can't be called UNIX.  That's why they are called UNIX-like.  They violate various POSIX specifications that they don't want to conform to.

> Something "breaks" in that perception of things when we get to the point where you have to ask (or pay) for a certificate to install and run even your own code, even if it stays away from system areas. That's exactly why I never got into iOS development, either.

You don't need to pay for a certificate to install and run your own code on macOS nor iOS.

>> FWIW, I really love my 2015 rMBP.  I was a holdout staying on the pre-retina ones so I could continue to have a DVD drive and a 2.5" drive bay, but I finally gave that up and am really glad that I did because the newer SSDs are blazing fast.
> 
> Going completely OT here :)
> 
> I guess that if I had the money I might be less sceptical, but the fact is that I simply cannot afford to dump the amount required to replace my current "mobile workstation", a mid-2011 13" MBP, with something comparable from Apple (or anyone else if I want to stick to an i7). It's got a 1TB Hitachi HDD which is plenty fast for me (and cost me all of 80€), evolved from 4 to 8 to 12GB RAM as prices dropped, and lacks an expensive LCD screen that won't outlive the computer itself. I've got a 1080p 21" external screen that is largely sufficient. When I go mobile I can either make do with the low internal resolution, or use another external screen.
> That large disk allowed me to make 4 partitions and keep large working directories like Qt5's source+build+destroot around without running out of space. I wouldn't feel comfortable at all with a non-replaceable SSD of a comparable size, knowing its cost and that rapidly evolving tech is almost by definition of largely unknown reliability. I presume you run additional storage over USB3; can you vouch that you get comparable I/O speeds out of that as you got from big internal spinners, regardless of the overall CPU (and bus) load?

I've got an internal 1TB SSD and it is more than enough for most of my needs.  I mainly use USB for thumbsticks and my camera.  In such cases, the throughput is bottlenecked on the attached device, not the bus.

That being said, my server machine is a 2012 MacMini, and I've got a DroboMini and a LaCie RAID attached to it via thunderbolt, and the throughput and latency are more than adequate for my demanding needs.

> My current MBP cost me about half a month's salary when I bought it (her? :)). That was an investment because I knew my contract was ending with little chance of finding a new job easily (prediction come true ...), justified by the fact that my previous G4 Powerbook had given me 6 years of reliable service despite rough handling. Even if I were still earning a salary I'd really hesitate to spend significantly more on a system with basically no user-replaceable off-the-shelf parts.
> As an anecdote from the dark side: I also have an Acer netbook that's been running Linux since 2012 and that cost me less than 450€ including the upgrade to 8GB and a replacement HDD after the original failed a few months ago (running ZFS, I lost only a single worthless file). It feels feeble but it can actually take a lot of abuse (and the keyboard is frankly more robust than the Apple ones). I only replaced it with another sub-400€ 11" notebook (by Clevo) because it really became too slow for what I was doing with it. I've already seen that Clevo also have a 13" line that can be configured to correspond almost exactly to what I look for, except for running OS X (but at, I'd say, less than half the price of a new MBP).

I felt the same way about non-replaceable internals for quite some time, but I don't think it's that big of a deal any more.  Most users never crack open their laptop to replace components, so it's not really an issue for them.  For folks like you and me, it might be a minor concern being limited to 1TB, but we can certainly use an external SSD or an SDXC card to increase storage as needed in the future.



