talkin' 'bout GSoC 2011
Bayard Bell
buffer.g.overflow at googlemail.com
Tue Apr 5 07:29:15 PDT 2011
On 30 Mar 2011, at 09:23, Rainer Müller wrote:
>> I'd like to see a profile,
>> test, fingerprint, and sign implementation to provide more robust
>> comparisons between builds or even update decisions (e.g. I built on
>> this platform, with these development tools, against a code base with
>> this signature, using these parameters, got binaries with
>> platform/section signatures that look like this, had dependencies that
>> look like this, ran the following test suite, and got the following
>> results; deploy the latest version of X as soon as test suite Y is
>> published under Z's signature; if I build using these parameters vs.
>> these parameters, I can see that the changes look like this and limit
>> test coverage accordingly; if I rebuild from scratch, I can confirm that
>> I get identical results; I built this port just like someone else but
>> have dependencies that are built differently, which may explain why
>> things don't work the same).
>
> Where would signatures and test results be published?
> Who writes the test suites and how do you run them?
I'll try to cover this in greater detail in a separate post. What I've been chewing on is the idea of taking a packaging system like IPS, which is approximately fit for purpose, and adapting it into a kind of message-bus workflow built on related packages and signatures. You'd have the base port package, build fingerprint packages, test packages, and binary packages. Someone updates the port, and the build fingerprint can be used to verify reproducibility or compatibility with a different build environment, runtime, or platform (or as a bug-reporting tool when it fails). Other people can check whether there are unbuilt variants or untested platforms based on the absence of packages. A test package can be downloaded with a test suite definition, its requirements, and a validation script. Someone then downloads and runs the test suite, adds the results, and signs them.

The general idea is that you end up with a tool that facilitates collaboration between active porters, starting with reproducibility and the scope of portability and moving on to functional validation. This helps you get to a point like Debian's, where you have enough participation, information, and structure to designate something as stable, unstable, or dev and to break this out for different platforms and architectures. It also gives you a lot of information for analysing variations between builds and understanding how their components vary. People have a choice about their level of participation and the amount of data they want to share: it can be limited to "here's a bug report, I'm sharing this with you to resolve my issue", or it can be a matter of putting your name to saying that something works, supported by the attached evidence.

Working backwards from some of what I've discussed in a blue-sky context, these are possibilities I see based on preliminary research into IPS and extrapolation from working through its design docs, where it seems close enough that tailoring it may be feasible.
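To make that a bit more concrete, here's a rough sketch of the kind of record a build fingerprint package might carry alongside the base port package. The field names are mine, purely illustrative, and not anything defined by IPS:

    from dataclasses import dataclass, field

    @dataclass
    class BuildFingerprint:
        # Illustrative sketch only; nothing here is an IPS schema.
        port: str                  # e.g. "db48"
        version: str               # e.g. "4.8.30_0"
        source_signature: str      # digest/signature of the code base built from
        platform: str              # e.g. "darwin-10.6-x86_64"
        toolchain: dict            # compiler, linker, and SDK versions used
        configure_args: list       # parameters the build was run with
        dependencies: dict         # dependency -> digest of its own fingerprint
        binary_digests: dict       # per-file (or per-section) digests of the result
        test_results: dict = field(default_factory=dict)  # suite id -> outcome
        signer: str = ""           # identity signing the whole record

Two builds are byte-for-byte reproducible when their binary_digests match; when they differ, comparing toolchain, configure_args, and dependencies narrows down why, which is what makes the fingerprint usable as a bug report as well.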
>> I'd like to see the namespace support more
>> strict dependency information (e.g. rather than using -L /opt/local/lib
>> and -I /opt/local/include, I'd prefer to see something closer to -L
>> /opt/local/var/macports/software/db48/4.8.30_0/lib, etc.,
>
> So as soon as I upgrade to db48 @4.8.30_1 dependents will break
I should elaborate, using a better example. ;-> The way I've seen this resolved previously is to provide symlinks for the appropriate level of link target. What I'm suggesting is that there are cases where it may be safer to relink dependencies to the new version, while there are other cases where you may be happy simply to follow the new link. Let me give a quick example using an EFS-style namespace, mapping db48 over. My link target would be:
/efs/database/db/4.8/exec/lib/libdb.dylib
Where:
/efs/database/db/4.8/ -> /efs/database/db/4.8_0
and further:
/efs/database/db/4.8_0/exec/lib/libdb.dylib -> libdb.dylib.1
The last symlink is there to support in-place patching, which requires incrementing the extension on the file name and repointing the link every time you patch: linker/loader targets are resolved against the symlink, but the rtld uses the destination, so the symlink can be repointed to a file with different relocations without coring a running application. (I suspect that not having this kind of namespace is at least one of the reasons why Apple seems to want people to export the contents of /Network/Library and /Network/Applications over AFP rather than NFS, thus changing the underlying filesystem semantics.) The rule implemented in EFS, however, is that you can't unlock a release once you've made it production. For binaries you'd turn over the production release to links into the top level of the metaproject space, something like
/efs/macports/bin/port
Where:
/efs/macports/bin/port -> /efs/macports/port/prod/bin/port
and:
/efs/macports/port/prod -> /efs/macports/port/1.9.2_0
and:
/efs/macports/port/1.9.2/bin/port -> /efs/macports/port/1.9.2_0/common/bin/port
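Going back to the in-place patching I mentioned, here's a minimal sketch of the mechanics, assuming the patched file has already been staged somewhere; it makes no claim to match what EFS actually does beyond the symlink dance:

    import os
    import shutil

    def patch_in_place(link, patched_file):
        """Install a patched library under the next incremented extension and
        repoint the symlink atomically: running processes keep the mapping
        they already have, while new loads pick up the new relocations."""
        directory = os.path.dirname(link)
        base = os.path.basename(link)
        # Find the next free extension: libdb.dylib.1, libdb.dylib.2, ...
        n = 1
        while os.path.exists(os.path.join(directory, "%s.%d" % (base, n))):
            n += 1
        shutil.move(patched_file, os.path.join(directory, "%s.%d" % (base, n)))
        # Repoint through a temporary link plus rename so the link never dangles.
        tmp = link + ".tmp"
        os.symlink("%s.%d" % (base, n), tmp)
        os.replace(tmp, link)

    # e.g. patch_in_place("/efs/database/db/4.8_0/exec/lib/libdb.dylib",
    #                     "/var/tmp/libdb-patched.dylib")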
There are additional fine points to consider, like whether you should have default behaviour that prevents the contents of include, lib, and friends from being turned over into a metaproject default, as the default practice would be to use them through a version-specific namespace. (There are also some considerations about what kinds of defaults and links can point to releaselinks, but let's not worry about that now.)
When you want to activate a port, the operation may be as quick as flipping the releaselink, although you obviously need to make sure that the destinations dependents are linked against haven't been added or deleted within the new release. This also makes it really easy to back out to a previous release.
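A sketch of the flip itself, with the pre-flight check reduced to "are the linked destinations still there" (names hypothetical; backing out is the same call with the previous release directory):

    import os

    def flip_releaselink(link, new_release, linked_paths):
        """Confirm that the paths dependents were linked against still exist
        in the new release, then repoint the releaselink atomically."""
        missing = [p for p in linked_paths
                   if not os.path.exists(os.path.join(new_release, p))]
        if missing:
            raise RuntimeError("new release drops linked targets: %r" % missing)
        tmp = link + ".new"
        os.symlink(new_release, tmp)
        os.replace(tmp, link)  # readers see either the old or the new release

    # e.g. flip_releaselink("/efs/database/db/4.8", "/efs/database/db/4.8_1",
    #                       ["exec/lib/libdb.dylib"])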
The exact practices for determining releaselink targets are context-dependent: some libraries have strong practices about maintaining ABI and do so reliably (you could use something like ASAT and Upstream Tracker to test this). On the other hand, even if you don't have to relink, you may want to keep your current version pinned to what already works until you can test against the new version, which means rebuilding dependents against revision-level targets and repointing their releaselinks after testing. Which is to say: test coverage is a significant, if not determinative, consideration, along with ABI-level interface stability.
In the non-C/C++ world, I'd add that there are languages like Perl where some modules don't attempt to provide stable interfaces or behaviour. In those cases, you might not provide releaselinks at all. The solution to that was to add a module providing a BEGIN block that simplified the resultant changes to PERL5LIB, so that a dependency could be expressed as something like module/1.0.1, where module is assumed to be under the perl5 meta if there's just one separator. If the module is specific to a perl release, its release would be something like 1.0.1-5.8 or 1.0.1-5.12, and the module handling versionised dependencies would hide the extension, making it easier to move between interpreter versions. Those ports could be tagged to automatically retain releases, and you'd need to manage the resulting dependencies for software you're writing. The same module that manages these dependencies would of course also provide an audit trail that could be checked at install time and verified again at runtime.
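Purely to illustrate the naming scheme (the real mechanism was a Perl module pulled in from a BEGIN block, and the layout here is my guess), the mapping it would have to do looks something like:

    import os

    def module_dir(spec, perl_version="5.12", root="/efs"):
        """Map a versionised module spec to the directory that would go on
        PERL5LIB. One separator means the module is assumed to live under the
        perl5 meta; an interpreter-specific release is tried first and its
        suffix stays hidden from the caller."""
        parts = spec.split("/")
        if len(parts) == 2:
            meta = "perl5"
            name, release = parts
        else:
            meta, name, release = parts
        for r in ("%s-%s" % (release, perl_version), release):
            candidate = os.path.join(root, meta, name, r, "lib")
            if os.path.isdir(candidate):
                return candidate
        raise LookupError("no installed release matching %s" % spec)

    # module_dir("Foo-Bar/1.0.1")          -> /efs/perl5/Foo-Bar/1.0.1/lib
    # module_dir("database/Foo-Bar/1.0.1") -> /efs/database/Foo-Bar/1.0.1/lib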
In the context of something like MacPorts, the depends line could work with this by using a prod releaselink if the module didn't have an additional separator and version specification, provided that the release already had a "prod" releaselink. Otherwise the dependency could be fetched and maintained with a "current" releaselink, unless the port was tagged as unstable, in which case the dependency could be expanded to be version-specific. In any case, the dependency specification can pretty easily be flipped so that I'm not just doing an existence test but adding the appropriate LDFLAGS and such.
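As a sketch of that expansion (function name and layout are hypothetical, not existing MacPorts behaviour):

    import os

    def expand_depends(meta, port, version=None, unstable=False, root="/efs"):
        """Expand a depends entry to build flags: an explicit version pins the
        release, an unstable port is pinned to a concrete installed release,
        otherwise a "prod" releaselink is followed if present, falling back
        to "current"."""
        base = os.path.join(root, meta, port)
        if version:
            release = version
        elif unstable:
            # Pin to a concrete release rather than following a link
            # (naive sort, purely for illustration).
            release = sorted(d for d in os.listdir(base)
                             if not os.path.islink(os.path.join(base, d)))[-1]
        elif os.path.islink(os.path.join(base, "prod")):
            release = "prod"
        else:
            release = "current"
        prefix = os.path.join(base, release, "exec")
        return {"LDFLAGS": "-L" + os.path.join(prefix, "lib"),
                "CPPFLAGS": "-I" + os.path.join(prefix, "include")}

    # e.g. expand_depends("database", "db", version="4.8")
    #      -> {"LDFLAGS": "-L/efs/database/db/4.8/exec/lib", ...}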
Another useful way to use releaselinks is for variants: variants would be expansions of the version, allowing variants or variant-specific content to be available in parallel, with the releaselink thus able to default a variant. (This begins to present problems when you've got a number of variants that need to be combined.)
Forgive me if this is frustratingly mixed between discussion of abstractions and particulars.
Cheers,
Bayard