Discussion:
Go (golang) packaging, part 2
Michael Stapelberg
2013-01-27 19:20:02 UTC
Hi,

I have been in contact with a few Go people and we have worked out the
following:

Go libraries (not binaries!) should be present in Debian _only_ for the
purpose of building Debian binary packages. They should not be used
directly for Go development¹.

Go library Debian packages such as golang-codesearch-dev will ship the
full source code (required) in /usr/lib/gocode plus statically compiled
object files (not required, but no downsides) compiled with gc from
the golang-go Debian package.
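
To make that concrete, here is a rough sketch with made-up names: the
import path example.org/foolib and the package golang-foolib-dev are
purely hypothetical, and the exact layout below /usr/lib/gocode might
still change (the go tool expects a src/ directory under each GOPATH
entry, so it would presumably look like this):

  // Shipped by the hypothetical golang-foolib-dev package as
  // /usr/lib/gocode/src/example.org/foolib/foolib.go:
  package foolib

  // Greeting returns a fixed string; it exists only for illustration.
  func Greeting() string { return "hello from a packaged Go library" }

  // main.go in the Debian source package of some Go program, built with
  // GOPATH pointing at /usr/lib/gocode so that the import resolves
  // against the packaged source above:
  package main

  import (
          "fmt"

          "example.org/foolib"
  )

  func main() {
          fmt.Println(foolib.Greeting())
  }

End users never need to touch any of this; it only matters while the
Debian binary package is being built.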

At least at the moment, binaries must be linked statically since dynamic
linking is not yet a viable solution². I acknowledge that this
introduces some unfortunate implications (as discussed in the previous
thread), but we have no alternative at the moment and I think we can
make it work reasonably well.

I intend to upload the codesearch package soon as an example (and
because it’s useful ;-)).

① https://wiki.debian.org/MichaelStapelberg/GoPackaging#Why_should_I_use_.2BIBw-go_get.2BIB0_instead_of_apt-get_to_install_Go_libraries.3F
② http://thread.gmane.org/gmane.comp.lang.go.general/81527/focus=81687
iant (gccgo maintainer) states that one cannot use the go tool to
build a shared library.
--
Best regards,
Michael
Hilko Bengen
2013-01-28 23:50:01 UTC
Post by Michael Stapelberg
Go libraries (not binaries!) should be present in Debian _only_ for
the purpose of building Debian binary packages. They should not be
used directly for Go development¹.
This is a pity for those of us who don't really subscribe to the "get
everything from github as needed" model of distributing software.
Post by Michael Stapelberg
Go library Debian packages such as golang-codesearch-dev will ship the
full source code (required) in /usr/lib/gocode plus statically
compiled object files (not required, but no downsides) compiled with
gc from the golang-go Debian package.
Since one of the stated goals of the Go language and also of the golang
compiler is fast builds: How about using the Emacs / Common-Lisp /
Python approach: Ship only source files in the .deb packages and build
object files during post-install?
Post by Michael Stapelberg
At least at the moment, binaries must be linked statically since dynamic
linking is not yet a viable solution². I acknowledge that this
introduces some unfortunate implications (as discussed in the previous
thread), but we have no alternative at the moment and I think we can
make it work reasonably well.
How does gccgo fit into this picture, apart from the problem that object
files generated using gccgo are not compatible with those generated
using golang-go?
Post by Michael Stapelberg
I intend to upload the codesearch package soon as an example (and
because it’s useful ;-)).
Having found codesearch to be very useful, I am looking forward to that.

Thank you for coordinating things with upstream.

Cheers,
-Hilko
Michael Stapelberg
2013-01-29 07:30:02 UTC
Hi Hilko,
Post by Hilko Bengen
This is a pity for those of us who don't really subscribe to "get
everything from github as needed" model of distributing software.
Yes, but at the same time, it makes Go much more consistent across
multiple platforms. We should tackle one issue at a time. I suppose in
the future, upstream’s take on library distribution might change. For
now I agree with upstream on this — not introducing another source of
errors/mistakes for the end user (version problems involving not only go
get, but also a Debian version of some library) seems like a good idea.
Post by Hilko Bengen
Post by Michael Stapelberg
Go library Debian packages such as golang-codesearch-dev will ship the
full source code (required) in /usr/lib/gocode plus statically
compiled object files (not required, but no downsides) compiled with
gc from the golang-go Debian package.
Since one of the stated goals of the Go language and also the golang
compiler are fast builds: How about using the Emacs / Common-Lisp /
Python approach: Ship only source files in the .deb packages and build
object files during post-install?
What advantage would that have? It won’t change the fact that we will
only distribute Go libraries for building Debian binary packages.
Post by Hilko Bengen
How does gccgo fit into this picture, apart from the problem that object
files generated using gccgo are not compatible with those generated
using golang-go?
I tried to explain that one cannot use gccgo to create dynamically
linked shared libraries from Go libraries. At least not at the
moment. All it can do is dynamically linked executables.
--
Best regards,
Michael
Wouter Verhelst
2013-01-29 08:50:02 UTC
Post by Michael Stapelberg
Hi Hilko,
Post by Hilko Bengen
This is a pity for those of us who don't really subscribe to "get
everything from github as needed" model of distributing software.
Yes, but at the same time, it makes Go much more consistent across
multiple platforms.
"consistency across multiple platforms" has been claimed as a benefit
for allowing "gem update --system" to replace half of the ruby binary
package, amongst other things. It wasn't a good argument then, and it
isn't a good argument now.
Post by Michael Stapelberg
We should tackle one issue at a time. I suppose in
the future, upstream’s take on library distribution might change. For
now I agree with upstream on this — not introducing another source of
errors/mistakes for the end user (version problems involving not only go
get, but also a Debian version of some library) seems like a good idea.
The problem with having a language-specific "get software" command is
that it introduces yet another way to get at software. There are many
reasons why that's a bad idea, including, but not limited to:
- most config management systems support standard packages, but not
  language-specific "get software" commands, making maintenance of
  multiple systems with config management harder if there aren't
  any distribution packages for the things you want/need to have.
- It's yet another command to learn for a sysadmin.
- It makes it harder for the go program to declare a dependency on
  non-go software, or vice versa.

So there are real and significant benefits to be had by actually trying
to do this right; settling for "this will have to do" (as opposed to
"this will have to do /for now/, but we'll tackle doing it better once
this bit works right") would be a pity.
--
Copyshops should do vouchers. So that next time some bureaucracy requires you
to mail a form in triplicate, you can mail it just once, add a voucher, and
save on postage.
Iustin Pop
2013-01-29 12:00:02 UTC
Post by Wouter Verhelst
Post by Michael Stapelberg
Hi Hilko,
Post by Hilko Bengen
This is a pity for those of us who don't really subscribe to "get
everything from github as needed" model of distributing software.
Yes, but at the same time, it makes Go much more consistent across
multiple platforms.
"consistency across multiple platforms" has been claimed as a benefit
for allowing "gem update --system" to replace half of the ruby binary
package, amongst other things. It wasn't a good argument then, and it
isn't a good argument now.
Post by Michael Stapelberg
We should tackle one issue at a time. I suppose in
the future, upstream’s take on library distribution might change. For
now I agree with upstream on this — not introducing another source of
errors/mistakes for the end user (version problems involving not only go
get, but also a Debian version of some library) seems like a good idea.
The problem with having a language-specific "get software" command is
that it introduces yet another way to get at software. There are many
reasons why that's a bad idea, including, but not limited to:
- most config management systems support standard packages, but not
  language-specific "get software" commands, making maintenance of
  multiple systems with config management harder if there aren't
  any distribution packages for the things you want/need to have.
- It's yet another command to learn for a sysadmin.
- It makes it harder for the go program to declare a dependency on
  non-go software, or vice versa.
So there are real and significant benefits to be had by actually trying
to do this right, meaning, "this will have to do" (as opposed to "this
will have to do /for now/, but we'll tackle doing it better once this
bit works right") would be a pity.
I would add one thing here: Haskell/GHC also (currently) doesn't create
shared libraries, and instead builds the program statically, but the
Debian Haskell group still tries to package the development libraries
as best they can, for all the reasons above (which are very good
reasons, IMHO).

So, take this as an example of another language which doesn't do shared
linking but for which libraries are still packaged in Debian.

regards,
iustin
Paul Wise
2013-01-30 09:00:02 UTC
Post by Iustin Pop
I would add one thing here: Haskell/GHC also (currently) doesn't create
shared libraries, and instead builds the program statically, but the
Debian Haskell group still tries to package as best as they can the
development libraries, for all the reasons above (which are very good
reasons, IMHO).
So, take this as an example of another language which doesn't do shared
linking but for which libraries are still packaged in Debian.
Do all Haskell packages add Built-Using headers?
--
bye,
pabs

http://wiki.debian.org/PaulWise
Joachim Breitner
2013-01-30 11:40:02 UTC
Hi,
Post by Paul Wise
Post by Iustin Pop
I would add one thing here: Haskell/GHC also (currently) doesn't create
shared libraries, and instead builds the program statically, but the
Debian Haskell group still tries to package as best as they can the
development libraries, for all the reasons above (which are very good
reasons, IMHO).
So, take this as an example of another language which doesn't do shared
linking but for which libraries are still packaged in Debian.
Do all Haskell packages add Built-Using headers?
no, not yet (it is a relatively young addition to the Debian policy). I
have filed a bug against haskell-devscripts to remember that (#699329).

Maybe dh_buildinfo can be extended to output its results as a substvar?
After all, it already collects a (safe) approximation of the data
expected in the Built-Using header:

„This script is designed to be run at build-time, and registers
in a file the list of packages declared as build-time
dependencies, as well as build-essential packages, together with
their versions, as installed in the build machine.“

So all that seems to be missing is to look up the source names and
versions of these packages.
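
Purely to illustrate that lookup step (and only because this thread is
about Go anyway), a rough sketch; this is not what dh_buildinfo or
haskell-devscripts actually do, and the dpkg-query field names are from
memory, so please double-check them:

  // builtusing.go: given installed binary package names on the command
  // line, print a Built-Using-style line by asking dpkg-query for the
  // corresponding source package names and source versions.
  package main

  import (
          "fmt"
          "os"
          "os/exec"
          "strings"
  )

  func main() {
          format := "${source:Package} (= ${source:Version})\n"
          args := append([]string{"-W", "-f", format}, os.Args[1:]...)
          out, err := exec.Command("dpkg-query", args...).Output()
          if err != nil {
                  fmt.Fprintln(os.Stderr, "dpkg-query failed:", err)
                  os.Exit(1)
          }
          entries := strings.Split(strings.TrimSpace(string(out)), "\n")
          // Duplicates (several binaries built from one source) are not
          // folded here; a real implementation would deduplicate them.
          fmt.Println("Built-Using: " + strings.Join(entries, ", "))
  }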

Greetings,
Joachim
--
Joachim "nomeata" Breitner
Debian Developer
***@debian.org | ICQ# 74513189 | GPG-Keyid: 4743206C
JID: ***@joachim-breitner.de | http://people.debian.org/~nomeata
Stefano Zacchiroli
2013-01-30 12:10:02 UTC
Post by Paul Wise
Post by Iustin Pop
So, take this as an example of another language which doesn't do shared
linking but for which libraries are still packaged in Debian.
FWIW, OCaml is another such example.
Post by Paul Wise
Do all Haskell packages add Built-Using headers?
The OCaml toolchain in Debian doesn't either, #699330 has now been filed
to address that.

Cheers.
--
Stefano Zacchiroli . . . . . . . ***@upsilon.cc . . . . o . . . o . o
Maître de conférences . . . . . http://upsilon.cc/zack . . . o . . . o o
Debian Project Leader . . . . . . @zack on identi.ca . . o o o . . . o .
« the first rule of tautology club is the first rule of tautology club »
Paul Wise
2013-01-30 12:40:01 UTC
Post by Stefano Zacchiroli
Post by Paul Wise
Post by Iustin Pop
So, take this as an example of another language which doesn't do shared
linking but for which libraries are still packaged in Debian.
FWIW, OCaml is another such example.
Post by Paul Wise
Do all Haskell packages add Built-Using headers?
The OCaml toolchain in Debian doesn't either, #699330 has now been filed
to address that.
Could folks who are familiar with these languages submit bugs against
lintian about detecting packages for these languages that should have
Built-Using but don't?
--
bye,
pabs

http://wiki.debian.org/PaulWise
Michael Stapelberg
2013-01-29 22:00:01 UTC
Hi Wouter,
Post by Wouter Verhelst
"consistency across multiple platforms" has been claimed as a benefit
for allowing "gem update --system" to replace half of the ruby binary
package, amongst other things. It wasn't a good argument then, and it
isn't a good argument now.
I am not familiar with the Ruby situation; I only know that many Ruby
developers seem to be very angry at Debian people. Is there a summary of
the events that I could read?
Post by Wouter Verhelst
- most config management systems support standard packages, but not
language-specific "get software" commands, making maintenance of
multiple systems with config management harder if there aren't
any distribution packages for the things you want/need to have.
“most config management systems” should be fixed, then. If you mean
systems like puppet with that statement, I don’t see a reason why you
would _ever_ want to obtain a Go library with puppet. When deploying Go
software, you either a) compile it on your development machine, then
distribute it (no libraries needed) or b) install the Debian package
(libraries will be available, as explained in previous emails).
Post by Wouter Verhelst
- It's yet another command to learn for a sysadmin.
Sysadmins are not developers and therefore don’t need to learn any new
commands.
Post by Wouter Verhelst
- It makes it harder for the go program to declare a dependency on
non-go software, or vice versa
Dependency as in Debian package dependency? In that case I really don’t
understand what argument you are trying to make here.

Overall, I am not convinced that using “go get” on Debian is evil™.
Post by Wouter Verhelst
So there are real and significant benefits to be had by actually trying
to do this right, meaning, "this will have to do" (as opposed to "this
will have to do /for now/, but we'll tackle doing it better once this
bit works right") would be a pity.
I have no interest in actively working against upstream’s
recommendations and against my own beliefs. Not now, not in the future.
--
Best regards,
Michael
Chow Loong Jin
2013-01-30 03:20:02 UTC
Post by Michael Stapelberg
I am not familiar with the Ruby situation, I only know that many Ruby
developers seem to be very angry at Debian people. Is there a summary of
the events that I could read?
I'm not very familiar with the situation myself, but the gist of it, as I
understand, is that Ruby upstream wants everything to be installed via RubyGems,
while we want everything to be installed via dpkg. I think there was some issue
regarding which paths can be controlled by which package manager.
Post by Michael Stapelberg
Dependency as in Debian package dependency? In that case I really don’t
understand what argument you are trying to make here.
Overall, I am not convinced that using “go get” on Debian is evil™.
Having multiple package managers which don't know about each other on a system
is evil™ (but in some cases, can be managed properly).

The issue here is that:
1. If software that depends on native packages is installed using "go get"
   or whatever other language-specific package manager (e.g. pip for Python
   or gem for Ruby), there is no way to declare a dependency on those. For
   example, the Python mysql bindings require some MySQL C headers which
   are available in Apt, but you won't know until your pip install run
   fails due to missing headers. After you're done, you move on to your
   next dependency, which also fails due to missing headers, and the next,
   and the next…

2. If something installed from your language-specific package manager is in
   a path that a Debian package overrides, dpkg is going to barf because it
   doesn't know where those files come from. This of course applies to
   randomly ./configure-ing stuff with --prefix=/usr and running make
   install.

3. Software packages from Apt cannot declare dependencies against
   language-specific packages, for the same reasons highlighted in #1.

Now, one case where these problems are (partially) mitigated is when
installing using the upstream package manager into a language-specific sandboxed
environment, e.g. python-virtualenv. I say partially because this doesn't solve
the issue in #1. In fact, my example there was something I ran into while
running a pip install inside a virtualenv.
Post by Michael Stapelberg
I have no interest in actively working against upstream’s
recommendations and against my own beliefs. Not now, not in the future.
Upstreams should always be OS/distro-agnostic wherever possible, so the
language-specific package manager will always be the recommendation,
even if it isn't the best solution.
--
Kind regards,
Loong Jin
Michael Stapelberg
2013-01-30 08:30:02 UTC
Hi Chow,
Post by Chow Loong Jin
1. If software that depends on native packages is installed using "go get"
or whatever other language-specific package manager, e.g. pip for Python or
gem for Ruby is installed, there is no way to declare a dependency on
those. For example, the Python mysql bindings require some MySQL C headers
which are available in Apt, but you won't know until your pip install run
fails due to missing headers. After you're done, you move on to your next
dependency which also fails due to missing headers, and the next, and the
next…
True.
Post by Chow Loong Jin
2. If something installed from your language-specific package manager is in a
path that a Debian package overrides, dpkg is going to barf because it
doesn't know where those files come from. This of course applies to
randomly ./configure-ing stuff with --prefix=/usr and running make install.
Not going to happen in the case of Go. “go get” _always_ installs into a
user-defined path, with ~/gocode being the recommendation.
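
If you want to see for yourself which workspace “go get” would touch,
here is a trivial sketch (the file name is made up; with GOPATH set to
$HOME/gocode, sources end up under $HOME/gocode/src/<import path>,
never under anything dpkg manages):

  // gopathcheck.go: print the workspace the go tool would use.
  package main

  import (
          "fmt"
          "go/build"
  )

  func main() {
          fmt.Println("go get would use:", build.Default.GOPATH)
  }
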
Post by Chow Loong Jin
3. Software packages from Apt cannot declare dependencies against
language-specific packages, for the same reasons highlighted in #1.
Irrelevant argument in our case, as outlined earlier in the discussion.
--
Best regards,
Michael
Marcelo E. Magallon
2013-01-30 12:40:02 UTC
Post by Michael Stapelberg
Post by Chow Loong Jin
3. Software packages from Apt cannot declare dependencies against
language-specific packages, for the same reasons highlighted in #1.
Irrelevant argument in our case, as outlined earlier in the
discussion.
That's not quite true.

If a Debian package uses a service (think database, webserver,
etc.) provided by a language-specific package, you cannot declare
such a dependency because Apt doesn't have visibility into the
other system's information.

At that point you are in the Rubygems situation, where you have
to start packaging the language-specific packages for Debian.
You get into trouble when rubygems sees package-0.0.1 installed
(and managed by dpkg) and the user installs (via gems)
application-1.0.0 which declares a dependency on package-0.1.0.

Now what would you expect to happen?

rubygems tells dpkg to upgrade package-0.0.1? What about the
other stuff managed by dpkg which depends on package-0.0.1?

rubygems installs package-0.1.0 on its own? What happens to
package-0.1.0 managed by dpkg?

What happens if application-1.0.0 gets packaged for dpkg?
Should it be handed over from one system to the other?

Should rubygems packages be hidden from dpkg, letting the two
systems coexist in parallel?

This situation gets so messy that some Ruby developers prefer
to _not_ install ruby via dpkg at all. And they get mad when
something in the dependency chain pulls Ruby into the system.

Marcelo
Steve McIntyre
2013-01-30 16:30:02 UTC
Post by Marcelo E. Magallon
This situation gets so messy, that some Ruby developers prefer
to _not_ install ruby via dpkg at all. And they get mad when
something in the dependency chain pulls Ruby into the system.
To be fair, it's similar in reverse. Some Debian developers prefer to
_not_ install Ruby *at all*. Given how utterly awful the internals of
the language implementation are, I'd happily support dropping Ruby
from Debian altogether. Maybe that's just me...
--
Steve McIntyre, Cambridge, UK. ***@einval.com
"We're the technical experts. We were hired so that management could
ignore our recommendations and tell us how to do our jobs." -- Mike Andrews
Marc Haber
2013-01-30 19:40:03 UTC
Post by Steve McIntyre
To be fair, it's similar in reverse. Some Debian developers prefer to
_not_ install Ruby *at all*. Given how utterly awful the internals of
the language implementation are, I'd happily support dropping Ruby
from Debian altogether. Maybe that's just me...
Ruby is actually a pretty nice language, and it is needed by the two
major Configuration Management tools that are both widely used in
business, puppet and chef.

I have never looked into Ruby's internals, but I happen to like its
outside.

Greetings
Marc
--
-------------------------------------- !! No courtesy copies, please !! -----
Marc Haber | " Questions are the | Mailadresse im Header
Mannheim, Germany | Beginning of Wisdom " | http://www.zugschlus.de/
Nordisch by Nature | Lt. Worf, TNG "Rightful Heir" | Fon: *49 621 72739834
Игорь Пашев
2013-01-30 22:30:01 UTC
Post by Marc Haber
Post by Steve McIntyre
To be fair, it's similar in reverse. Some Debian developers prefer to
_not_ install Ruby *at all*. Given how utterly awful the internals of
the language implementation are, I'd happily support dropping Ruby
from Debian altogether. Maybe that's just me...
Ruby is actually a pretty nice language, and it is needed by the two
major Configuration Management tools that are both widely used in
business, puppet and chef.
Don't forget vim-addon-manager!
Jean-Christophe Dubacq
2013-01-31 05:40:01 UTC
Post by Игорь Пашев
Post by Marc Haber
Post by Steve McIntyre
To be fair, it's similar in reverse. Some Debian developers prefer to
_not_ install Ruby *at all*. Given how utterly awful the internals of
the language implementation are, I'd happily support dropping Ruby
from Debian altogether. Maybe that's just me...
Ruby is actually a pretty nice language, and it is needed by the two
major Configuration Management tools that are both widely used in
business, puppet and chef.
Don't forget vim-addon-manager!
Don't forget package.el for emacs!
--
Jean-Christophe Dubacq
Chow Loong Jin
2013-01-31 08:30:01 UTC
Post by Jean-Christophe Dubacq
Don't forget package.el for emacs!
Wait, what? package.el uses Ruby, and not elisp?
--
Kind regards,
Loong Jin
Dmitrijs Ledkovs
2013-01-31 10:00:02 UTC
Post by Chow Loong Jin
Post by Jean-Christophe Dubacq
Don't forget package.el for emacs!
Wait, what? package.el uses Ruby, and not elisp?
How did we start a thread on go packaging and now mention the TeX Live
package manager (tlmgr)?!

Regards,

Dmitrijs.
Paul Wise
2013-01-30 09:00:02 UTC
Post by Chow Loong Jin
Having multiple package managers which don't know about each other on a system
is evil™ (but in some cases, can be managed properly).
Some integration between dpkg and domain-specific package managers
could be useful. With DEP-11, we could have 'cpan install Foo::Bar'
tell you "Foo::Bar is available in your distribution's package
manager, you can install it with apt-get install libfoo-bar-perl.".

Also, GoboLinux apparently have a more advanced system for integration
between their package manager and others like rubygems/cpan.
Unfortunately the video for the presentation about it at LCA 2010 was
lost, but maybe folks who are interested in this could take a look at
the implementation.
--
bye,
pabs

http://wiki.debian.org/PaulWise
Thibaut Paumard
2013-01-30 09:30:03 UTC
Post by Paul Wise
Post by Chow Loong Jin
Having multiple package managers which don't know about each other on a system
is evil™ (but in some cases, can be managed properly).
Some integration between dpkg and domain-specific package managers
could be useful. With DEP-11, we could have 'cpan install Foo::Bar'
tell you "Foo::Bar is available in your distribution's package
manager, you can install it with apt-get install libfoo-bar-perl.".
Also, GoboLinux apparently have a more advanced system for integration
between their package manager and others like rubygems/cpan.
Unfortunately the video for the presentation about it at LCA 2010 was
lost, but maybe folks who are interested in this could take a look at
the implementation.
FWIW, in the case of the Yorick language, I have proposed and developed
with upstream a solution which works reasonably well for the Yorick
community:

- the yorick internal package manager can install (by default) files
either in /usr/local or in the user home directory;

- Debian packages for an add-on drop a file somewhere under /usr/lib so
that the yorick internal package manager knows what version of which
add-ons are installed and does not unnecessarily install another version
of an add-on for solving a dependency.

However, the internal package manager only knows what is installed, not
what is available, and dpkg of course has no idea of what the yorick
internal package manager has installed.

Kind regards, Thibaut.
Thorsten Glaser
2013-01-30 21:20:03 UTC
Post by Paul Wise
Post by Chow Loong Jin
Having multiple package managers which don't know about each other on a system
is evil™ (but in some cases, can be managed properly).
Meh, it’s evil, period.
Post by Paul Wise
Some integration between dpkg and domain-specific package managers
could be useful. With DEP-11, we could have 'cpan install Foo::Bar'
tell you "Foo::Bar is available in your distribution's package
manager, you can install it with apt-get install libfoo-bar-perl.".
That would be nice but would not need integration, just a replacement of
cpan with a small script ;-)

And, of course, having all of CPAN packaged properly. Which comes to…
Post by Paul Wise
Also, GoboLinux apparently have a more advanced system for integration
between their package manager and others like rubygems/cpan.
… something like that. Not quite. I don’t think it’s worth the pain;
it may be manageable for Perl, and maybe PEAR, and with even bigger
pain maybe even pypi, but not for Ruby and similarly hostile upstreams.

My co-developer (on the MirBSD side) benz has written a script that
almost automates creation of a port (source package) from/for a CPAN:
http://www.slideshare.net/bsiegert/painless-perl-ports-with-cpan2port
(It looks like even MacPorts has adopted it!)

Of course, it needs some manual review (and someone’d have to convert
its output to Debian source packages, or merge it into the already
existing dh-make-perl which also somewhat worked when I tried it), but
it would make achieving this goal possible (and let running dpkg require
128 MiB of RAM or so, to fit the list of packages into it, I guess, but
even those Amigas have that).

bye,
//mirabilos (with his Synergy hat)
PS: Steve is not alone in thinking of kicking it…
Russ Allbery
2013-01-30 21:40:02 UTC
Post by Thorsten Glaser
My co-developer (on the MirBSD side) benz has written a script that
almost automates creation of a port (source package) from/for a CPAN:
http://www.slideshare.net/bsiegert/painless-perl-ports-with-cpan2port
(It looks even MacPorts has adopted it!)
Of course, it needs some manual review (and someone’d have to convert
its output to Debian source packages, or merge it into the already
existing dh-make-perl which also somewhat worked when I tried it), but
it would make achieving this goal possible (and let running dpkg require
128 MiB of RAM or so, to fit the list of packages into it, I guess, but
even those Amigas have that).
Do you think there's something substantial missing from the existing
Debian packaging of Perl modules? I'm quite happy with what Debian is
doing already and am somewhat dubious there is any point in changing
anything. Most of what's available in CPAN that isn't already packaged is
either new or quite obscure, and much of what's available but obscure
probably *shouldn't* be packaged: it's buggy, abandoned, an inferior
version of something that's already packaged, or otherwise just not of
general interest.
--
Russ Allbery (***@debian.org) <http://www.eyrie.org/~eagle/>
Thorsten Glaser
2013-01-30 21:50:01 UTC
Post by Russ Allbery
Do you think there's something substantial missing from the existing
Debian packaging of Perl modules? I'm quite happy with what Debian is
(Oops. Forgot to fix a spelling mistake in the Subject in my
first reply. It’s not Go!) (Hello GMane, no I was *not* top-posting…
but apparently I have to move this paragraph down.)

I’m not versed enough in the ways of Perl to be able to comment
on this. I meant no slight in any way to the existing packages.
I was just speculating on missing things that need to be packaged…
Post by Russ Allbery
anything. Most of what's available in CPAN that isn't already packaged is
either new or quite obscure, and much of what's available but obscure
probably *shouldn't* be packaged: it's buggy, abandoned, an inferior
version of something that's already packaged, or otherwise just not of
general interest.
… but apparently, there’s no need to do anything more ;-)

That is, of course, even better. Then, just use the packaged things.

(Side note: not that I like PHP even one bit, but at least it doesn’t
make actually packaging its thingies hard. With my FusionForge developer
hat on, we generally have everything needed already in Debian or can
easily add it, so that we can really say what the DDs in the upstream
team prefer: if it’s not in Debian, it doesn’t exist.)

bye,
//mirabilos

PS: Please do not call golang “Go”; it’s a hostile name take-over that
could have easily been fixed (instead of becoming hostile; I’m sure
it was just an oversight initially) when Issue 9 was filed. I follow
the tradition of calling the language “Issue 9” in reference to both that
bug and Plan 9 instead, but “golang” is probably fine. Or “go-nuts”…
Lennart Sorensen
2013-01-31 00:00:01 UTC
Post by Thorsten Glaser
Meh, it’s evil, period.
Absolutely. As a user I have a nice package management system that I
know how to use and which works well. I don't need another one.

It is not the job of a language developer to invent yet another bloody
package distribution and installation system. Just because windows
doesn't have a decent way to handle software installations doesn't mean
other systems don't know how to do it well.
Post by Thorsten Glaser
That would be nice but not need integration, just replacement of
cpan with a small script ;-)
And, of course, having all of CPAN packaged properly. Which comes to…
I think having the useful bits done is fine, and dh-make-perl is pretty
good for the few times you want to try something that isn't already
packaged (and probably just long enough to find out it wasn't worth
using in the first place).
Post by Thorsten Glaser
… something like that. Not quite. I don’t think it’s worth the pain;
it may be manageable for Perl, and maybe PEAR, and with even bigger
pain maybe even pypi, but not for Ruby and similarily hostile upstreams.
Much as I like PHP, I really hate PEAR. I don't want another add-on
system to manage.
Post by Thorsten Glaser
My co-developer (on the MirBSD side) benz has written a script that
almost automates creation of a port (source package) from/for a CPAN:
http://www.slideshare.net/bsiegert/painless-perl-ports-with-cpan2port
(It looks even MacPorts has adopted it!)
Of course, it needs some manual review (and someone’d have to convert
its output to Debian source packages, or merge it into the already
existing dh-make-perl which also somewhat worked when I tried it), but
it would make achieving this goal possible (and let running dpkg require
128 MiB of RAM or so, to fit the list of packages into it, I guess, but
even those Amigas have that).
Poor old Amigas. Not that any of mine have enough CPU to run linux. :(
--
Len Sorensen
Russ Allbery
2013-01-31 00:10:02 UTC
Post by Lennart Sorensen
Absolutely. As a user I have a nice package management system that I
know how to use and which works well. I don't need another one.
It is not the job of a language developer to invent yet another bloody
package distribution and installation system. Just because windows
doesn't have a decent way to handle software installations doesn't mean
other systems don't know how to do it well.
My upstream experience is that people generally use the package management
properties of things like CPAN in two main cases: when using operating
systems with deficient package management or scope of packages, or when
using old systems that can't be updated for some reason.

CPAN is hugely popular with Solaris admins, for example, because good luck
finding real Solaris packages of most of the things you care about. Red
Hat is similar; a lot of common Perl modules are packaged, but far fewer
than are packaged for Debian, and it's common to need something that isn't
part of the base Red Hat distribution.

The problem with the general feeling that they're inferior versions of
real package management is that the above use cases mean that the
packaging components of something like CPAN aren't ever going to go away,
since upstream will continue to support users on those platforms, or users
who are on lenny and refuse to upgrade for some reason. And since those
components exist and people get used to using them to get their jobs done
on Red Hat, they'll naturally use them on Debian as well without being
aware there are better alternatives. Or when they're on squeeze and want
to do some sort of cutting-edge development.

Thankfully, at least with Perl, they co-exist fairly well, although people
get bitten by having outdated versions of things installed in /usr/local
from an old CPAN run that should have been, but weren't, supplanted by
better versions available through the package manager.
--
Russ Allbery (***@debian.org) <http://www.eyrie.org/~eagle/>
Jon Dowland
2013-02-01 10:10:01 UTC
Post by Lennart Sorensen
Absolutely. As a user I have a nice package management system that I
know how to use and which works well. I don't need another one.
As a Haskell developer, I find cabal much more convenient than nothing,
in the situation where the library I want is not packaged by Debian yet.
If I want my haskell libraries and programs to reach a wide audience, I
need to learn Cabal anyway.
Post by Lennart Sorensen
It is not the job of a language developer to invent yet another bloody
package distribution and installation system.
And yet they do, and so we need to manage it.

Remember that Debian does not just provide a package management system: it also
provides repositories and dictates what goes in them according to the DFSG.
Whilst adding new repositories is relatively simple for users (and growing in
popularity for upstreams), installing bare .deb files is still not a very
smooth process (although massively improved by e.g. gdebi these days)

From an upstream POV, they want their software in the hands of end users. They
don't want to have to learn and build a myriad of different package types
(.deb, .rpm, etc.) and crucially neither do we. In many cases they don't want
to have to wait for a distro to package their software for them either.

In the Go case, their users are people who might have a shell/web account but
not admin access on a shared host somewhere, running god knows what distro and
version; hence a self-contained fat binary that is guaranteed to run wherever
a libc is present meets their goals.
Lennart Sorensen
2013-02-01 20:30:02 UTC
Post by Jon Dowland
As a Haskell developer, I find cabal much more convenient than nothing,
in the situation where the library I want is not packaged by Debian yet.
If I want my haskell libraries and programs to reach a wide audience, I
need to learn Cabal anyway.
If you are writing libraries to add to the language, then I don't consider
you a normal developer using the language.
Post by Jon Dowland
And yet they do, and so we need to manage it.
And certainly saying "We will package things for distribution using the
package and installation system we have" is managing it. Upstream can
whine all they want about not using their install system, and they will
be Wrong.

Upstream can do what they want if they think there is a need, but if
they don't consider that a lot of people don't want yet another system
then that's their problem and they do not need to be catered to by
everyone else. If they want their stuff used by people they have to
make it accessible in a normal manner that fits in with whatever a given
distribution does.
Post by Jon Dowland
Remember that Debian does not just provide a package management system: it also
provides repositories and dictates what goes in them according to the DFSG.
Whilst adding new repositories is relatively simple for users (and growing in
popularity for upstreams), installing bare .deb files is still not a very
smooth process (although massively improved by e.g. gdebi these days)
You generally don't have to because things are in Debian archives already.
Post by Jon Dowland
From an upstream POV, they want their software in the hands of end users. They
don't want to have to learn and build a myriad of different package types
(.deb, .rpm, etc.) and crucially neither do we. In many cases they don't want
to have to wait for a distro to package their software for them either.
So what? A lot of us want stable systems that work and are consistent.
They don't have to package everything, they just have to make it possible
to get the sources and build them and allow them to be packaged and
distributed in a consistent manner (unlike Oracle's Java these days
for example).

If you want bleeding edge, then you are not a normal user and you
certainly aren't a system administrator that wants to keep a controlled
system they can reproduce.

I know dpkg --get-selections will tell me all the software installed on
the system so I can do the same on another one. If yet another package
manager gets involved I have to know about it and do something different
to handle that. That's not a good thing.
Post by Jon Dowland
In the Go case, their users are people who might have a shell/web account but
not admin access on a shared host somewhere, running god knows what distro and
version, hence having a self-contained fat binary that is guaranteed to run
wherever libc is meets their goals.
That's a different goal than running a nice debian system.
--
Len Sorensen
Russ Allbery
2013-02-01 20:40:02 UTC
Post by Lennart Sorensen
Post by Jon Dowland
As a Haskell developer, I find cabal much more convenient than nothing,
in the situation where the library I want is not packaged by Debian
yet. If I want my haskell libraries and programs to reach a wide
audience, I need to learn Cabal anyway.
If you are writing libraries to add to the language, then I don't
consider you a normal developer using the language.
I hope that's not generally true, because that would be horribly
depressing. I don't believe that's true of the Perl community in general.
It's certainly not true of the C or Java community!
Post by Lennart Sorensen
If you want bleeding edge, then you are not a normal user and you
certainly aren't a system administrator that wants to keep a controlled
system they can reproduce.
Speak for yourself. I've been a system administrator for twenty years,
and sometimes I have to deploy bleeding-edge code in order to accomplish a
particular task. You can do that in ways that also give you a
reproducible system.

Using Debian packages is a *means*, not an *end*. Sometimes in these
discussions I think people lose sight of the fact that, at the end of the
day, the goal is not to construct an elegantly consistent system composed
of theoretically pure components. That's a *preference*, but there's
something that system is supposed to be *doing*, and we will do what we
need to do in order to make the system functional.

Different solutions have different tradeoffs. Obviously, I think Debian
packages are in a particularly sweet spot among those tradeoffs or I
wouldn't invest this much time in Debian, but they aren't perfect. There
are still tradeoffs. (For example, Debian packages are often useless for
research computing environments where it is absolutely mandatory that
multiple versions of any given piece of software be co-installable and
user-choosable.)
Post by Lennart Sorensen
I know dpkg --get-selections will tell me all the software installed on
the system so I can do the same on another one. If yet another package
maanger gets involved I have to know about it and do something different
to handle that. That's not a good thing.
Indeed. But it's a tradeoff. One frequently does not have the luxury of
appending to this paragraph "...and therefore I will never install
anything with a different package manager." Sometimes it's the most
expedient way of getting something done. Sometimes people aren't as deft
with turning unpackaged software into Debian packages as you and I are.
Post by Lennart Sorensen
Post by Jon Dowland
In the Go case, their users are people who might have a shell/web
account but not admin access on a shared host somewhere, running god
knows what distro and version, hence having a self-contained fat binary
that is guaranteed to run wherever libc is meets their goals.
That's a different goal than running a nice debian system.
Hence the point. Not everyone has the same goals.
--
Russ Allbery (***@debian.org) <http://www.eyrie.org/~eagle/>
Lennart Sorensen
2013-02-01 21:10:02 UTC
Post by Russ Allbery
I hope that's not generally true, because that would be horribly
depressing. I don't believe that's true of the Perl community in general.
It's certainly not true of the C or Java community!
Not all C libraries are distributed from one central site and they
certainly don't expect you to use a central package installation system.

I personally consider Java a bad joke that won't go away.
Post by Russ Allbery
Speak for yourself. I've been a system administrator for twenty years,
and sometimes I have to deploy bleeding-edge code in order to accomplish a
particular task. You can do that in ways that also give you a
reproducible system.
If I want something newer than what Debian provides, then I will make
the .deb myself. I want everything consistently
installed.
Post by Russ Allbery
Using Debian packages is a *means*, not an *end*. Sometimes in these
discussions I think people lose sight of the fact that, at the end of the
day, the goal is not to construct an elegantly consistent system composed
of theoretically pure components. That's a *preference*, but there's
something that system is supposed to be *doing*, and we will do what we
need to do in order to make the system functional.
I like my system to stay working and maintainable. I still have one
system that was installed with Debian 2.1, and upgraded ever since and
is still doing fine. You don't generally get there by taking shortcuts
that seem convenient now, even though long term they are a bad idea.
I very much find doing it right to begin with saves a lot of hassle and
time in the long run. Avoiding trying to circumvent dpkg and apt is
the best way to do that. dpkg and apt help you more than any other
packaging system I have ever seen. No point trying to bypass them.
Post by Russ Allbery
Different solutions have different tradeoffs. Obviously, I think Debian
packages are in a particularly sweet spot among those tradeoffs or I
wouldn't invest this much time in Debian, but they aren't perfect. There
are still tradeoffs. (For example, Debian packages are often useless for
research computing environments where it is absolutely mandatory that
multiple versions of any given piece of software be co-installable and
user-choosable.)
Making a debian package is generally very easy, so if you need something
on your system, make a package for it. Now it's simple to deploy to
many systems.
Post by Russ Allbery
Indeed. But it's a tradeoff. One frequently does not have the luxury of
appending to this paragraph "...and therefore I will never install
anything with a different package manager." Sometimes it's the most
expedient way of getting something done. Sometimes people aren't as deft
with turning unpackaged software into Debian packages as you and I are.
But it's so easy (not like rpm and such, which tend to be more work).

For cpan there is even dh-make-perl. The solution then is to make
equivalent scripts for other languages. The solution is NOT to use some
other package installation system.
--
Len Sorensen
Russ Allbery
2013-02-01 21:30:02 UTC
Post by Lennart Sorensen
Post by Russ Allbery
I hope that's not generally true, because that would be horribly
depressing. I don't believe that's true of the Perl community in
general. It's certainly not true of the C or Java community!
Not all C libraries are distributed from one central site and they
certainly don't expect you to use a central package installation system.
So much more the shame for C. Those are *improvements* that Perl came up
with (well, actually, that the TeX community came up with and Perl copied)
that made the ecosystem much nicer.
Post by Lennart Sorensen
I personally consider Java a bad joke that won't go away.
Look, if other people like something and use it heavily, that's probably
because it solves their problems. Saying that it doesn't solve *your*
problems, or even that it creates problems for you, does not change the
fact that it solves *their* problems.

I have some personal experience with Java on both the systems
administration side and on the developer side. It's an awful language for
deployment, and it's a great language to write code in, with an incredibly
rich ecosystem of well-tested and reliable components for nearly anything
you'd want to do. In particular, it is far better at APIs and code
boundaries than Perl is, and therefore scales to large teams of developers
more naturally and more easily than Perl does. And I say this as someone
who loves Perl and maintains core Perl modules.

The same Java infrastructure that makes it so incredibly painful to
construct a consistent system with separated and separately-upgradable
parts makes it a wonderful system in which to develop and to create
applications with reliably consistent behavior. Particularly if, as is
the case in a depressing number of environments, the system administrators
are in some other group from the developers, they're not allowed to
coordinate, and the system administrators have all sorts of rules and
restrictions the developers have to go through to update anything in
production.

You talk about reproducible systems as one of your primary goals. Well,
that's exactly why Java does those things that make it such a pain in a
Debian context. If you bundle together all the exact JARs that were
tested and known to work and don't change any of them, you get exactly
that: a reproducible system that works exactly like it did in the test
environment. You of course also have a system that has some real problems
when it comes time to do security upgrades, and one that tends to be very
difficult to upgrade to the latest version of the JARs when you need some
new feature. But those are *tradeoffs*. That is not an absolute flaw in
Java.

The flaws in Java are more obvious in a devops environment. Most sites
aren't devops. If you're in a traditional "develop, test, and throw it
over the fence to the production guys" shop, Java's ability to roll up
your application into one file that is completely self-contained is a
*godsend*.

You may feel that all non-devops shops should get the religion. I'd even
agree with you. But all the stuff we're talking about are artifacts that
exist in the real world and have to deal with how that world currently
works, not how we want it to work at some theoretical point in the future.
Post by Lennart Sorensen
If I want something updated that is newer than what debian provides,
then I will make the .deb myself. I want everything consistently
installed.
Sure. Me too. I've also been making Debian packages for years, so this
is a matter of an hour or two of work, or less if I don't care about doing
it properly.
Post by Lennart Sorensen
I like my system to stay working and maintainable. I still have one
system that was installed with Debian 2.1, and upgraded ever since and
is still doing fine. You don't generally get there by taking shortcuts
that seem convinient now, even though long term they are a bad idea.
Sure. I have that religion too.

The other way that you don't get to having a system that's been
continuously upgraded from Debian 2.1 is if you got fired somewhere around
Debian 3 because you couldn't deploy things fast enough for your boss, who
didn't give a shit about whether things were in Debian packages or not.

Tradeoffs.
Post by Lennart Sorensen
Making a debian package is generally very easy, so if you need something
on your system, make a package for it.
I would love this to be the case. It just isn't.

*I* find it easy. I know lots of other people who find it easy. And, in
fact, we make doing this mandatory within my group. But because we made
it mandatory, I've trained a lot of sysadmins and developers in how to do
this. I've seen the problems they ran into, I've helped them out of blind
corners, I've cleaned up some of the messes, and I've helped them find
better tools.

It's not easy. It's really not easy for quite a few people.

I do think it pays off in the long run. If one has the luxury of a long
run, teaching people proper packaging is great. In the short run, for at
least the first few packages people make, it is almost certainly the case
that they would have gotten their problem solved and their system deployed
much faster if they had not tried to make a proper package.

I'm a long-run guy too. I love to focus on the long run. But it's a
luxury and a privilege to be able to do that. Projects, deadlines, and
management priorities normally don't much care about the long run. That's
something that we often have to teach the people who are pushing to get
something done, and sometimes that sticks, and sometimes that doesn't.

As upstream, I want my software to feel right, to feel elegant and
comfortable, to people who want to take the long-run approach and do
things right. But I also want my software to be usable by people who have
a crash project that has to be done by tomorrow and who are reaching
desperately for my software because it does something that's mandatory for
that project and they don't have time to figure out another way to do it.
If that means that using cpan install gets them working software faster,
then three cheers for cpan install!
Post by Lennart Sorensen
For cpan there is even dh-make-perl. The solution then is to make
equivelant scripts for other languages. The solution is NOT to use some
other package installation system.
dh-make-perl is great, don't get me wrong, but the problem with this sort
of automation is that it works 90% of the time and then 10% of the time
there's some weird oddity or corner case. And if you don't know anything
about Debian packaging and you don't know how any of the pieces work, if
you only know how to use the automated tool, then when you hit that 10%
case, you are completely lost.

That's a demoralizing place to be. That's when people switch back to cpan
install, which is a rather thinner wrapper around a bunch of commands one
can run manually, and which the upstream whose code you're trying to
deploy probably already understands and can help you with.
--
Russ Allbery (***@debian.org) <http://www.eyrie.org/~eagle/>
Jon Dowland
2013-02-05 17:30:01 UTC
Post by Russ Allbery
Post by Lennart Sorensen
Not all C libraries are distributed from one central site and they
certainly don't expect you to use a central package installation system.
So much more the shame for C. Those are *improvements* that Perl came up
with (well, actually, that the TeX community came up with and Perl copied)
that made the ecosystem much nicer.
On that note, Rusty Russell's "CCAN" is interesting:
http://ccodearchive.net/
Jon Dowland
2013-02-05 17:40:02 UTC
Permalink
Post by Lennart Sorensen
For cpan there is even dh-make-perl. The solution then is to make
equivalent scripts for other languages. The solution is NOT to use some
other package installation system.
Although I've never used dh-make-perl myself, I'm led to believe that it
is perhaps *the* most successful tool of its type (that is, of things that
create .debs from packages in an alternative repository system like CPAN,
gems, cabal, etc.), and that it works as reliably as it does (which as Russ
points out is not 100% by any means) by relying on data from CPAN. So it's
hard to use it as an argument against such external package systems.
Russ Allbery
2013-02-05 18:00:02 UTC
Permalink
Post by Jon Dowland
Although I've never used dh-make-perl myself, I'm led to believe that
it is perhaps *the* most successful tool of its type (that is, of things
that create .debs from packages in an alternative repository system like
CPAN, gems, cabal, etc.), and that it works as reliably as it does
(which as Russ points out is not 100% by any means) by relying on data
from CPAN. So it's hard to use it as an argument against such external
package systems.
Indeed, that's exactly right. dh-make-perl works as well as it does in
part because upstream has a fairly good package management system with
explicit dependencies and good package metadata, all of which dh-make-perl
can download and make use of.

I used the tool back before integration with that information was
available, and it required much more work to build the package. Now, it
can mostly do things like figure out the dependencies without a lot of
human assistance.
--
Russ Allbery (***@debian.org) <http://www.eyrie.org/~eagle/>
Bernhard R. Link
2013-02-01 22:50:02 UTC
Permalink
Post by Russ Allbery
Post by Lennart Sorensen
If you want bleeding edge, then you are not a normal user and you
certainly aren't a system administrator that wants to keep a controlled
system they can reproduce.
Speak for yourself. I've been a system administrator for twenty years,
and sometimes I have to deploy bleeding-edge code in order to accomplish a
particular task. You can do that in ways that also give you a
reproducible system.
Using Debian packages is a *means*, not an *end*. Sometimes in these
discussions I think people lose sight of the fact that, at the end of the
day, the goal is not to construct an elegantly consistent system composed
of theoretically pure components. That's a *preference*, but there's
something that system is supposed to be *doing*, and we will do what we
need to do in order to make the system functional.
Different solutions have different tradeoffs. Obviously, I think Debian
packages are in a particularly sweet spot among those tradeoffs or I
wouldn't invest this much time in Debian, but they aren't perfect. There
are still tradeoffs. (For example, Debian packages are often useless for
research computing environments where it is absolutely mandatory that
multiple versions of any given piece of software be co-installable and
user-choosable.)
Of course there are trade-offs. For every rule, as sensible as it might be,
there can be a need great enough to ignore it. Using software that is not
properly packaged is not so different from modifying distribution files in
/usr/lib, compiling passwords or other hardcoded data directly into
programs, using binaries you have no source for, or even patching such
binaries directly. These are all things you can do, and sometimes have to do.
 
That no guideline is absolute, and that any of them may be better ignored in
some cases, does not change the fact that in general they mark a useful path,
and leaving that path without a good reason is not a good idea.

And a "only use distro packaged software" is a very useful guideline.
There are so many advantages in this that "you cannot get this with
distro packages" is a very good argument against anything you cannot
get this way. There will always be a case where there can be a more
important argument pushing the scales in the other direction, but at
the end of the day, the normal system administrator wants one package
management tool for all their software (or at least as few as possible)
and as few copies/different versions of common code (aka libraries,
modules, ...) around as possible. And most of the features for
developers are just additional nightmares for the administrator.

Bernhard R. Link
Jon Dowland
2013-02-05 17:40:02 UTC
Permalink
Post by Russ Allbery
Using Debian packages is a *means*, not an *end*. Sometimes in these
discussions I think people lose sight of the fact that, at the end of the
day, the goal is not to construct an elegantly consistent system composed
of theoretically pure components. That's a *preference*, but there's
something that system is supposed to be *doing*, and we will do what we
need to do in order to make the system functional.
And in particular, where a problem cannot be solved in pure Debian, I don't
want Debian to interfere with the bit of the solution that lives outside of
its domain. That may include not attempting to package/patch/alter/adjust
upstream systems like Go that have a different philosophical approach. The
worst case scenario IMHO is some people invest a lot of time to make the
Debianized-Go stuff quite divergent from upstream, people's expectations of
how things behave in Go-land are broken when they access Go-via-Debian, the
job is never quite complete and so we get extra bugs, and a new upstream
community relationship is marred. This is a much worse outcome than not
attempting to package Go at all, IMHO.

I guess I'd quite like the boundaries of responsibility to be very clear,
when I'm forced to have such boundaries.
Adam Borowski
2013-02-05 20:10:01 UTC
Permalink
Post by Jon Dowland
And in particular, where a problem cannot be solved in pure Debian, I don't
want Debian to interfere with the bit of the solution that lives outside of
its domain. That may include not attempting to package/patch/alter/adjust
upstream systems like Go that have a different philosophical approach. The
worst case scenario IMHO is some people invest a lot of time to make the
Debianized-Go stuff quite divergent from upstream, people's expectations of
how things behave in Go-land are broken when they access Go-via-Debian
Just think what would happen if every single language and environment were
allowed its own packaging system. That way lies madness. Yes, packaging
needs to be beaten into shape, even if this goes against upstream's wishes.
--
ᛊᚨᚾᛁᛏᚣ᛫ᛁᛊ᛫ᚠᛟᚱ᛫ᚦᛖ᛫ᚹᛖᚨᚲ
Hilko Bengen
2013-02-05 22:50:01 UTC
Permalink
Post by Adam Borowski
The worst case scenario IMHO is some people invest a lot of time to
make the Debianized-Go stuff quite divergent from upstream, people's
expectations of how things behave in Go-land are broken when they
access Go-via-Debian
Just think what would happen if every single language and environment
would be allowed its own packaging system. This way lies madness.
Not if there's a clear separation between the language's solution and
dpkg-land. Mind you, almost every language already has its own packaging
system or two. I really don't see any big deal here.
Post by Adam Borowski
Yes, packaging needs to be beaten into shape, even if this goes
against upstream wishes.
Just put up a boundary between dpkg and everything else. This is
actually quite easy because Debian packages generally don't install
anything to /usr/local. From there it's just a matter of ensuring that
each packaging system respects the line. And I expect that that's
exactly what is going to happen in the case of the go(1) tool.

Cheers,
-Hilko
Neil Williams
2013-02-06 09:30:02 UTC
Permalink
On Tue, 05 Feb 2013 23:44:30 +0100
Post by Hilko Bengen
Post by Adam Borowski
The worst case scenario IMHO is some people invest a lot of time to
make the Debianized-Go stuff quite divergent from upstream, people's
expectations of how things behave in Go-land are broken when they
access Go-via-Debian
Just think what would happen if every single language and environment
would be allowed its own packaging system. This way lies madness.
Not if there's a clear separation between the language's solution and
dpkg-land. Mind you, almost every language already has its own packaging
system or two. I really don't see any big deal here.
Then don't package Go at all and leave it entirely outside the realm of
dpkg - no dependencies allowed in either direction, no files created
outside /usr/local for any reason, no contamination of the apt or dpkg
cache data. If what you want is complete separation, why is there even
a long running thread on integration?
Post by Hilko Bengen
Post by Adam Borowski
Yes, packaging needs to be beaten into shape, even if this goes
against upstream wishes.
Just put up a boundary between dpkg and everything else. This is
actually quite easy because Debian packages generally don't install
anything to /usr/local. From there it's just a matter of ensuring that
each packaging system respects the line. And I expect that that's
exactly what is going to happen in the case of the go(1) tool.
Then why bother discussing packaging Go if it isn't going to be
packaged and is just going to invent its own little ghetto
in /usr/local?

If Go wants to be packaged, it complies with the requirements of
packaging. If it wants to live the life of a hermit and disappear up
itself, that's fine, but then it doesn't get the privilege of
interacting with the rest of Debian. It's just a user download.
--
Neil Williams
=============
http://www.linux.codehelp.co.uk/
Russ Allbery
2013-02-06 09:50:02 UTC
Permalink
Post by Neil Williams
If Go wants to be packaged, it complies with the requirements of
packaging. If it wants to live the life of a hermit and disappear up
itself, that's fine but then it doesn't get the privilege of interacting
with the rest of Debian. It's just a user download.
Debian packaging isn't a reward that we give to upstream software authors
in exchange for doing things right. Most software gets Debian packages
because people who use Debian want to use it and don't like dealing with
unpackaged software. Upstream authors are often indifferent.
--
Russ Allbery (***@debian.org) <http://www.eyrie.org/~eagle/>
Jon Dowland
2013-02-06 09:50:02 UTC
Permalink
Then don't package Go at all and leave it entirely outside the realm of dpkg
- no dependencies allowed in either direction, no files created outside
/usr/local for any reason, no contamination of the apt or dpkg cache data. If
what you want is complete separation, why is there even a long running thread
on integration?
That's one possible solution, and a low-risk one at that. The others carry the
risk of doing the job badly, especially if there is not enough resource to
implement them going forwards.
Then why bother discussing packaging Go if it isn't going to be packaged
and is just going to invent its own little ghetto in /usr/local?
That seems pretty pejorative. The reason Go has "invented its own little
ghetto" is to solve the distribution problem. The reason they want to solve it
is that, despite our best efforts, we haven't solved it. Pretending we have
done so helps nobody. From Go's perspective, we are a bit player. There's a
pattern of misplaced arrogance and pride that permeates -devel from time to
time whenever difficult integration discussions come up (systemd included), and
it really doesn't help. Let's not overstate our importance in the wider world;
it does not help us one bit. Step by step we'll just close our doors to the rest
of the Universe and slide further into irrelevance.
If Go wants to be packaged, it complies with the requirements of packaging.
Go doesn't want anything: It's a programming language and environment, not a
sentient being. The authors of Go are probably not that bothered about it being
packaged, seeing as they've put energy into solving the distribution problem
themselves, at the same time making it more difficult to distribution-package.
The people who want to see it packaged are people who want to see Debian users
be able to conveniently interact with Go-land.
Hilko Bengen
2013-02-06 13:50:02 UTC
Permalink
If what you want is complete separation, why is there even a long
running thread on integration?
Sorry if I failed to make myself clear:

I want excellent Debian packages of the compiler/runtime/tools *and*
libraries, *and* I still want to make it possible for our users to use
upstream's half-a-package-manager if they so desire. Complete separation is
something entirely different.
Then why bother discussing packaging Go if it isn't going to be
packaged and is just going to invent its own little ghetto in
/usr/local?
We already have directory structures in place in /usr/local for several
other languages and "software ecosystems":

$ ls /usr/local/lib/
eclipse ocaml python2.6 python3.1 site_ruby
luarocks perl python2.7 python3.2
$ ls /usr/local/share
applications emacs games man sgml xml
ca-certificates fonts hibernate perl texmf zsh

I am pretty sure that if you asked about packaging software in the
Python, Perl, Ruby, Java, Lua communities, you would get recommendations
to not use Debian packages at all and get pointers to what the
respective community considers a solution to the packaging problem (if
they see it as a problem at all). This would likely involve replacing the
language compiler/runtime itself.

So what? Just take the freedom the software licenses give you: ignore the
parts of their view of how things ought to work that aren't useful, and
work with the parts that are.

Cheers,
-Hilko
Roland Mas
2013-02-06 14:30:03 UTC
Permalink
Hilko Bengen, 2013-02-06 14:46:11 +0100 :

[...]
Post by Hilko Bengen
I am pretty sure that if you asked about packaging software in the
Python, Perl, Ruby, Java, Lua communities, you would get recommendations
to not use Debian packages at all and get pointers to what the
respective community considers a solution to the packaging problem (if
they see it as a problem at all).
I can only speak about Python and Perl, but I don't remember *ever*
having been told to use their deployment system instead of the packaged
versions of the interpreter and modules. The closest I've seen is
something like "if you're running CentOS or RHEL, then you'll need this
plethora of modules that are not packaged, so please use our
language-specific system to install them instead".

So it is possible for a language community to work with the
distributors rather than against them. The fact that some communities
(insert your favourite pet peeve here) choose not to shouldn't be
construed as proof that it's impossible.

Roland.
--
Roland Mas

Au royaume des aveugles, les borgnes sont mal vus.
Hilko Bengen
2013-02-06 15:10:02 UTC
Permalink
Post by Roland Mas
[...]
Post by Hilko Bengen
I am pretty sure that if you asked about packaging software in the
Python, Perl, Ruby, Java, Lua communities, you would get recommendations
to not use Debian packages at all and get pointers to what the
respective community considers a solution to the packaging problem (if
they see it as a problem at all).
I can only speak about Python and Perl, but I don't remember *ever*
having been told to use their deployment system instead of the packaged
versions of the interpreter and modules. The closest I've seen is
something like "if you're running CentOS or RHEL, then you'll need this
plethora of modules that are not packaged, so please use our
language-specific system to install them instead".
I have heard exactly what I described at Perl conferences more than
once. The root cause may have been that some versions of RHEL still ship
Perl 5.8.something and ancient, broken versions of some modules, but I
had the impression that some (not all) people over-generalized this view
to every Linux distribution, including Debian.

If you need or want to run the current stable Perl (5.16.2) and the
latest-greatest modules on wheezy, it's going to be in your best
interest to use things like perlbrew and local::lib (which are both
shipped with Debian).

That upstream's preferences may differ from ours is not even a problem,
as long as no-one tries to impose their values on users. I don't see such
attempts in the Perl and Go communities.
Post by Roland Mas
So it is possible for a language community to work with the
distributors rather than against.
And it's possible for Debian to at least *try* to work with any language
community.

Simply calling people "idiots" when one hasn't yet understood the other
community's values/interests/position does not help, of course. ;-)

Cheers,
-Hilko
Barry Warsaw
2013-02-07 00:30:02 UTC
Permalink
I can only speak about Python and Perl, but I don't remember *ever* having
been told to use their deployment system instead of the packaged versions of
the interpreter and modules. The closest I've seen is something like "if
you're running CentOS or RHEL, then you'll need this plethora of modules that
are not packaged, so please use our language-specific system to install them
instead".
Speaking with many hats on, I think Debian Python has done a very admirable
job of integrating the Python ecosystem with Debian.

It's never going to be perfect, and there are certainly difficult outliers,
but in most cases, things work pretty well. OTOH, Python itself has several
strategies for dealing with difficult situations, with probably the most
notable being virtualenv (and upstream's built-in venv in Python 3.3+). You
can even mix and match, i.e. if some Debian-packaged versions are fine but you
need one or two missing or out-of-date packages from the Cheeseshop, you can
use `virtualenv --system-site-packages`, which generally works pretty well.
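
As a rough sketch of what that looks like in practice (assuming a Debian system
Python; run it once with /usr/bin/python and once with the virtualenv's own
python to compare):

    import sys
    # Show which interpreter is running and which third-party directories it
    # will search; with --system-site-packages the virtualenv keeps the
    # system's .../dist-packages entries on sys.path in addition to its own.
    print(sys.executable)
    print(sys.prefix)
    for entry in sys.path:
        if "packages" in entry:
            print(entry)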

Where things get tricky is if you have multiple applications that need
different versions of its dependencies. Say Debian has python-foo 1.2 which
application Bar needs, but application Baz needs python-foo 2.0. Despite
years of discussion, in Debian, Ubuntu, and upstream, we really don't have a
good multi-version story. I think that's forgivable though because it's a
Really Hard Problem and nobody's stepped up to pay a team of 5 gurus for the
25 man years of non-stop work it would probably take to solve (both
technically and socially ;). This problem isn't particularly limited to
Python.

I'm also encouraged by the albeit slow work on upstream packaging technology
(PEPs and code) to help improve the overall packaging story. I had many
discussions with various packaging folks, and good developers from Fedora and
other distros, about ways to make it easier to integrate PyPI packaging with
distros, Linux-based or otherwise. I thought we'd made a lot of good
progress, but some of the drivers moved on to other things. I'm hoping Nick
Coghlan's efforts at Pycon 2013 can help motivate a revival of this work[1].

Cheers,
-Barry

[1]
https://us.pycon.org/2013/community/openspaces/packaginganddistributionminisummit/
Russ Allbery
2013-02-07 00:40:02 UTC
Permalink
Post by Barry Warsaw
Where things get tricky is if you have multiple applications that need
different versions of its dependencies. Say Debian has python-foo 1.2
which application Bar needs, but application Baz needs python-foo 2.0.
Despite years of discussion, in Debian, Ubuntu, and upstream, we really
don't have a good multi-version story. I think that's forgivable though
because it's a Really Hard Problem and nobody's stepped up to pay a team
of 5 gurus for the 25 man years of non-stop work it would probably take
to solve (both technically and socially ;). This problem isn't
particularly limited to Python.
I keep being tempted to go off on a rant about how we have all of these
modern, sophisticated, much more expressive programming languages, and yet
still none of them handle ABI versioning as well as C does. Normal
versioning problems that we just take for granted in C, such as allowing
the coexistence of two versions of the same library with different ABIs on
the system at the same time without doing the equivalent of static
linking, vary between ridiculously difficult and completely impossible in
every other programming language that I'm aware of (with the partial
exception of C++ where it is possible to use the same facilities as C,
just considerably more difficult).

In fact, most new languages seem to be *regressing* here. Both Perl and
Python were already fairly bad at this, Java bailed on the problem and
shoved it under the carpet of a version of static linking, and now Go is
even worse than they are and is explicitly embracing static linking.
Sigh.

It's immensely frustrating that this doesn't appear to be an interesting
problem to language designers, and even brand-new, sophisticated languages
that are trying to be at the cutting edge of language design do not make
any serious attempt at solving the ABI versioning problem.

Everyone could stand to learn something from the REST and web services
community, where ABI versioning is a well-known and widely accepted
problem that regularly prompts considerable discussion and careful thought
and a variety of useful proposals for addressing it with different
tradeoffs. It's kind of sad when one's brand-new, cutting-edge
programming language does a worse job at a foundational requirement of
language ecosystems than the Facebook API.
--
Russ Allbery (***@debian.org) <http://www.eyrie.org/~eagle/>
Joachim Breitner
2013-02-07 08:50:02 UTC
Permalink
Hi,
Post by Russ Allbery
Normal
versioning problems that we just take for granted in C, such as allowing
the coexistence of two versions of the same library with different ABIs on
the system at the same time without doing the equivalent of static
linking, vary between ridiculously difficult and completely impossible in
every other programming language that I'm aware of
things are not that bad in Haskell: Precisely due to the rapidly
changing ABIs, and hence usually very tightly versioned build
dependencies in the library metadata, Haskell (i.e. Cabal) has very good
support for having multiple versions of a library installed at
the same time. Work is in progress to even support multiple builds of
the same library (e.g. foo-1.0 built against bar-1.0 and foo-1.0 built
against bar-2.0) to be installed at the same time.

Of course this is not a feature that helps us a lot in Debian, where we
usually want to provide one single version of each library. But that is
due to our choosing, and not a shortcoming of the language ecosystem.
And we have had exceptions (parsec, QuickCheck) when there are two
common major versions of a library, so it is possible.

Greetings,
Joachim
--
Joachim "nomeata" Breitner
Debian Developer
***@debian.org | ICQ# 74513189 | GPG-Keyid: 4743206C
JID: ***@joachim-breitner.de | http://people.debian.org/~nomeata
Steve Langasek
2013-02-07 16:40:02 UTC
Permalink
Post by Russ Allbery
I keep being tempted to go off on a rant about how we have all of these
modern, sophisticated, much more expressive programming languages, and yet
still none of them handle ABI versioning as well as C does. Normal
versioning problems that we just take for granted in C, such as allowing
the coexistence of two versions of the same library with different ABIs on
the system at the same time without doing the equivalent of static
linking, vary between ridiculously difficult and completely impossible in
every other programming language that I'm aware of (with the partial
exception of C++ where it is possible to use the same facilities as C,
just considerably more difficult).
In fact, most new languages seem to be *regressing* here. Both Perl and
Python were already fairly bad at this, Java bailed on the problem and
shoved it under the carpet of a version of static linking, and now Go is
even worse than they are and is explicitly embracing static linking.
Sigh.
Actually, if you look closely, you'll find that the traditional Java .jar
linking resolver precisely mirrors the behavior of the C linker on Solaris
from the same era (allows you to link dynamically, but requires top-level
objects to be linked at build time with all the recursive dependencies of
the libraries it uses). So it's not so much that they failed to learn from
C in that case, as that they failed to learn from the Free Unices of the
era.

But for Go, there's certainly no excuse. They simply have ignored the
problem by deciding that it doesn't matter to their deployment model,
leaving distributions high and dry.
--
Steve Langasek Give me a lever long enough and a Free OS
Debian Developer to set it on, and I can move the world.
Ubuntu Developer http://www.debian.org/
***@ubuntu.com ***@debian.org
Florian Weimer
2013-02-07 18:40:01 UTC
Permalink
Post by Steve Langasek
Actually, if you look closely, you'll find that the traditional Java
.jar linking resolver precisely mirrors the behavior of the C linker
on Solaris from the same era (allows you to link dynamically, but
requires top-level objects to be linked at build time with all the
recursive dependencies of the libraries it uses).
I don't think this is actually true. The Class-Path manifest
attribute has been around since almost forever (1.2 probably). It
serves the same purpose as the NEEDED attribute in ELF, and avoids the
need for dependencies to bubble up to the final link.

Most Java linking is done differently, probably because relatively few
people know about the Class-Path attribute. Others use other module
systems not part of the JDK, of course. But the Class-Path attribute
alone gets you very far.
Matthew Woodcraft
2013-02-07 20:50:03 UTC
Permalink
Post by Russ Allbery
I keep being tempted to go off on a rant about how we have all of
these modern, sophisticated, much more expressive programming
languages, and yet still none of them handle ABI versioning as well as
C does. Normal versioning problems that we just take for granted in C,
such as allowing the coexistence of two versions of the same library
with different ABIs on the system at the same time without doing the
equivalent of static linking, vary between ridiculously difficult and
completely impossible in every other programming language that I'm
aware of (with the partial exception of C++ where it is possible to
use the same facilities as C, just considerably more difficult).
In fact, most new languages seem to be *regressing* here.
I don't think it's as clear-cut as that.

Debian handles multiple versions of C libraries at _runtime_ well, but I
think its support for C libraries still leaves a good deal to be
desired: it doesn't let you install multiple versions of -dev packages,
and it doesn't provide much in the way of tools to help you manage
multiple versions of libraries-and-headers that you've installed outside
the packaging system.


I think, for someone who wants an OS for software development, and wants
or needs to program against library versions newer than those that
Debian ships, Debian has better support for some of the newer languages
than it does for C.

(For example, as I understand it Python's virtualenv/venv stuff lets you
express "I want to see the standard library shipped with Debian's
Python, but otherwise only the locally-installed libraries I specify".
That's annoying to do with C because all the headers are jumbled
together in /usr/include.)
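
A minimal sketch of that, assuming Python 3.3+ and the stdlib venv module (the
target directory here is made up):

    import venv
    # Create an isolated environment: it sees the interpreter's own standard
    # library but, by default, none of the system's dist-packages.
    venv.create("/tmp/isolated-env", system_site_packages=False)

Anything installed into /tmp/isolated-env afterwards is visible only there.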

-M-
Neil Williams
2013-02-08 00:10:01 UTC
Permalink
On Thu, 7 Feb 2013 20:16:18 +0000
Post by Matthew Woodcraft
I don't think it's as clear-cut as that.
Debian handles multiple versions of C libraries at _runtime_ well, but I
think its support for C libraries still leaves a good deal to be
desired: it doesn't let you install multiple versions of -dev packages,
That depends on the -dev package; many are versioned where that is
appropriate. I have libgtk-3-dev and libgtk2.0-dev installed,
amongst others. It all depends on how long it is likely to take for all
of the reverse dependencies to migrate.

It's a bit more work for the maintainer but a SONAME change in a
library still needs a trip through NEW, so a new source package vs a
few new binary packages isn't that hard. If there are packages which
aren't providing suitable migration paths, file bugs.

More often than not, there is simply no need to have libfoo1-dev and
libfoo2-dev - libfoo2 & its libfoo-dev just live in experimental whilst
the reverse deps migrate to the new API.
Post by Matthew Woodcraft
(For example, as I understand it Python's virtualenv/venv stuff lets you
express "I want to see the standard library shipped with Debian's
Python, but otherwise only the locally-installed libraries I specify".
That's annoying to do with C because all the headers are jumbled
together in /usr/include.)
Not true. pkg-config isolates the headers between packages whilst
retaining dependencies and only the headers you specify get
included by the compiler anyway. .h files directly in /usr/include are the
exception; most are in package-specific sub-directories of /usr/include.
--
Neil Williams
=============
http://www.linux.codehelp.co.uk/
Russ Allbery
2013-02-08 00:30:01 UTC
Permalink
Post by Neil Williams
Post by Matthew Woodcraft
I don't think it's as clear-cut as that.
Debian handles multiple versions of C libraries at _runtime_ well, but
I think its support for C libraries still leaves a good deal to be
desired: it doesn't let you install multiple versions of -dev packages,
It can if upstream versions the headers. A lot of upstreams don't do
this, but some do, and C provides a perfectly reasonable facility for
doing so (add the API version to the include paths). Many common C
libraries, from Glib to APR, use this facility.

You certainly *can* break versioning in C, but my point is rather that C
provides all the facilities that you need to do this and they are in
reasonably common use in the C community. (It's common not to bother to
version APIs because it's usually reasonably easy to maintain API
compatibility if you care, since you can add new fields to structs and add
new interfaces without changing the API even if the ABI changes.)

ABI versioning is much more important than API versioning because ABI
versioning affects end users who just want to install packages and have
them work, whereas API versioning only affects developers who by
definition have access to the source and can probably update things.

One of the challenges of interpreted (or JIT-compiled) languages like Perl
and Python is that the API and the ABI are the same thing, which means
that you have to version the API in order to version the ABI. This is
unavoidable given the nature of the language, but is actually quite
awkward, since versioning the API generally requires that the source code
of all callers be aware of the versioning and update to use the new
version. With C libraries with ABI versioning but no API versioning (the
most common case), one often doesn't need to make any changes at all to
the caller to move to a newer version of the ABI.
Post by Neil Williams
More often than not, there is simply no need to have libfoo1-dev and
libfoo2-dev - libfoo2 & it's libfoo-dev just live in experimental whilst
the reverse deps migrate to the new API.
Right. Plus, if upstream isn't versioning the API, the -dev packages
won't be coinstallable, and thus are somewhat pointless.

I could see how one could go about supporting something like this in a
language like Perl or Python. It would require introducing the concept of
an API version at the language level, and the caller would be required to
declare the API version at the time it loads the module. (Experience in
the REST world, specifically with the Facebook API, says that allowing the
caller to omit the API version and providing some default is a bad idea
and produces fragile clients.) So, to take a Perl example (since that's
what I'm most familiar with), the current:

use Date::Parse 2.20;

where the number is the minimum required version of Date::Parse (which is
kind of useless since it doesn't set a maximum version, and even with
support for setting a maximum version is annoying because it requires the
caller know release versions instead of API versions), one would instead
have something like:

use_api Date::Parse 2;

which would load version 2 of the Date::Parse API.

Then, the module would declare what API versions it supports, and the
installation paths would have to be restructured so that modules providing
different versions would be co-installable. One way backward-compatible
way to do that in Perl would be to install Date::Parse as both:

Date/Parse/_API_2.pm
Date/Parse.pm

Old Perls would use the second path. New Perls, given the above
declaration, would only load the first path and would throw an error if it
didn't exist. After a transition, or with an install option, you could
drop the second path, and then you could co-install two versions of
Date::Parse that provided different APIs because the next API version
would install:

Date/Parse/_API_3.pm

and clients that did:

use_api Date::Parse 3;

would get that one instead. Note that this would also allow a single
source package to provide multiple versions of the API by just installing
multiple _API_n.pm modules.

This is all quite ugly, and is mostly just a thought experiment. I'm sure
one could devise better schemes. What surprises me is that no one has
done this in any of the newer languages. (You really do want support for
this built into the language, since you want a bunch of help constructing
the install paths, supporting the automatic logic behind use_api, and
ideally you want the language to be able to figure out whether you changed
APIs at least enough to warn you about it.)
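
To make the shape of that concrete, here is a rough sketch of the same scheme
transplanted into Python (every name below is hypothetical; no such convention
exists in the language or its stdlib):

    import importlib

    def import_api(package, api_version):
        """Load the requested API version of 'package'.

        Assumed convention, mirroring use_api above: version N of the API is
        provided by the submodule '<package>._api_N', installed alongside an
        unversioned '<package>' kept for old callers.
        """
        try:
            return importlib.import_module("%s._api_%d" % (package, api_version))
        except ImportError:
            raise ImportError("nothing installed provides API version %d of %s"
                              % (api_version, package))

    # Caller side, analogous to "use_api Date::Parse 2":
    #     dateparse = import_api("dateparse", 2)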
--
Russ Allbery (***@debian.org) <http://www.eyrie.org/~eagle/>
Paul Wise
2013-02-07 01:00:02 UTC
Permalink
Post by Barry Warsaw
Speaking with many hats on, I think Debian Python has done a very admirable
job of integrating the Python ecosystem with Debian.
One of the pain points for users (I've had folks ask me this
face-to-face) with that stuff was site-packages vs dist-packages. With
your various Python hats on, can you explain why not just use
"packages" instead of "site-packages" and "dist-packages"? The right
way (IMO) would have been to put site packages in
/usr/local/lib/pythonX.Y/packages and dist ones in
/usr/lib/pythonX.Y/packages. Right now I have
/usr/local/lib/pythonX.Y/dist-packages and
/usr/lib/pythonX.Y/dist-packages, why is /usr/local dist-packages
instead of site-packages? /usr/local is clearly not the location for
distro installed packages.

Why did Debian have to invent /usr/share/pyshared and symlink farms in
/usr/lib/pythonX.Y instead of upstream having something like that in
the default install and search paths?

The location of .pyc files that are built at install time doesn't feel
FHS-correct to me, /var/cache/python/X.Y/ seems better.

Debian's Python build helper tools are still breeding like rabbits,
there is a new one in experimental. I guess because the current ones
dh_python2/dh_python3 don't handle packages that contain only code
that runs on both python2 and python3 without changes.
--
bye,
pabs

http://wiki.debian.org/PaulWise
Matthias Klose
2013-02-07 01:50:02 UTC
Permalink
Post by Paul Wise
Post by Barry Warsaw
Speaking with many hats on, I think Debian Python has done a very admirable
job of integrating the Python ecosystem with Debian.
One of the pain points for users (I've had folks ask me this
face-to-face) with that stuff was site-packages vs dist-packages. With
your various Python hats on, can you explain why not just use
"packages" instead of "site-packages" and "dist-packages"?
Sure, anything other than "site-packages" would have worked.
Post by Paul Wise
The right
way (IMO) would have been to put site packages in
/usr/local/lib/pythonX.Y/packages and dist ones in
/usr/lib/pythonX.Y/packages. Right now I have
/usr/local/lib/pythonX.Y/dist-packages and
/usr/lib/pythonX.Y/dist-packages, why is /usr/local dist-packages
instead of site-packages? /usr/local is clearly not the location for
distro installed packages.
This came up several times (e.g. [1], [2]). See [3] for the rationale for the
naming and the extra directories in /usr/local.
Post by Paul Wise
Why did Debian have to invent /usr/share/pyshared and symlink farms in
/usr/lib/pythonX.Y instead of upstream having something like that in
the default install and search paths?
Having separate copies of .py files would have been an option. Splitting .so and
.py files across directories was not, because having different directories for
third-party packages in python2.x can break the import behaviour, does break
split-out debug information, doesn't work for gir files, and more. Please search
the archives back to 2003 if you are really interested in this.
Post by Paul Wise
The location of .pyc files that are built at install time doesn't feel
FHS-correct to me, /var/cache/python/X.Y/ seems better.
The location is correct if you include these files in the packages. So why
change it if you regenerate them? Byte-compilation and pyshared is only a means
to make pure python code independent of the python version. This is changed in
Python3 upstream [4]. Please address any outstanding issues upstream, then maybe
provide a backport.
Post by Paul Wise
Debian's Python build helper tools are still breeding like rabbits,
there is a new one in experimental. I guess because the current ones
dh_python2/dh_python3 don't handle packages that contain only code
that runs on both python2 and python3 without changes.
I don't have any issues building packages using dh_python2 and dh_python3. Please
file a bug report if you do have such issues. It is my understanding that
dh_python2/dh_python3 are stable and should still be available in jessie, but
then using the underlying pybuild system. If you want to improve the tools,
please do. Sarcasm doesn't bring any improvement by itself. Please join Piotr.

Matthias

[1] https://lists.debian.org/debian-python/2008/03/msg00021.html
[2] https://lists.debian.org/debian-python/2009/02/msg00002.html
[3] https://lists.debian.org/debian-devel/2009/02/msg00431.html
[4] http://www.python.org/dev/peps/pep-3147/
Barry Warsaw
2013-02-07 02:00:02 UTC
Permalink
Okay, fortunately, no bands are practicing tonight and no kids need homework
help, so let's see if I can answer some of these questions. :)
Post by Barry Warsaw
Speaking with many hats on, I think Debian Python has done a very admirable
job of integrating the Python ecosystem with Debian.
One of the pain points for users (I've had folks ask me this face-to-face)
with that stuff was site-packages vs dist-packages. With your various Python
hats on, can you explain why not just use "packages" instead of
"site-packages" and "dist-packages"?
Fundamentally, this comes down to a conflict between Python's historical
defaults and Debian's interpretation of the FHS. Let me just stipulate that
I'm not casting blame, or saying that anybody is doing anything wrong. I'm
not interested in that discussion, though I've had it many times. It is what
it is.

Old timers like me will remember the days when *nix systems reserved
/usr/local for stuff you downloaded and installed from source (i.e. most
everything on a usable system :). There was no /opt or FHS. This was
codified in the first auto-configuration scripts. I don't remember when
Python adopted the configure regime, but as long as I can remember (going back
at least to 1994), a default build-from-source of Python installed into
/usr/local. When site-packages was added,
/usr/local/lib/pythonX.Y/site-packages was the most logical place to put it.

Predating my involvement with Debian, I remember problem reports where
developers of Python, and others who install from source for various reasons,
would break their systems when they used the wrong Python executable to
install third party packages outside of the Debian packaging system. This was
because Debian allowed /usr/local/lib/pythonX.Y/site-packages to be used for
third party packages installed outside the Debian packaging system, using the
*system* Python, i.e. /usr/bin/python. This meant that if I installed
something for /usr/bin/python into /usr/local/lib/pythonX.Y/site-packages it
could easily break my /usr/local/bin/python, and possibly vice versa.

I think it was at a Pycon years ago that Matthias and I discussed this
problem. At the time (and probably still so), it didn't seem like either
Debian or Python was going to change its policy, so we had to find a way to
avoid the conflict and let both communities live in peace. Matthias's
solution was the use of dist-packages for Debian's system Python, which would
be ignored by a /usr/local/bin Python. Also, system Python would ignore
/usr/local/lib/pythonX.Y/site-packages (but not .../dist-packages), thus
avoiding all conflict. It seemed elegant at the time, and I still think this
is a reasonable compromise, even though it does cause some tooling problems,
which have to be patched in Debian.
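
A quick way to see the result of that compromise on a given box is a sketch
like the following (it assumes Debian's patched system Python; the exact
directories vary with the Python version):

    import site, sys
    # Debian's /usr/bin/python searches the /usr/lib/.../dist-packages and
    # /usr/local/lib/.../dist-packages directories but no
    # /usr/local/.../site-packages; a /usr/local/bin/python built from
    # upstream source does the opposite.
    print(sys.executable)
    for d in site.getsitepackages():
        print(d)
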
The right way (IMO) would have been to put site packages in
/usr/local/lib/pythonX.Y/packages and dist ones in
/usr/lib/pythonX.Y/packages. Right now I have
/usr/local/lib/pythonX.Y/dist-packages and /usr/lib/pythonX.Y/dist-packages,
why is /usr/local dist-packages instead of site-packages? /usr/local is
clearly not the location for distro installed packages.
That was my position, i.e. that system Python shouldn't have any path from
/usr/local on sys.path, but that was very strongly (at the time) disputed by
Debian users. To be fair, the Debian users at the time said (and maybe still
say) that the right solution is for a default from-source build of Python to
install into /opt/local and not /usr/local, but again, that would conflict
with years of established use by upstream.

That's the historical background as I remember it anyway.
Why did Debian have to invent /usr/share/pyshared and symlink farms in
/usr/lib/pythonX.Y instead of upstream having something like that in
the default install and search paths?
Because upstream doesn't really care (or didn't until my PEPs 3147 and 3149 in
Python 3.2) about multiple versions of packages co-existing in harmony, and
because upstream Python requires .pyc files to live next to (FSVO, see below)
the .py files.

Debian was the first place that I recall where multiple versions of Python
could be co-installed. Let's say you have both Python 2.6 and 2.7 installed,
and you have a module called foo.py that is source-level compatible with both.
The problem is that Python has never guaranteed that .pyc files would be
compatible across Python versions. It's never said they wouldn't be, but in
practice the byte code cached in .pyc files always changes, due to new
features or bug fixes in the interpreter between major version numbers.

So in Debian you have a situation where you want to share foo.py across all
supported and installed Pythons, but where you cannot share .pyc files because
they aren't compatible. You want to share .py files 1) to keep package sizes
smaller, 2) to consume less disk space, 3) because you don't actually know
which versions of Python the target system has installed. Just because
version W of Debian supports PythonX.Y and PythonA.B doesn't mean your system
has both installed, so you'd rather not pay for the penalty of packaging up
two identical foo.py's for both of them, just because they'll live in
different locations on the file system. And they'd have to live in different
paths because of Python's requirement for nearby .pyc files combined with
cross-version incompatibility of .pyc files.

(Aside: there's no getting around paying this cost for extension modules since
they are binary .so files, but there are *way* fewer of these than
pure-Python, theoretically cross-version compatible source files.)
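
The incompatibility is easy to see directly: every interpreter stamps its .pyc
files with a version-specific magic number. A tiny sketch, using the imp module
as it existed in the Python 2 / early Python 3 days:

    import imp, sys
    # The magic number forms the first bytes of every .pyc file and changes
    # between 2.6, 2.7, 3.2, ..., which is why a .pyc written by one version
    # cannot simply be reused by another.
    print(sys.version.split()[0] + " " + repr(imp.get_magic()))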

There have been several regimes to manage this, all of them to the best of my
knowledge using symlink farms to manage the sharing of .py files with the
version-specific-ness of .pyc files. IMHO, dh_python2 takes the best approach
to this, but previous regimes such as python-support, and probably
python-central are still in use.

This was finally solved by my work on PEPs 3147 and 3149, which introduced the
__pycache__ directory in Python 3.2 and tagged .so and .pyc file names.
(Aside: __pycache__ isn't strictly necessary to support this, but was a nice
additional feature suggested by Guido.)

Now in the Python 3 world, you *can* co-install multiple versions and even
though the .pyc and .so files are still version-specific, they can co-exist
peacefully. PythonX.Y will only try to load foo.cpython-XY.pyc and ignore
foo.cpython-AB.pyc, instead of overwriting it, which would have happened
before. Unfortunately, this work came too late to be included for Python 2,
so we still need the symlink farms for that (obsolete <wink>) version.

But if you look at how we do Python 3 packages now, you'll see
/usr/lib/python3/dist-packages with shared .py source files, version-specific
.pyc inside __pycache__ directories, and ABI tagged .so files co-existing with
no symlink farms. Three cheers for progress!

(Aside: you'll still see a /usr/lib/python3.X but that's for version-specific
stdlib only.)
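
The tagging is easy to poke at from the interpreter itself; here is a small
sketch for a Python 3 of that era (3.2/3.3, using the imp module; the tag in
the comments obviously depends on which interpreter runs it):

    import imp
    # Each interpreter has its own cache tag, and its .pyc files are written
    # under __pycache__ using that tag, so several Pythons can share one foo.py.
    print(imp.get_tag())                    # e.g. 'cpython-33'
    print(imp.cache_from_source('foo.py'))  # e.g. '__pycache__/foo.cpython-33.pyc'
    print(imp.source_from_cache('__pycache__/foo.cpython-33.pyc'))  # 'foo.py'
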
The location of .pyc files that are built at install time doesn't feel
FHS-correct to me, /var/cache/python/X.Y/ seems better.
It probably is, but upstream Python can only handle .pyc files living next to
(or in the post PEP 3147 world, very nearby) their .py files. I suppose you
could use Python 3.3's importlib to write an importer that codified this
policy, but leaving aside whether it would be worth it, you'd probably have a
similar (or worse) tooling problem as with dist-packages, since there's
probably many packages that assume the .pyc lives near the .py (and some have
even had bugs caused by the PEP 3147 reorganization alone, not all of which
are fixed I'm sure).
Debian's Python build helper tools are still breeding like rabbits,
there is a new one in experimental. I guess because the current ones
dh_python2/dh_python3 don't handle packages that contain only code
that runs on both python2 and python3 without changes.
Not exactly. dh_python2 and dh_python3 are really good IMHO, but one problem
is that while dh has a lot of helpers to make it easy to write d/rules files
for common case setup.py based Python 2 packages, it doesn't know anything
about Python 3. Take a look at all the overrides you have to add for
libraries that are both Python 2 and 3 compatible, as described in
http://wiki.debian.org/Python/LibraryStyleGuide

Among the things that pybuild improves is dh support for Python 3, so you
really can almost always write just a 3 line d/rules file, even for libraries
that support both Python 2 and 3, with automatic running of unittests, etc.
That's win enough, IMHO.

Piotr can perhaps speak in more detail about it, but pybuild is more ambitious
still, and IIUC, really just builds on top of dh_python2 and dh_python3 by
supporting several different upstream build systems (default is to
auto-detect, e.g. distutils-based, configure-based, etc.) with lots of
overrides and customization possible. For example, there are several ways
that a library's test suite can be invoked so having good auto-detection of
that, along with convenient ways to customize it are important.

The pybuild manpage has all the gory details, but I think with pybuild, we're
finally able to promote really easy to write d/rules files for the majority of
Python packages across both major version stacks.

Hope that helps!
-Barry
Steve Langasek
2013-02-07 16:40:02 UTC
Permalink
Post by Paul Wise
Why did Debian have to invent /usr/share/pyshared and symlink farms in
/usr/lib/pythonX.Y instead of upstream having something like that in
the default install and search paths?
This is all resolved now in python3. There is no more /usr/share/pyshared
for python3, and all packages install directly to
/usr/lib/python3/dist-packages.

It took a while to get there, which is regrettable, but when such changes
need to be done upstream in order to be done right, and you have a large
established user base like python does, these things can be slow moving.
Post by Paul Wise
The location of .pyc files that are built at install time doesn't feel
FHS-correct to me, /var/cache/python/X.Y/ seems better.
I think this is a minor point, honestly. Also, I believe there are other
precedents in Debian for this kind of install-time bytecode compilation
(emacs lisp, IIRC?).
Post by Paul Wise
Debian's Python build helper tools are still breeding like rabbits,
there is a new one in experimental. I guess because the current ones
dh_python2/dh_python3 don't handle packages that contain only code
that runs on both python2 and python3 without changes.
pybuild is a necessary adjunct to dh_python3 that provides the dh(1) build
system integration, not an alternate build helper tool. I don't think
"breeding like rabbits" is anywhere near the mark.
--
Steve Langasek Give me a lever long enough and a Free OS
Debian Developer to set it on, and I can move the world.
Ubuntu Developer http://www.debian.org/
***@ubuntu.com ***@debian.org
Russ Allbery
2013-02-08 02:10:01 UTC
Permalink
Well, relative to other languages, I think Python's had the most changes
with regards to build helper tools -- there was dh_pycentral, and
dh_pysupport in the past which did more or less the same thing in
different ways, and now we have dh_python2, and dh_python3. In contrast,
Mono stuff have only had the dh_cli* set of things, and Java only had
the javahelper bunch of things.
Java also has Maven helper tools (maven-debian-helper and
maven-repo-helper).

The iterating on Python tools has partly been because there were various
problems with doing the right thing and various possible workarounds for
integration issues that were then resolved in part by upstream changing
things there so that Debian could do what we need to do. I don't think
the quantity of tools is itself a sign of a problem. (There have been
other things around Python package maintenance that have been problems,
though, but seem to be getting better.)
--
Russ Allbery (***@debian.org) <http://www.eyrie.org/~eagle/>
Chow Loong Jin
2013-02-08 02:10:01 UTC
Permalink
Post by Steve Langasek
Post by Paul Wise
Debian's Python build helper tools are still breeding like rabbits,
there is a new one in experimental. I guess because the current ones
dh_python2/dh_python3 don't handle packages that contain only code
that runs on both python2 and python3 without changes.
pybuild is a necessary adjunct to dh_python3 that provides the dh(1) build
system integration, not an alternate build helper tool. I don't think
"breeding like rabbits" is anywhere near the mark.
Well, relative to other languages, I think Python's had the most changes with
regards to build helper tools -- there was dh_pycentral, and dh_pysupport in the
past which did more or less the same thing in different ways, and now we have
dh_python2, and dh_python3. In contrast, Mono stuff have only had the dh_cli*
set of things, and Java only had the javahelper bunch of things.
--
Kind regards,
Loong Jin
Steve Langasek
2013-02-08 02:20:02 UTC
Permalink
Post by Steve Langasek
Post by Paul Wise
Debian's Python build helper tools are still breeding like rabbits,
there is a new one in experimental. I guess because the current ones
dh_python2/dh_python3 don't handle packages that contain only code
that runs on both python2 and python3 without changes.
pybuild is a necessary adjunct to dh_python3 that provides the dh(1) build
system integration, not an alternate build helper tool. I don't think
"breeding like rabbits" is anywhere near the mark.
Well, relative to other languages, I think Python's had the most changes
with regards to build helper tools -- there was dh_pycentral, and
dh_pysupport in the past which did more or less the same thing in
different ways, and now we have dh_python2, and dh_python3.
Yes, but this is not proliferation. There's one standard tool for python2 -
dh_python2 - and one for python3 - dh_python3. (The languages have
sufficiently different build-time and install-time requirements that it
makes sense to have one for each.)
In contrast, Mono stuff have only had the dh_cli* set of things, and Java
only had the javahelper bunch of things.
Heh. No, there's javahelper, and maven-debian-helper, and
maven-repo-helper, and I'm pretty sure there was another one for another
repo system besides maven.
--
Steve Langasek Give me a lever long enough and a Free OS
Debian Developer to set it on, and I can move the world.
Ubuntu Developer http://www.debian.org/
***@ubuntu.com ***@debian.org
Chow Loong Jin
2013-02-08 02:30:02 UTC
Permalink
Post by Steve Langasek
[...]
Well, relative to other languages, I think Python's had the most changes
with regards to build helper tools -- there was dh_pycentral, and
dh_pysupport in the past which did more or less the same thing in
different ways, and now we have dh_python2, and dh_python3.
Yes, but this is not proliferation. There's one standard tool for python2 -
dh_python2 - and one for python3 - dh_python3. (The languages have
sufficiently different build-time and install-time requirements that it
makes sense to have one for each.)
Agreeably so, but I don't think the justification applies for having both
python-support and python-central. Thank $deity those have been replaced by
dh_python2. A significant number of packages still show up in reverse-depends
-b, though.
Post by Steve Langasek
In contrast, Mono stuff have only had the dh_cli* set of things, and Java
only had the javahelper bunch of things.
Heh. No, there's javahelper, and maven-debian-helper, and
maven-repo-helper, and I'm pretty sure there was another one for another
repo system besides maven.
Hmm, okay, I hadn't known about those. I guess that makes the Mono build helpers
the outlier then. Why can't everyone else be as awesome as us and have proper
Debhelper-based build helpers written in Perl? ;-)
--
Kind regards,
Loong Jin
Jon Dowland
2013-02-05 17:30:01 UTC
Permalink
Post by Lennart Sorensen
Post by Jon Dowland
As a Haskell developer, I find cabal much more convenient than nothing,
in the situation where the library I want is not packaged by Debian yet.
If I want my haskell libraries and programs to reach a wide audience, I
need to learn Cabal anyway.
If you are writing libraries to add to the language, then I don't consider
you a normal developer using the language.
Well, my point would still stand if I were writing a mere user-oriented
Haskell program, for that matter.
Post by Lennart Sorensen
You generally don't have to because things are in Debian archives already.
It can be a chicken-and-egg problem. I often see .debs where an apt
repository does exist (spotify, dropbox I think, google chrome), and many
situations where one does not (humble indie bundle).
Post by Lennart Sorensen
If you want bleeding edge, then you are not a normal user and you
certainly aren't a system administrator that wants to keep a controlled
system they can reproduce.
I must admit I'm losing track of precisely what you are arguing here. I guess
it's not "everything that matters will be packaged in Debian", hence the
previous paragraph re: external apt repositories. Yet I don't suppose you're
arguing that availability in an external apt repository is any guarantee of
quality (or at least I hope you're not). I don't think we're necessarily
talking about bleeding edge, either: if something is not packaged in Debian,
it is not necessarily bleeding edge.
Post by Lennart Sorensen
I know dpkg --get-selections will tell me all the software installed on
the system, so I can do the same on another one. If yet another package
manager gets involved, I have to know about it and do something different
to handle that. That's not a good thing.
True. But you also lose lots of other information, such as what is marked
as automatically installed, the contents of your debconf DB and the
corresponding changes in /etc, non-corresponding changes in /etc… dpkg
--get-selections is not, and has never been, a solution to the problem you
are describing.
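Even a rough equivalent takes several commands and still misses the /etc
side of things (untested sketch; debconf-get-selections comes from the
debconf-utils package):

  dpkg --get-selections '*' > selections.txt   # install/hold/purge state
  apt-mark showauto > auto.txt                 # which packages are marked automatic
  debconf-get-selections > debconf.txt         # debconf answers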
Chow Loong Jin
2013-01-31 01:50:02 UTC
Permalink
Post by Wouter Verhelst
Post by Chow Loong Jin
Having multiple package managers which don't know about each other on a
system
Post by Chow Loong Jin
is evil™ (but in some cases, can be managed properly).
Meh, it’s evil, period.
Well, really, the manageable case I was talking about was pip/easy_install
inside a Python virtualenv, which is basically a Python installation that is
kept completely distinct from the system's installation and doesn't touch
anything dpkg is supposed to be managing.

It's useful for installing crap from PyPI in a more or less standard,
distro-agnostic manner.
--
Kind regards,
Loong Jin
Philipp Kern
2013-01-31 11:10:02 UTC
Permalink
Chow,
Post by Chow Loong Jin
Well, really, the manageable case I was talking about was pip/easy_install
inside a Python virtualenv, which is basically a Python installation that is
kept completely distinct from the system's installation and doesn't touch
anything dpkg is supposed to be managing.
it's not really "completely distinct". That aside, that's basically what you get
with the Go toolset if you change GOPATH: you can freely install libraries,
just like with easy_install, and use them from your programs.

(Except that the end product is always a binary that doesn't care if a library
changes, contrary to Python.)
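
Roughly like this (untested sketch; the import path is just an example):

  export GOPATH=$HOME/gocode
  go get code.google.com/p/codesearch/index            # fetched and built into $GOPATH
  go install code.google.com/p/codesearch/cmd/cindex   # binary lands in $GOPATH/bin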

Kind regards
Philipp Kern
Barry Warsaw
2013-01-31 21:30:02 UTC
Permalink
Post by Chow Loong Jin
Well, really, the manageable case I was talking about was pip/easy_install
inside a Python virtualenv, which is basically a Python installation that is
kept completely distinct from the system's installation and doesn't touch
anything dpkg is supposed to be managing.
It's useful for installing crap from PyPI in a more or less standard,
distro-agnostic manner.
This arrangement works pretty well for me in practice, especially since
`virtualenv --system-site-packages` can usually give you the mix you need.
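For example (sketch; the package name is only a placeholder):

  virtualenv --system-site-packages ~/venvs/myapp
  ~/venvs/myapp/bin/pip install some-pypi-package   # goes into the venv, dpkg-managed files stay untouched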

Cheers,
-Barry
Chow Loong Jin
2013-02-01 01:20:01 UTC
Permalink
Post by Barry Warsaw
This arrangement works pretty well for me in practice, especially since
`virtualenv --system-site-packages` can usually give you the mix you need.
Yes, it does in most cases, but --system-site-packages can do the wrong thing
in certain edge cases, like trying to find a sane interface to libmagic in Python.

There are at least three different Python modules available on PyPI that provide
the magic module, i.e. "import magic", and *all* of them have *different* APIs.
If you're unlucky, you end up deploying on a distro (e.g. in shared hosting or in
a PaaS like Openshift) that has the wrong magic module installed on the system,
and your application/library crashes and burns in a very spectacular manner.
--
Kind regards,
Loong Jin
Clint Byrum
2013-02-01 19:10:02 UTC
Permalink
Post by Chow Loong Jin
Having multiple package managers which don't know about each other on a system
is evil™ (but in some cases, can be managed properly).
Robert Collins did a nice write up on this very subject not long ago:

http://rbtcollins.wordpress.com/2012/08/27/why-platform-specific-package-systems-exist-and-wont-go-away/
Lennart Sorensen
2013-02-01 20:40:02 UTC
Permalink
Post by Clint Byrum
Post by Chow Loong Jin
Having multiple package managers which don't know about each other on a system
is evil™ (but in some cases, can be managed properly).
http://rbtcollins.wordpress.com/2012/08/27/why-platform-specific-package-systems-exist-and-wont-go-away/
And I think the best part is comment 19 (from September 15, 2012).
Much better than the article itself.
--
Len Sorensen
Tollef Fog Heen
2013-01-30 10:10:02 UTC
Permalink
]] Michael Stapelberg
Post by Michael Stapelberg
I am not familiar with the Ruby situation, I only know that many Ruby
developers seem to be very angry at Debian people. Is there a summary of
the events that I could read?
Debian's rubygems package installed gems to a directory not in $PATH,
meaning most documentation from upstreams on how to use the gems didn't
actually work. This has since been fixed and newer rubygems install
gems to /usr/local.

[...]
Post by Michael Stapelberg
Sysadmins are not developers and therefore don’t need to learn any new
commands.
If you believe this is true, you should read up on the whole idea of
devops.
--
Tollef Fog Heen
UNIX is user friendly, it's just picky about who its friends are
Marc Haber
2013-01-30 14:10:02 UTC
Permalink
On Tue, 29 Jan 2013 22:57:03 +0100, Michael Stapelberg
Post by Michael Stapelberg
Post by Wouter Verhelst
"consistency across multiple platforms" has been claimed as a benefit
for allowing "gem update --system" to replace half of the ruby binary
package, amongst other things. It wasn't a good argument then, and it
isn't a good argument now.
I am not familiar with the Ruby situation, I only know that many Ruby
developers seem to be very angry at Debian people. Is there a summary of
the events that I could read?
In my past experience it is the usual case where an upstream and/or
its community takes it as a personal offense when a user is not using
the latest and greatest software version[1] and does not understand
that one might be mandated to use what a third party (here: Debian)
saw fit to freeze for a release a year or more ago.

This attitude can be found in many upstream communities (for
example, Open/LibreOffice and KDE), but it is even harder with ruby:
ruby packages are so small that there are very many of them, they have
something resembling a release process for a rubygems bundle package
(introducing the release time lag once for them and once for us), and
our packaging of their software was done in a way that makes ruby
packages the biggest nightmare to backport that I have ever seen[2]
(gem2deb, 'nuff said).

I do sincerely hope that most of those things will be history when
wheezy is released.

Greetings
Marc

[1] up to "what, you're only using what we released last week? Please
upgrade to our git head and check back."
[2] I think it was between woody and sarge when more and more packages
began to rely on a debhelper version that wasn't backportable because
it used a perl language construct that was not available in the
then-stable perl
--
-------------------------------------- !! No courtesy copies, please !! -----
Marc Haber | " Questions are the | Mailadresse im Header
Mannheim, Germany | Beginning of Wisdom " | http://www.zugschlus.de/
Nordisch by Nature | Lt. Worf, TNG "Rightful Heir" | Fon: *49 621 72739834
Thorsten Glaser
2013-01-30 21:30:03 UTC
Permalink
Post by Marc Haber
In my past experience it is the usual case where and upstream and/or
its community takes at as a personal offense when a user is not using
the latest and greatest software version[1] and does not understand
I think the Ruby case involved more:

“What, you’re not running version x of the dependency y but a newer
one? Ignore the fact that version x is vulnerable, because that’s
the one you *must* be using for my code! Ah, and no, the dependency
of package z on version x+1 of package y is not a problem, because
with our cool package manager you can install them in parallel!”

(Somewhat remembering and paraphrasing what I read on the Planet.)

bye,
//mirabilos
Antonio Terceiro
2013-01-31 03:10:02 UTC
Permalink
Post by Marc Haber
In my past experience it is the usual case where and upstream and/or
its community takes at as a personal offense when a user is not using
the latest and greatest software version[1] and does not understand
“What, you’re not running version x of the dependency y but a newer
one? Ignore the fact that version x is vulnerable, because that’s
the one you *must* be using for my code! Ah, and no, the dependency
of package z of version x+1 of package y is not a problem, because
with out cool package manager you can install them in parallel!”
That was a bit of both, but that happens in pretty much every
language community out there.

I just tried installing some cool thing in Perl, and installing it from
CPAN into $HOME blew up because of incompatibilities with some Perl
package installed via dpkg, and could only test the said package when I
tried installing it from CPAN in a clean chroot.

There is always an impedance mismatch between those who want to build
stable, dependable, predictable systems and those who want to live on
the bleeding edge, and, as said elsewhere in this thread, that's not
exclusive to the Ruby people.

IMO the noise caused by "the Ruby situation" was amplified by the
concurrence of two different factors:

- a lot of leading (and very vocal) Ruby developers were using a system
that had no proper package management (MacOS X), so they didn't care
enough to understand the requirements "from our side". But the Ruby
packages in Debian implemented those requirements.

- that also happened during the boom of Ruby development outside of
Japan -- skyrocketed by the boom of Rails -- where everything was new
and everyone was experimenting like crazy. That led to the
lack-of-stable-APIs and let's-break-our-reverse-dependencies
bandwagons.

Today, the situation is improving from our side, if not by discussing
the relationship, then at least by writing the proper code. With Rubygems
moving into core -- which makes sense, because we cannot fight the fact
that many users of the language won't have a proper package manager
provided by their OS -- we had to deal with it. As a sort of technology
preview, in Wheezy one will be able to have Rubygems detect
dpkg-installed packages¹.

¹ http://packages.debian.org/rubygems-integration

Also, switching the default Ruby version on a per-user/per-project² or
even on a per-system³ basis was made easier.

² http://packages.debian.org/rbenv
³ update-alternatives --config ruby

From the Ruby community side the situation is a lot better. You will
even see the occasional "don't you break a stable API" rant within the
Ruby community.
--
Antonio Terceiro <***@debian.org>
Hilko Bengen
2013-01-30 14:10:02 UTC
Permalink
Post by Wouter Verhelst
Post by Michael Stapelberg
Hi Hilko,
Post by Hilko Bengen
This is a pity for those of us who don't really subscribe to "get
everything from github as needed" model of distributing software.
Yes, but at the same time, it makes Go much more consistent across
multiple platforms.
"consistency across multiple platforms" has been claimed as a benefit
for allowing "gem update --system" to replace half of the ruby binary
package, amongst other things. It wasn't a good argument then, and it
isn't a good argument now.
I think that "Consistency across multiple platforms" can be a useful
argument. But if either upstream's or the distribution's view of How
Things Ought To Work is ignored for the sake of consistency, something
has gone wrong. (Which is my impression of what happened with Ruby.)
Post by Wouter Verhelst
The problem with having a language-specific "get software" command is
that it introduces yet another way to get at software. There are many
reasons why that's a bad idea, including, but not limited to: [...]
On the other hand it gives users the power to get things done, without
having to care how they can build "proper" distro packages.

People expect upstream's "get software" command to Just Work, just as
they expect "./configure && make && sudo make install" or "perl
Makefile.PL && make && make test && sudo make install" to Just Work. In
my experience autoconf and perl-based build systems usually don't write
all over the parts of the system that dpkg is responsible for -- they
put their stuff into /usr/local and I can even override that using
documented environment variables or command line parameters.

This is exactly the kind of behavior I would expect when running "go
install" on a Debian system. It should be useful as intended by its
authors but leave certain parts of the system alone that are off-limits
by convention.

Cheers,
-Hilko
Hilko Bengen
2013-01-30 13:40:02 UTC
Permalink
Post by Michael Stapelberg
Post by Hilko Bengen
This is a pity for those of us who don't really subscribe to "get
everything from github as needed" model of distributing software.
Yes, but at the same time, it makes Go much more consistent across
multiple platforms.
Apparently, upstream's idea about the role of library code does not
really match the ideas that have been built into Debian and other Linux
distributions. This has happened before: Ruby (1.8, 1.9, ironruby,
rubinius, etc.), Java (we have multiple JRE implementations, Maven),
even Perl (perlbrew, local::lib, etc.) and will happen again.

This in itself is not a problem. We just have to figure out where the
diverging ideas are encoded (I guess: primarily the "go" tool that was
introduced with Go 1.0) and see if we can tweak them to a compromise
that does not cause surprise and confusion.
Post by Michael Stapelberg
Post by Hilko Bengen
Since one of the stated goals of the Go language and also the golang
compiler are fast builds: How about using the Emacs / Common-Lisp /
Python approach: Ship only source files in the .deb packages and build
object files during post-install?
What advantage would that have? It won’t change the fact that we will
only distribute Go libraries for building Debian binary packages; the
main difference would be that the library packages could be
Architecture: all.
Post by Michael Stapelberg
Post by Hilko Bengen
How does gccgo fit into this picture, apart from the problem that object
files generated using gccgo are not compatible with those generated
using golang-go?
I tried to explain that one cannot use gccgo to create dynamically
linked shared libraries from Go libraries. At least not at the moment.
All it can do is dynamically linked executables.
I drew a different conclusion from Ian's messages in the thread you
mentioned (see the quotes below). Apparently, one *can* build shared
libraries using gccgo, but they are not currently usable via dlopen().
My impression was that this means that regular use of shared libraries
*is* possible with gccgo. And, indeed, my attempts at building
codesearch using gccgo were successful -- with the sparse, index, and
regexp packages compiled as shared libraries.

Ian just recommends against using shared libraries pretty much for
"consistency" reasons.

Cheers
-Hilko
Post by Michael Stapelberg
gccgo -shared -o lib.so file.go
There is no way to use the go tool to build a shared library.
(http://article.gmane.org/gmane.comp.lang.go.general/81686)
Post by Michael Stapelberg
When using gccgo you can create and use shared libraries. Loading new
code via dlopen doesn't really work; the Go initialization code for
the dlopen'ed shared library will not be run.
(http://article.gmane.org/gmane.comp.lang.go.general/81567)
Post by Michael Stapelberg
I don't really recommend providing packages as shared libraries even
when using gccgo, it's not what Go programmers expect.
(http://article.gmane.org/gmane.comp.lang.go.general/81687)
Michael Stapelberg
2013-01-30 15:00:02 UTC
Permalink
Hi Hilko,
Post by Hilko Bengen
I drew a different conclusion from Ian's messages in the thread you
mentioned (see the quotes below). Apparently, one *can* build shared
libraries using gccgo, but they are not currently usable via dlopen().
My impression was that this means that regular use of shared libraries
*is* possible with gccgo. And, indeed, my attempts at building
codesearch using gccgo were successful -- with the sparse, index, and
regexp packages compiled as shared libraries.
Can you please list the full instructions you did to accomplish building
sparse/index/regexp as a shared library?

I have asked numerous times in various places and nobody could give me
instructions on how to build shared libraries.
--
Best regards,
Michael
Hilko Bengen
2013-01-31 12:10:01 UTC
Permalink
Post by Michael Stapelberg
Can you please list the full instructions you did to accomplish building
sparse/index/regexp as a shared library?
Sure. See the Makefile at the end of this mail. Please note that I don't
really know what I'm doing here -- just tried a few things and tried to
make sense of error messages until I got working binaries.

I put symlinks to the index, sparse, regexp directories into
src/code.google.com/p/codesearch and added -Isrc to every gccgo
invocation, so that the compiler looks for the local imports in the
right place. For the shared libraries, I added

1. -shared -fPIC

2. -fno-split-stack
Otherwise I could not link executables and got the following error
message:
/usr/bin/ld.bfd.real: cindex: hidden symbol `__morestack' in
/usr/lib/gcc/x86_64-linux-gnu/4.7/libgcc.a/morestack.o) is referenced
by DSO
/usr/bin/ld.bfd.real: final link failed: Bad value

When the libraries are put into
src/code.google.com/p/codesearch/lib{sparse,regexp,index}.so, they are
subsequently found by the compiler -- and apparently used for imports,
even if the source files are missing.

I have not yet figured out how to specify the full namespace-related
path to the linker: -lcode.google.com/p/codesearch/index causes ld
to look for libcode.google.com/p/codesearch/index.so, which is
almost but not quite what is needed.

Cheers,
-Hilko

PREFIX=src/code.google.com/p/codesearch
GCCGO_FLAGS=-Isrc
GCCGO_LIBFLAGS = -shared -fPIC -fno-split-stack
GCCGO_LDFLAGS = -L$(PREFIX)
GCCGO_EXE_LDFLAGS = $(GCCGO_LDFLAGS) -Wl,-R,$(PREFIX)

# symlink the library sources to where the import paths expect them
dirs:
	mkdir -p $(PREFIX)
	ln -sf `pwd`/regexp `pwd`/index `pwd`/sparse $(PREFIX)/

LIBS=sparse regexp index

DYNLIBS=$(patsubst %,$(PREFIX)/lib%.so,$(LIBS))

dynlibs: $(DYNLIBS)

# shared libraries; the explicit rules add -l flags for their dependencies
$(PREFIX)/lib%.so: $(PREFIX)/%/*.go
	gccgo $(GCCGO_FLAGS) $(GCCGO_LIBFLAGS) -o $@ $^
$(PREFIX)/libregexp.so: $(PREFIX)/regexp/*.go
	gccgo $(GCCGO_FLAGS) $(GCCGO_LIBFLAGS) -o $@ $^ $(GCCGO_LDFLAGS) -lsparse
$(PREFIX)/libindex.so: $(PREFIX)/index/*.go
	gccgo $(GCCGO_FLAGS) $(GCCGO_LIBFLAGS) -o $@ $^ $(GCCGO_LDFLAGS) -lsparse -lregexp

BINS=cindex csearch cgrep

# executables, dynamically linked against the libraries above
cindex: cmd/cindex/*.go
	gccgo $(GCCGO_FLAGS) -o $@ $^ $(GCCGO_EXE_LDFLAGS) -lindex
csearch: cmd/csearch/*.go
	gccgo $(GCCGO_FLAGS) -o $@ $^ $(GCCGO_EXE_LDFLAGS) -lregexp -lindex
cgrep: cmd/cgrep/*.go
	gccgo $(GCCGO_FLAGS) -o $@ $^ $(GCCGO_EXE_LDFLAGS) -lregexp

bins: $(DYNLIBS) $(BINS)
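
The targets are meant to be run from the top-level codesearch directory
(the -Wl,-R RPATH is relative), roughly like this:

  make dirs
  make dynlibs bins
  ldd ./cindex                 # libindex.so etc. should resolve via the relative RPATH
  ./cindex ~/some/source/tree  # quick smoke test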
Matthias Klose
2013-01-31 14:00:02 UTC
Permalink
Post by Hilko Bengen
2. -fno-split-stack
Otherwise I could not link executables and got the following error
/usr/bin/ld.bfd.real: cindex: hidden symbol `__morestack' in
/usr/lib/gcc/x86_64-linux-gnu/4.7/libgcc.a/morestack.o) is referenced
by DSO
/usr/bin/ld.bfd.real: final link failed: Bad value
better use -fuse-ld=ld.gold, or in a packaging context, build-depend on
binutils-gold. gold should be available on all archs where gccgo is available.

Matthias
Michael Stapelberg
2013-02-04 22:40:02 UTC
Permalink
Hi Matthias,
Post by Matthias Klose
Post by Hilko Bengen
2. -fno-split-stack
Otherwise I could not link executables and got the following error
/usr/bin/ld.bfd.real: cindex: hidden symbol `__morestack' in
/usr/lib/gcc/x86_64-linux-gnu/4.7/libgcc.a/morestack.o) is referenced
by DSO
/usr/bin/ld.bfd.real: final link failed: Bad value
better use -fuse-ld=ld.gold, or in a packaging context, build-depend on
binutils-gold. gold should be available on all archs where gccgo is available.
This doesn’t work unfortunately:

1. -fuse-ld=ld.gold results in:
gccgo: error: unrecognized command line option ‘-fuse-ld=ld.gold’
This is with gccgo 4:4.7.2-1 (current Debian testing)

2. Installing binutils-gold doesn’t help either. When calling
gccgo -Wl,-debug -I /tmp/godyn/src -shared -fPIC -o /tmp/build/libsparse.so ./set.go
I get:

ld_file_name = /usr/bin/ld
c_file_name = /usr/bin/gccgo
nm_file_name = /usr/bin/nm
strip_file_name = /usr/bin/strip
c_file = /tmp/ccks2q2o.c
o_file = /tmp/ccjfIMxS.o
COLLECT_GCC_OPTIONS = '-fsplit-stack' '-I' '/tmp/godyn/src' '-shared' '-fPIC' '-o' '/tmp/build/libsparse.so' '-shared-libgcc' '-mtune=generic' '-march=x86-64'
COLLECT_GCC = gccgo
COMPILER_PATH = /usr/lib/gcc/x86_64-linux-gnu/4.7/:/usr/lib/gcc/x86_64-linux-gnu/4.7/:/usr/lib/gcc/x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/4.7/:/usr/lib/gcc/x86_64-linux-gnu/
LIBRARY_PATH = /usr/lib/gcc/x86_64-linux-gnu/4.7/:/usr/lib/gcc/x86_64-linux-gnu/4.7/../../../x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/4.7/../../../../lib/:/lib/x86_64-linux-gnu/:/lib/../lib/:/usr/lib/x86_64-linux-gnu/:/usr/lib/../lib/:/usr/lib/gcc/x86_64-linux-gnu/4.7/../../../:/lib/:/usr/lib/

/usr/bin/ld --sysroot=/ --build-id --no-add-needed --eh-frame-hdr -m elf_x86_64 --hash-style=both -shared -o /tmp/build/libsparse.so /usr/lib/gcc/x86_64-linux-gnu/4.7/../../../x86_64-linux-gnu/crti.o /usr/lib/gcc/x86_64-linux-gnu/4.7/crtbeginS.o -L/usr/lib/gcc/x86_64-linux-gnu/4.7 -L/usr/lib/gcc/x86_64-linux-gnu/4.7/../../../x86_64-linux-gnu -L/usr/lib/gcc/x86_64-linux-gnu/4.7/../../../../lib -L/lib/x86_64-linux-gnu -L/lib/../lib -L/usr/lib/x86_64-linux-gnu -L/usr/lib/../lib -L/usr/lib/gcc/x86_64-linux-gnu/4.7/../../.. /tmp/ccRsBUuQ.o -lgobegin -lgo -lm --wrap=pthread_create -lgcc_s -lc -lgcc_s /usr/lib/gcc/x86_64-linux-gnu/4.7/crtendS.o /usr/lib/gcc/x86_64-linux-gnu/4.7/../../../x86_64-linux-gnu/crtn.o
/usr/bin/ld.gold.real: error: /tmp/ccRsBUuQ.o: could not convert call to '__morestack' to '__morestack_non_split'
/usr/bin/ld.gold.real: error: /tmp/ccRsBUuQ.o: could not convert call to '__morestack' to '__morestack_non_split'
/usr/bin/ld.gold.real: error: /tmp/ccRsBUuQ.o: could not convert call to '__morestack' to '__morestack_non_split'
/usr/bin/ld.gold.real: error: /tmp/ccRsBUuQ.o: could not convert call to '__morestack' to '__morestack_non_split'
/usr/bin/ld.gold.real: error: /tmp/ccRsBUuQ.o: could not convert call to '__morestack' to '__morestack_non_split'
/usr/bin/ld.gold.real: error: /tmp/ccRsBUuQ.o: could not convert call to '__morestack' to '__morestack_non_split'
collect2: error: ld returned 1 exit status
[Leaving /tmp/ccks2q2o.c]
[Leaving /tmp/ccjfIMxS.o]
[Leaving /tmp/ccC2k82l.ld]
[Leaving /tmp/ccP48tyP.le]
[Leaving /tmp/build/libsparse.so]
--
Best regards,
Michael
Marcelo
2013-01-31 22:40:02 UTC
Permalink
Post by Hilko Bengen
I have not yet figured out how to specify the full namespace-related
path to the linker yet: -lcode.google.com/p/codesearch/index causes ld
then to look for libcode.google.com/p/codesearch/index.so which is
almost but not quite what is needed.
I haven't tried this with your makefile, but you can use
-l:code.google.com/p/codesearch/index.so (note the colon between -l
and code and the addition of .so at the end).

Maybe this works,

Marcelo
Michael Stapelberg
2013-02-04 23:20:01 UTC
Permalink
Hi Hilko,
Post by Hilko Bengen
Sure. See the Makefile at the end of this mail. Please note that I
[...]
Thanks for the instructions. I reproduced them and got shared libraries
plus dynamically linked binaries.

Aside from details about the split-stack flags, one big question now
arises:

Assuming we ship Go libraries compiled as shared libraries, where do we
get the SONAME from? There is no mechanism for Go libraries to declare
an ABI break. Inventing one and asking all upstream projects to adopt it
seems unlikely to succeed. Ignoring SONAMEs altogether could make
software break on the user’s installation. Then again, the same could
happen to every interpreted language, e.g. Perl.

What do you think?
--
Best regards,
Michael
Paul Wise
2013-02-05 01:10:01 UTC
Permalink
Post by Michael Stapelberg
Assuming we ship Go libraries compiled as shared libraries, where do we
get the SONAME from? There is no mechanism for Go libraries to declare
an ABI break. Inventing one and asking all upstream projects to adopt it
seems unlikely to succeed. Ignoring SONAMEs altogether could make
software break on the user’s installation. Then again, the same could
happen to every interpreted language, e.g. Perl.
Could it be automatically created based on some sort of hash of the
exported symbols and their argument lists?

If not, based on your discussions with upstream I would say that the
culture of the Go development community is incompatible with shared
libraries. So your choices are:

Convince Go upstream to add an officially-supported and encouraged
shared library mechanism that includes ABI mechanisms.

Invent some automatic, Debian-specific method for ABI stuff. Like
Provides: go-foo-abi-<hash> and put that info in shlibs.

Use static libraries, Built-Using and frequent binNMUs.
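
A very rough sketch of how the second option's hash could be derived for a
gccgo-built shared library (it only hashes exported symbol names, so it
would not catch struct layout changes; the library and package names are
made up):

  lib=libindex.so
  abihash=$(nm -D --defined-only "$lib" | awk '{print $3}' | sort | sha256sum | cut -c1-8)
  echo "Provides: golang-codesearch-index-abi-$abihash"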
--
bye,
pabs

http://wiki.debian.org/PaulWise
Steve McIntyre
2013-02-05 11:00:03 UTC
Permalink
Post by Paul Wise
Post by Michael Stapelberg
Assuming we ship Go libraries compiled as shared libraries, where do we
get the SONAME from? There is no mechanism for Go libraries to declare
an ABI break. Inventing one and asking all upstream projects to adopt it
seems unlikely to succeed. Ignoring SONAMEs altogether could make
software break on the user’s installation. Then again, the same could
happen to every interpreted language, e.g. Perl.
FFS, yet another new language where the implementors have refused to
think ahead and consider ABI handling? Idiots. :-(
Post by Paul Wise
Could it be automatically created based on some sort of hash of the
exported symbols and their argument lists?
If not, based on your discussions with upstream I would say that the
culture of the Go development community is incompatible with shared
Convince Go upstream to add an officially-supported and encouraged
shared library mechanism that includes ABI mechanisms.
Invent some automatic, Debian-specific method for ABI stuff. Like
Provides: go-foo-abi-<hash> and put that info in shlibs.
Use static libraries, Built-Using and frequent binNMUs.
Considering the mess that we already have with (for example) Haskell
in this respect, I would vote strongly against accepting or pretending
to support any more languages in this vein.
--
Steve McIntyre, Cambridge, UK. ***@einval.com
"When C++ is your hammer, everything looks like a thumb." -- Steven M. Haflich
Chow Loong Jin
2013-02-05 12:30:02 UTC
Permalink
Post by Steve McIntyre
FFS, yet another new language where the implementors have refused to
think ahead and consider ABI handling? Idiots. :-(
I totally agree with you here.
Post by Steve McIntyre
Considering the mess that we already have with (for example) Haskell
in this respect, I would vote strongly against accepting or pretending
to support any more languages in this vein.
This won't really help matters much -- people who want/need to use the language
will install it for themselves outside of the package manager, leading to an even
bigger mess. Otherwise someone will just end up supporting Debian packages outside
of Debian, which is also suboptimal.
--
Kind regards,
Loong Jin
Joachim Breitner
2013-02-05 15:40:02 UTC
Permalink
Hi,
Post by Steve McIntyre
Post by Paul Wise
Post by Michael Stapelberg
Assuming we ship Go libraries compiled as shared libraries, where do we
get the SONAME from? There is no mechanism for Go libraries to declare
an ABI break. Inventing one and asking all upstream projects to adopt it
seems unlikely to succeed. Ignoring SONAMEs altogether could make
software break on the user’s installation. Then again, the same could
happen to every interpreted language, e.g. Perl.
FFS, yet another new language where the implementors have refused to
think ahead and consider ABI handling? Idiots. :-(
Post by Paul Wise
Could it be automatically created based on some sort of hash of the
exported symbols and their argument lists?
If not, based on your discussions with upstream I would say that the
culture of the Go development community is incompatible with shared
Convince Go upstream to add an officially-supported and encouraged
shared library mechanism that includes ABI mechanisms.
Invent some automatic, Debian-specific method for ABI stuff. Like
Provides: go-foo-abi-<hash> and put that info in shlibs.
Use static libraries, Built-Using and frequent binNMUs.
Considering the mess that we already have with (for example) Haskell
in this respect, I would vote strongly against accepting or pretending
to support any more languages in this vein.
At least to me, my work on Haskell in Debian feels like more than pretending,
and from personal experience with the creators of the language, I have
strong doubts that they are Idiots.

In fact, I don’t see how you can have modern features like
cross-module inlining without potentially having to recompile dependent
packages.

And it is clearly not (anymore) the case that ABIs are not handled in
Haskell. In fact, they are handled in a very precise way that allows us
to guarantee on the package level that users’ installations won’t get
broken. I think our priority should be the user’s experience, and we
should be willing to accept a little infrastructural complication on
our side for that goal.

Also, I’d like to point out that the challenges provided by languages
like OCaml and Haskell have motivated improvements to our infrastructure
that benefit the project in general, such as the BD-Uninstallable
support in wanna-build that has prevented a lot of failed builds (and
hence manual intervention).

Greetings,
Joachim
--
Joachim "nomeata" Breitner
Debian Developer
***@debian.org | ICQ# 74513189 | GPG-Keyid: 4743206C
JID: ***@joachim-breitner.de | http://people.debian.org/~nomeata
Adam Borowski
2013-02-05 16:10:01 UTC
Permalink
Post by Joachim Breitner
At least to me my work on Haskell in Debian feels more than pretending,
and from personal experience with the creators of the language, I have
strong doubts that they are Idiots.
In fact I don’t see how you can have modern features like
cross-module-inlining without having to potentially recompile depending
packages.
And it is clearly not (anymore) the case that ABIs are not handled in
Haskell. In fact they are handled in a very precise way that allows us
to guarantee on the package level that user’s installations won’t get
broken. I think our priorities should be the user’s experience, and we
should be willing to accept a little infrastructural complication on
our side for that goal.
It's not a matter of "a little infrastructural complication", it's about
having the slightest chance of reasonable security support -- or even
regular bug fixes, when multiple layers of libraries are involved.

If there is a bug in library A, if you use static linking, you need to
rebuild every single library B that uses A, then rebuild every C that uses
B, then finally every single package in the archive that uses any of these
libraries.

Just imagine what would happen if libc6 would be statically linked, and a
security bug happens inside (like, in the stub resolver). Rebuilding the
world on every update might be viable for a simple scientific task[1], but
not at all for a distribution.

Static linking also massively increases memory and disk use; this has
obvious performance effects if there's more than one executable running
on the system.


[1]. Simple as for the number of diverse packages/systems involved.
--
ᛊᚨᚾᛁᛏᚤ᛫ᛁᛊ᛫ᚠᛟᚱ᛫ᚦᛖ᛫ᚹᛖᚨᚲ
Joachim Breitner
2013-02-05 16:40:01 UTC
Permalink
Hi,
Post by Adam Borowski
It's not a matter of "a little infrastructural complication", it's about
having the slightest chance of reasonable security support -- or even
regular bug fixes, when multiple layers of libraries are involved.
If there is a bug in library A, if you use static linking, you need to
rebuild every single library B that uses A, then rebuild every C that uses
B, then finally every single package in the archive that uses any of these
libraries.
Just imagine what would happen if libc6 would be statically linked, and a
security bug happens inside (like, in the stub resolver). Rebuilding the
world on every update might be viable for a simple scientific task[1], but
not at all for a distribution.
why not?

I agree that it is not desirable, but it is possible, and if it were
easily possible (e.g. with little human interaction), that would be an
indicator of a very good infrastructure.

And in fact we have it: binNMUs and buildds enable us to rebuild large
parts¹ of the distribution after such a change (otherwise, maintaining
Haskell wouldn’t be feasible).
Post by Adam Borowski
Static linking also massively increases memory and disk use; this has
obvious performance effects if there's more than one executable running
on the system.
True, static linking has its disadvantages. But this is unrelated to the
problem of supporting languages with often-changing ABIs.

Greetings,
Joachim

¹ I do it regularly for more than 2.5% of all our source packages, and –
besides machine power – I see no reason why this would be impossible for
larger fractions of the archive.
--
Joachim "nomeata" Breitner
Debian Developer
***@debian.org | ICQ# 74513189 | GPG-Keyid: 4743206C
JID: ***@joachim-breitner.de | http://people.debian.org/~nomeata
Don Armstrong
2013-02-06 02:20:02 UTC
Permalink
Post by Joachim Breitner
Post by Adam Borowski
Just imagine what would happen if libc6 would be statically
linked, and a security bug happens inside (like, in the stub
resolver). Rebuilding the world on every update might be viable
for a simple scientific task[1], but not at all for a
distribution.
I agree that it is not desirable, but it is possible, and if it
were easily possible (e.g. with little human interaction), that
would be an indicator of a very good infrastructure.
It's certainly possible, and our infrastructure enables us to even
consider doing it, but it's still a design flaw in the underlying
languages and their interfaces that should be addressed and resolved.

We need to be better at communicating, to upstreams who design and
maintain packages that do these kinds of things, the problems that this
causes for people who want to use their packages in stable production
systems.


Don Armstrong
--
I really wanted to talk to her.
I just couldn't find an algorithm that fit.
-- Peter Watts _Blindsight_ p294

http://www.donarmstrong.com http://rzlab.ucr.edu
Hilko Bengen
2013-02-05 23:00:01 UTC
Permalink
Post by Adam Borowski
If there is a bug in library A, if you use static linking, you need to
rebuild every single library B that uses A, then rebuild every C that
uses B, then finally every single package in the archive that uses any
of these libraries.
But wouldn't it be great if all the people who have for years been wishing
for something that combines the strengths of Debian's and Gentoo's ideas
of software packaging finally got the system they wanted, as a side effect
of automating such rebuilds?

Cheers,
-Hilko
Enrico Tassi
2013-02-06 14:50:02 UTC
Permalink
Post by Joachim Breitner
At least to me my work on Haskell in Debian feels more than pretending,
and from personal experience with the creators of the language, I have
strong doubts that they are Idiots.
They are not; they are very smart, but they are academic people
with very little idea of what it means to build a real software
distribution like Debian. I love type systems too, but whenever I talk
about static linking in an academic context, they think it just
saves a few KB of disk space.

I can't say the same about Go's designers, but for different reasons
they ended up with the same design flaw. They want a binary that can be
shipped to their production servers with the assurance that, no matter
how crappy those servers are, it will work (no missing .so, no .so ABI
problem). Or at least, this is what I've understood.

Here we build a realistic system where a security update can be
made just by recompiling and pulling one package, not by recompiling the
whole archive. Static linking forces the latter.

To me it is just a problem of scale. You see the problems of static
linking only on a reasonably large scale. And Debian is huge...

Ciao
--
Enrico Tassi
Florian Weimer
2013-02-07 12:10:02 UTC
Permalink
Post by Hilko Bengen
I drew a different conclusion from Ian's messages the thread you
mentioned (see the quotes below). Apparently, one *can* build shared
libraries using gccgo, but they are not currently usable using dlopen().
My impression was that this means that regular use of shared libraries
*is* possible with gccgo.
The problem with shared libraries in Go is that the API guarantee for
Go itself allows changes which break the shared library ABI, such as
adding struct fields (thus changing struct sizes and offsets). Sizes
and offsets are compiled directly into the code, without a relocation.
Even with shared libraries, we will have to recompile all dependent
packages in many cases.

This binary compatibility issue was addressed in the Java front end,
which had similar issues.

This is not unlike C or C++, of course, but some library authors there
have a more stringent attitude towards ABI compatibility and build API
change guidelines based on that. (Technically, we even have to
recompile all library packages when we make major changes to eglibc
because the static libraries are tied to a very specific eglibc
version because the symbols aren't bound to versions at that point.
So we probably shouldn't complain too loudly about other languages not
getting this completely right.)

Fedora hasn't got a solution for this, either, FWIW. OpenSUSE seems
to have support in their build infrastructure for soname bumps, which
could be used for this as well.
Hilko Bengen
2013-02-02 20:20:02 UTC
Permalink
Michael,

after re-reading your GoPackaging Wiki page and digging around a bit in
the sources for the go tool, I think that only minimal changes to go(1)
are needed for its build system to "Just Work" without breaking
expectations:

1. Debian user's/admin's perspective: As I have already written
elsewhere in this thread, using upstream build systems without extra
parameters usually results in libraries and binaries being installed to
/usr/local. This is not what "sudo go get" currently (version 1:1.0.2-2)
does -- it happily puts both source and binary files into GOROOT
(=/usr/lib/go). This is bound to break things in interesting ways at
some point.

One way to fix this would be changing the default GOPATH setting from an
empty string to a path that has "/usr/local/lib/go" as its first entry.

Another quick solution might be changing go(1)'s behavior so that it
will still look for libraries in GOROOT, just not attempt to modify
files there.
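
To illustrate the first variant (simulated here via the environment;
untested, and the real change would be a different built-in default):

  sudo GOPATH=/usr/local/lib/go go get code.google.com/p/codesearch/cmd/cindex
  # sources, objects and the binary end up under /usr/local/lib/go,
  # while GOROOT (/usr/lib/go) is left untouched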

2. Software developer's perspective: Since the text "How to Write Go
Code" at http://golang.org/doc/code.html tells me right away that
setting GOPATH is something I need to do, changing the GOPATH default
value should not cause too much confusion.

3. Debian package maintainer's perspective: I disagree with your
position that "Go libraries [...] should not be used directly for Go
development". If I put together a Debian package for a Go library,
developers should still have the option to use it. And it should be
easy, preferably be installed into the default GOPATH, so a second entry
(after /usr/local/lib/go) does not seem unreasonable.

Upstream has not made it clear whether third-party libraries should be
installed into GOROOT or somewhere else. There still are a few
unanswered questions about how go(1) should behave, and what bugs are
still lurking in that code. Perhaps upstream will change go(1) so that a
concept of third-party libraries is possible. Until then, putting
libraries into a separate directory such as /usr/lib/gocode seems like a
good idea.
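
Direct usage could then look roughly like this (assuming the packaged
sources end up under /usr/lib/gocode/src/...; the "myindexer" directory
is made up):

  export GOPATH=$HOME/go:/usr/lib/gocode
  # code in $HOME/go/src/myindexer can then simply
  #     import "code.google.com/p/codesearch/index"
  cd $HOME/go/src/myindexer && go build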

But what about gccgo? Apparently, go(1) can be used with gccgo, but
gccgo-compiled code cannot be linked with golang-go-compiled code. So,
perhaps it makes sense to install the libraries produced by each
compiler into separate directory hierarchies -- /usr/lib/gocode/golang
and /usr/lib/gocode/gccgo?

Cheers,
-Hilko
Michael Stapelberg
2013-02-04 22:50:02 UTC
Permalink
Hi Hilko,
Post by Hilko Bengen
/usr/local. This is not what "sudo go get" currently (version 1:1.0.2-2)
does -- it happily puts both source and binary files into GOROOT
(=/usr/lib/go). This is bound to break things in interesting ways at
some point.
Indeed. This is with an empty GOPATH, right?
Post by Hilko Bengen
One way to fix this would be changing the default GOPATH setting from an
empty string to a path that has "/usr/local/lib/go" as its first entry.
Another quick solution might be changing go(1)'s behavior so that it
will still look for libraries in GOROOT, just not attempt to modify
files there.
I prefer the latter. We should ask upstream about their point of view on
this first, though.
Post by Hilko Bengen
3. Debian package maintainer's perspective: I disagree with your
position that "Go libraries [...] should not be used directly for Go
development". If I put together a Debian package for a Go library,
developers should still have the option to use it. And it should be
easy, preferably be installed into the default GOPATH, so a second entry
(after /usr/local/lib/go) does not seem unreasonable.
Upstream has not made it clear whether third-party libraries should be
installed into GOROOT or somewhere else. There still are a few
unanswered questions about how go(1) should behave, and what bugs are
still lurking in that code. Perhaps upstream will change go(1) so that a
concept of third-party libraries is possible. Until then, putting
libraries into a separate directory such as /usr/lib/gocode seems like a
good idea.
I am slightly confused. First you say you disagree, but then you admit
that there still are problems and agree with regards to placing
libraries into /usr/lib/gocode after all? :-)
Post by Hilko Bengen
But what about gccgo? Apparently, go(1) can be used with gccgo, but
gccgo-compiled code cannot be linked with golang-go-compiled code. So,
perhaps it makes sense to install the libraries produced by each
compiler into separate directory hierarchies -- /usr/lib/gocode/golang
and /usr/lib/gocode/gccgo?
I’d say it makes more sense to pick one compiler instead.
--
Best regards,
Michael
Hilko Bengen
2013-02-05 00:20:01 UTC
Permalink
Post by Michael Stapelberg
Post by Hilko Bengen
/usr/local. This is not what "sudo go get" currently (version 1:1.0.2-2)
does -- it happily puts both source and binary files into GOROOT
(=/usr/lib/go). This is bound to break things in interesting ways at
some point.
Indeed. This is when having an empty GOPATH, right?
Right. And if I have interpreted the code correctly, "go get -u
<package>" will at the moment try to update the first occurrence of the
package that it finds in $GOROOT or $GOPATH.
Post by Michael Stapelberg
Post by Hilko Bengen
One way to fix this would be changing the default GOPATH setting from an
empty string to a path that has "/usr/local/lib/go" as its first entry.
Another quick solution might be changing go(1)'s behavior so that it
will still look for libraries in GOROOT, just not attempt to modify
files there.
I prefer the latter. We should ask upstream about their point of view on
this first, though.
Maybe they can also be persuaded to split the two roles that have been
assigned to GOPATH into separate variables: a search path and an
installation target.
Post by Michael Stapelberg
Post by Hilko Bengen
3. Debian package maintainer's perspective: I disagree with your
position that "Go libraries [...] should not be used directly for Go
development". If I put together a Debian package for a Go library,
developers should still have the option to use it. And it should be
easy, preferably be installed into the default GOPATH, so a second entry
(after /usr/local/lib/go) does not seem unreasonable.
Upstream has not made it clear whether third-party libraries should be
installed into GOROOT or somewhere else. There still are a few
unanswered questions about how go(1) should behave, and what bugs are
still lurking in that code. Perhaps upstream will change go(1) so that a
concept of third-party libraries is possible. Until then, putting
libraries into a separate directory such as /usr/lib/gocode seems like a
good idea.
I am slightly confused. First you say you disagree, but then you admit
that there still are problems and agree with regards to placing
libraries into /usr/lib/gocode after all? :-)
I may have misunderstood what you meant by "used directly for Go
development". Ah well, let me try again: I would very much like to see a
number of useful Go library packages distributed by Debian that:

- make the process for developing or building executables (and other
libraries) as easy as or easier than using "go get". (This is what I
understand as direct usage.)

- make it possible and easy to package such executables and libraries
for Debian. Specific DBMS support packages for the database/sql
interface come to mind...

I agree with putting libraries into a directory separate from $GOROOT
because I consider it the lesser evil as long as the go(1) questions
aren't sorted out.
Post by Michael Stapelberg
Post by Hilko Bengen
But what about gccgo? Apparently, go(1) can be used with gccgo, but
gccgo-compiled code cannot be linked with golang-go-compiled code. So,
perhaps it makes sense to install the libraries produced by each
compiler into separate directory hierarchies -- /usr/lib/gocode/golang
and /usr/lib/gocode/gccgo?
I’d say it makes more sense to pick one compiler instead.
To get things started with as little effort as possible, pick one
compiler and package some libraries and executable packages, such as
codesearch. I'm all in favor of this compiler being golang-go.

It will also make sense to support gccgo[1] at some point -- if only for
portability across all Debian platforms. Picking an extensible directory
naming scheme (such as /usr/lib/gocode/$IMPL) would make this possible
from the start.

By the way: Where does the idea for the "gocode" directory name come
from?

Cheers,
-Hilko

[1] and possibly other implementations that may come with yet different
incompatible binary interfaces...