[NTLUG:Discuss] Upgrade breaks things
brian@pongonova.net
Fri Apr 12 23:49:11 CDT 2002
On Fri, Apr 12, 2002 at 06:46:13PM -0500, Steve Baker wrote:
> However, consider a heavily class-laden C++ program. If the library's
> header file says something like:
>
> class X
> {
> public:
>
> virtual int abc () ;
> virtual int def () ;
> } ;
>
> ...and the application does nice object-oriented things like:
>
> class X *my_X = get_me_an_X_please () ;
>
> my_X -> def () ;
>
>
> ...then that program will break if a subsequent version of the
> library adds:
>
> class X
> {
> public:
>
> virtual int abc () ;
> virtual int xyz () ;
> virtual int def () ;
> } ;
>
> ...because the location of 'X::def' in the virtual function table
> will have changed. Unless the application is recompiled against
> the new header files, you'll be in deep trouble.
This is easily solved by simply declaring the virtual functions as pure virtual,
and requesting the interface pointer from the library itself:
class X
{
public:
virtual int abc () = 0 ;
virtual int def () = 0 ;
} ;
extern "C"
X *CreateX();
The vtbl for the interface lives entirely inside the library -- the header
produces nothing but pure-virtual slots, which end up as null pointers (or
pointers to a stub routine, depending on the compiler) -- so the client is never
compiled against the implementation's layout. As long as new virtual functions
are appended to the end of the published interface, rather than inserted into
the middle of it, adding a function doesn't force a re-compilation of the
client.
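For concreteness, here's a minimal sketch of what the library side might look
like (ConcreteX, the file name, and the return values are all made up purely
for illustration):

// libx.cpp -- hypothetical implementation, private to the shared library.
// In real life the interface declaration below would live in a header (x.h)
// shared with clients; the concrete class and its vtbl never leave the library.

class X
{
public:

  virtual int abc () = 0 ;
  virtual int def () = 0 ;
} ;

class ConcreteX : public X
{
public:

  virtual int abc () { return 1 ; }   // made-up behaviour
  virtual int def () { return 2 ; }
} ;

extern "C" X *CreateX ()
{
  return new ConcreteX ;
}

Built with something like 'g++ -shared -fPIC -o libx.so libx.cpp', that's the
entire implementation side.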
If this sounds like COM, it is -- one of the few things Microsoft has ever managed
to get right. It's too bad many of the C++ implementations on Unix don't follow
this same model, or we *would* have true backwards-compatibility by completely
separating interface from implementation.
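A client compiled purely against the interface might then look something like
this -- again just a sketch, using dlopen()/dlsym() as one way of getting at
CreateX() without ever linking against the implementation (linking against
libx.so directly would work just as well):

// client.cpp -- hypothetical client, built against nothing but the interface.

#include <dlfcn.h>
#include <stdio.h>

class X                 // normally pulled in from the shared header x.h
{
public:

  virtual int abc () = 0 ;
  virtual int def () = 0 ;
} ;

int main ()
{
  void *lib = dlopen ( "./libx.so", RTLD_NOW ) ;

  if ( lib == NULL )
  {
    fprintf ( stderr, "dlopen: %s\n", dlerror () ) ;
    return 1 ;
  }

  X *(*create)() = (X *(*)()) dlsym ( lib, "CreateX" ) ;
  X *my_X = create () ;

  printf ( "def() returned %d\n", my_X -> def () ) ;

  dlclose ( lib ) ;
  return 0 ;
}

(Build it with something like 'g++ client.cpp -ldl'. A real interface would
also want a virtual destructor or a matching DestroyX() so the client can
safely release the object -- COM handles this with Release() -- but I've left
that out to keep the example short.)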
In any case, re-compilation of code is to me an acceptable price to pay for
upgrading to a new shared library. I don't have a problem with that. The
backwards-compatibility issue is the big thorn for me.
> You may argue that it's simply inappropriate to use C++ class objects in
> the interfaces of public libraries - but it would be hard to implement
> something like my PLIB scene graph library without things like inheritance
> and virtual member functions.
Not having seen your code, I really couldn't say.
> > What I'm seeing more of lately, especially in stalwart low-level libs like glibc
> > and zlib, is the *addition* of interfaces, along with the deprecation of others,
> > which simply renders older code uncompilable against the new interface. This is an
> > issue of poor/inadequate planning, as a new contingent of developers
> > with ideas that are different from the original lib developers seek to change the
> > interfaces to better align them with their beliefs of what the interfaces should
> > really look like. I've come across this mindset in a number of GNU-based
> > libs.
>
> Well, if you want progress, sometimes things have to change.
This, unfortunately, is the very attitude adopted by the developers who are
breaking backwards compatibility. Change is all well and good -- *if* change
is needed, and *if* there are good reasons for it. Consider glibc: glibc2 was
supposed to herald the "new" glibc, a radical departure from previous glibc libs.
All well and good -- I waited a while until the libs were stable, then downloaded
the 2.0.7 versions.
Well, guess what? The interfaces are *still* changing! Even between minor
releases, the interfaces keep changing. Here's an example:
Version 2.2.2
* Lots of headers were cleaned up. Using the tool in the conform/ subdir
we can now check for namespace violations and missing declarations. The result
is that almost all headers are now Unix-compliant (as defined in the upcoming
XPG6). The negative side is that some programs might need corrections, too, if
they depend on the incorrect form of the headers in previous versions which
defined too many symbols and included too many other headers.
This is simply bad planning, *not* a result of progress. Inexcusable bad planning,
I might add. Not only did they "clean up" many of the headers, they simply ignored
the effects on clients that may have depended on the "pre-cleaned" headers.
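Just to make concrete what that kind of "cleanup" does to a client, here's a
made-up example (whether any particular glibc release actually behaved this way
for these two headers isn't the point -- this is just the shape of the
breakage):

// breakme.cpp -- hypothetical victim of a header cleanup.  Suppose an older
// <stdio.h> happened to drag in <string.h>: this compiles cleanly against the
// old headers, then stops compiling the moment <stdio.h> is trimmed down to
// include only what it must.  The fix is a one-line #include, but somebody
// still has to notice, patch, and re-release every client that relied on it.

#include <stdio.h>    // used to (hypothetically) pull in <string.h> as well

int main ()
{
  const char *s = "hello" ;

  printf ( "%d\n", (int) strlen ( s ) ) ;   // now needs an explicit <string.h>
  return 0 ;
}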
> Some of the things in the original UNIX standard C library were just *NOT*
> well thought out. Other libraries are similarly problematic - no developers
> are perfect - none have 20:20 foresight - API's simply have to change
> sometimes. You'd hope that it would be done SLOWLY so that applications
> become obsolete before they cease to function - but that's not always
> possible either.
Been there as well...my point is that there seems, to me at least, to be little
regard in the Linux world for the effect of interface changes. This is based on
my observations of several years of progress (at least since 1996). I've noticed
it has now become de rigueur to close out bugs in many open-source projects with
the notation "old libs/incompatible libs." In many cases, this is clearly a
cop-out -- the developers simply didn't want to invest the additional time needed
to accommodate changing interfaces, instead opting for the "tough luck for not
having the latest/greatest libraries" excuse.
> I think the biggest problem is actually the situation with the C/C++
> compiler. The fiasco with RedHat's release of 2.96 and 2.97 of GCC
> right before the big changeover to GCC 3.0 meant that we have TWO
> binary incompatibilities where only one was really needed.
Of course, I would (flippantly) argue RH is a very good reason *against*
upgrading to the latest/greatest distros. It was a very stupid move by RH, and
one that gives me even more reason to seek out alternative distros.
--Brian