Path: chuka.playstation.co.uk!news
From: gil@snsys.com (Gil Jaysmith)
Newsgroups: scea.yaroze.problems.pc
Subject: Re: long long ints for improved accuracy
Date: Wed, 17 Dec 1997 11:58:10 GMT
Organization: SN Systems
Lines: 61
Message-ID: <3497b487.81595878@news.playstation.co.uk>
References: <673eim$9k36@chuka.playstation.co.uk> <6740vu$9k39@chuka.playstation.co.uk> <676jav$9k310@chuka.playstation.co.uk>
Reply-To: gil@snsys.com
NNTP-Posting-Host: trish.snsys.com
X-Newsreader: Forte Free Agent 1.11/32.235

"Graeme Evans" wrote:

>That's an incredibly useful way for a linker to behave. Who was
>complaining about crap tools?
>I take it the linker just links in the bits of libgcc that you
>actually use?

Good linkers should only ever load in the individual object modules
which are actually referenced from libraries. A linker combines all
your object files with anything else required from any of the named
libraries, in the order they're mentioned. The unresolved references in
each of your object files are resolved by looking first in the other
object files and then in the symbol index tables of the libraries you
specify. If a module in a library contains the symbol you need to
resolve the hanging reference, that module is read from the library and
added to the code image, and its own unresolved references get added to
the resolution list, possibly dragging in further library modules. But
the linker won't add every module from every library just because
you've named the libraries on the command line.

This is why the granularity of the allocation of library functions to
object files is important, especially on targets like the PlayStation:
you don't want to have to link in 50K of code just to get a 4K routine
if the other 46K never gets called. You need either a linker which
performs static coverage analysis and strips dead code (Watcom does
this; VC++ might do it; our new one will) or a library built with only
one function per object file. (There's a toy sketch of the latter
further down.)

Linking with one-to-one granularity takes a little longer, since you
don't get accidentally useful library loads and you have more section
fragments to position in the memory map, but the end result is
generally a smaller image, so it's worth it.

The problem with this desirable state of affairs is, of course, that
the Net Yaroze build of GNU ld doesn't do it like that at all. libps.a
is only 40K because all it contains is a large number of absolute
addresses under the names of the library functions. Any reference to
one of these gets replaced with that address. And the whole of the
actual libps code (libps.exe in the mon directory on the boot CD, I'd
warrant) is downloaded to a fixed address in RAM when you run the Net
Yaroze. But the link itself is handled through totally standard GNU ld
operation, so custom libraries such as the proposed libgcc2.a should
work as above.

An interesting - if totally illegal - project would be to undo this
lamentably hardcoded configuration, which might or might not be
possible depending on the structural information remaining in
libps.exe. My guess would be that there isn't any and it isn't
possible; we don't handle that format ourselves, and I can't be
bothered to look it up on DTS, but I expect it's just a raw binary
output with what looks like a 2K header on the front for position
control. I probably don't need to add that if it were possible, even
the vaguest attempt would break your Net Yaroze user licence at
something approaching the speed of light.
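Since the one-function-per-object point keeps coming up, here's a
throwaway sketch of what it buys you. (The file names, function names
and the toy maths are all mine, invented for the demo - don't go
looking for them in any real library.)

    /* sin_table.c -- one routine per source file, hence per object */
    double table_sin(double x)
    {
        return x - (x * x * x) / 6.0;   /* crude series, demo only */
    }

    /* cos_table.c -- ditto; never referenced by main.c below */
    double table_cos(double x)
    {
        return 1.0 - (x * x) / 2.0;
    }

    /* main.c -- references table_sin and nothing else */
    extern double table_sin(double x);

    int main(void)
    {
        return table_sin(0.5) > 0.0 ? 0 : 1;
    }

    /* Build and link:
     *   gcc -c sin_table.c cos_table.c main.c
     *   ar rcs libtab.a sin_table.o cos_table.o
     *   gcc main.o -L. -ltab
     * The linker sees main.o's unresolved reference to table_sin,
     * finds it in the archive's symbol index, and loads that one
     * module. cos_table.o never leaves the library - run nm on the
     * image and table_cos is nowhere to be seen.
     */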
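And the libps.a trick of binding a name to a fixed address is nothing
exotic, by the way. You can fake the same effect with bog-standard GNU
ld's --defsym switch - not how Sony actually built libps.a, just the
same idea driven from the command line. The symbol name and address
below are invented for illustration:

    /* resident_call.c -- calling a routine that already lives in RAM.
     * There's no body for ps_resident_routine anywhere; its address
     * is supplied at link time:
     *   gcc -c resident_call.c
     *   gcc resident_call.o -Wl,--defsym,ps_resident_routine=0x80092000
     * This links fine, but only makes sense on a machine which really
     * does have code sitting at that address - which is exactly the
     * Yaroze situation once libps.exe has been downloaded.
     */
    extern int ps_resident_routine(int mode);

    int main(void)
    {
        return ps_resident_routine(0);
    }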
>If someone builds all of libgcc, I guess that means we get sscanf as
>well; I seem to remember someone asking for that, too. *There's* a
>nice wee project.

Afraid not, squire. sscanf isn't in libgcc2, which is the module you
need to build to get the math functions. It's most likely in libc, a
separate GNU distribution containing the entire standard C library,
which is a complete mofo to build.

- Gil