Path: chuka.playstation.co.uk!scea!greg_labrec@interactive.sony.com
From: Ed Federmeyer
Newsgroups: scea.yaroze.programming.2d_graphics
Subject: More GsBG questions
Date: Thu, 24 Apr 1997 02:16:48 -0500
Organization: (no organization)
Lines: 317
Message-ID: <335F08E0.7758@charlie.cns.iit.edu>
Reply-To: fedeedw@charlie.cns.iit.edu
NNTP-Posting-Host: charlie.cns.iit.edu
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Mailer: Mozilla 3.0C-GZone (Win95; I)

Okay, I've done some more experimenting with the order in which I call the
GsSort() routines, etc.  It's working a lot better now, thanks Mario!  BUT
I've come across even more things I don't understand:

1) Why are 4-bit CLUT images reversed?  If you look at the image definitions
   for my tile1_image_data[] array, you'll see that I needed to do some weird
   byte-swapping to get the image to display the way I wanted it to.  I
   started experimenting with 15-bit direct images (with sprites), and those
   were stored intuitively, with no byte-swapping required.  (There's a small
   packing sketch further down showing what I think is going on.)

2) Performance: Is there a general difference in performance between 4-bit
   CLUT, 8-bit CLUT, 15-bit direct, and 24-bit direct images?  Just a
   generalization is all I'm after at this point.  I would assume that the
   CLUT images are slower, since the routines or the GPU must translate the
   CLUT indices into real RGB values before the image is painted into the
   display buffer.

3) You'll notice in the attached code that when I define the second cell,
   whose texture LoadImage() puts right next to the texture of the first
   cell, I need to set its ".u" value to 16, even though it is physically
   only 4 pixels to the right of the first cell in VRAM.  Apparently the
   routines (the GPU?) automatically scale the ".u" member depending on
   whether the texture is 4-bit/8-bit CLUT or 15-bit direct?  Is that right?
   (See the conversion sketch further down.)

4) Why is GsSortClear() somehow different from the other GsSort() functions?
   All the GsSort() functions do is add a command to the list of GPU commands
   in the given OT, right?  They don't actually set the GPU into action,
   right?  Later you tell the GPU to start working on the completed list of
   commands, right?  Well, it's almost as if, when preparing the commands,
   the GsSort() functions don't need to know about the current display buffer
   (the GPU will figure out exactly where in VRAM the drawing commands should
   write when the time comes), but GsSortClear() seems to peek at the current
   display buffer to figure out which VRAM area to create a clear command
   for.  I don't get it.  (The order that works for me is also written out as
   a loop further down.)

       Works:                     Gives black screen:
         Clear OT                   Clear OT
         Call GsSort() funcs        Call GsSort() funcs
         VSync/SwapBuffer           Call GsSortClear()
         Call GsSortClear()         VSync/SwapBuffer
         Draw OT                    Draw OT

5) When I define the second of my double buffers via GsDefDispBuff() to be at
   VRAM y-location 256 (as is done in some examples), rather than
   *immediately* below the first buffer (at 240), the GsSortFixBg16() routine
   draws the BG 16 pixels lower on the odd frames!!!

Below is a more full-blown version of the code I'm testing with.  It puts up
a tiled background with a sprite in front of it.  The way it is now, the
image is "jumpy", because the BG jumps up and down 16 pixels every frame, so
tiles that differ between the two positions flicker.  See the comments for
ways to "fix" the code.  Any ideas why the code (GsSortFixBg16() in
particular) behaves this way?
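For question 1, here is a minimal sketch of the packing I'm assuming is going
on (just my current guess, not something out of the docs): each 16-bit VRAM
word of a 4-bit CLUT texture seems to hold four texels, with the LEFTMOST
texel in the least-significant nibble, which is why hand-typed hex constants
look byte-swapped compared to what shows up on screen.  pack4bit() is a
made-up name, purely for illustration:

    /* Pack four 4-bit CLUT indices into one 16-bit VRAM word.
       p0 is the leftmost pixel on screen; it goes in the LOW nibble. */
    unsigned short pack4bit(unsigned char p0, unsigned char p1,
                            unsigned char p2, unsigned char p3)
    {
        return (unsigned short)( (p0 & 0xF)
                               | ((p1 & 0xF) << 4)
                               | ((p2 & 0xF) << 8)
                               | ((p3 & 0xF) << 12) );
    }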
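For question 3, this is the conversion I believe is happening (again, only a
guess; vram_dx_to_u() is a made-up helper, and vram_dx is the horizontal
distance in 16-bit VRAM pixels from the left edge of the texture page):
LoadImage() works in VRAM pixels, but ".u" is in texels, and one VRAM pixel
holds 4, 2, or 1 texels depending on the color mode.

    /* color_mode: 0 = 4-bit CLUT, 1 = 8-bit CLUT, 2 = 15-bit direct */
    unsigned char vram_dx_to_u(int vram_dx, int color_mode)
    {
        if (color_mode == 0) return (unsigned char)(vram_dx * 4); /* 4 texels per VRAM pixel */
        if (color_mode == 1) return (unsigned char)(vram_dx * 2); /* 2 texels per VRAM pixel */
        return (unsigned char)vram_dx;                            /* 1 texel per VRAM pixel  */
    }

With that, 4 VRAM pixels to the right comes out as u = 16 for a 4-bit
texture, which matches what I had to type in.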
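And for question 4, this is roughly the loop order that works for me (the
"Works" column above), written out.  GpuPacketArea[] just stands in for
whatever packet buffer gets handed to GsSetWorkBase(), and the actual sort
calls are elided; this assumes libps.h and the declarations in the attached
code:

    while (1) {
        int buf = GsGetActiveBuff();

        /* Build this frame's command list into the OT for the active buffer. */
        GsSetWorkBase((PACKET *)GpuPacketArea[buf]);
        GsClearOt(0, 0, &WorldOrderingTable[buf]);

        /* ... GsSortFixBg16(), GsSortSprite(), etc. into WorldOrderingTable[buf] ... */

        DrawSync(0);
        VSync(0);
        GsSwapDispBuff();

        /* Adding the clear command only AFTER the buffer swap is what works;
           calling GsSortClear() before VSync()/GsSwapDispBuff() gives me the
           black screen. */
        GsSortClear(0, 0, 0, &WorldOrderingTable[buf]);
        GsDrawOt(&WorldOrderingTable[buf]);
    }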
Thanks,
   EdF

-------------------- Code for testing strange behavior ---------------------

#include <libps.h>
#include "pad.h"

#define PACKETMAX  (10000)
#define PACKETMAX2 (PACKETMAX*24)
#define OT_LENGTH  (14)

GsOT     WorldOrderingTable[2];
GsOT_TAG zSortTable[2][1<<OT_LENGTH];