Path: chuka.playstation.co.uk!argonet.co.uk!argbc08
From: R Fred Williams
Newsgroups: scea.yaroze.programming.3d_graphics,scee.yaroze.programming.3d_graphics
Subject: Re: Camera math
Date: Wed, 03 Feb 1999 20:37:29 GMT
Organization: ArgoNet, but does not reflect its views
Lines: 87
Distribution: world
Message-ID:
References: <799tkt$lbf5@scea>
Reply-To: R Fred Williams
NNTP-Posting-Host: userp274.uk.uudial.com
X-Newsreader: NewsAgent 0.85 for RISC OS
X-NNTP-Poster: NewsHound v1.37ß2
Xref: chuka.playstation.co.uk scea.yaroze.programming.3d_graphics:341 scee.yaroze.programming.3d_graphics:1182

In article <799tkt$lbf5@scea>, "Ed Federmeyer" wrote:

> I have read a lot of books and online documentation describing all the 3D
> matrix math to transform object-space coordinates to world-space
> coordinates to screen-space coordinates, but they ALL ASSUME that the
> camera is at the origin facing down the "Z" axis.
>
> Can anyone point me to some documentation that describes what needs to be
> done to have the camera at an arbitrary location facing in an arbitrary
> direction?

Well, it's just another case of "run the vector through a 3*3 matrix and
add an offset". The only real difference between a "world space to camera
space" matrix and an "object space to world space" one is that you
generally want the transformations to happen in the opposite order.

Think of the "world" as just another level in the object hierarchy, one
that has a position & orientation relative to the camera. That position &
orientation is, obviously, the inverse of the camera's position &
orientation WRT the world.

(And after that, all that's left to do is the perspective transform:
camera-space to screen-space, if you like.)

Say you've got an object, and are doing things with roll/pitch/yaw angles.
(For true 3D work this isn't always the best way to go, but anyway...)

Your "object-to-world" coordinate transformation will usually be:

  A z-axis rotation (roll),
  then an x-axis rotation (pitch),
  then a y-axis rotation (yaw)
  (combine the above into a single 3*3 matrix),
  then add an offset (the object's position).

Now, if you've got a camera, also with a position, roll, pitch and yaw:

  Take your world coordinate.
  Subtract the camera's position.
  Rotate in the y-axis by minus the camera's yaw.
  Rotate in the x-axis by minus the camera's pitch.
  Rotate in the z-axis by minus the camera's roll.

And, hurrah, you've got your object coordinate in terms of "z is
camera-relative forwards".

Same thing with matrices. To get a "world to camera" matrix-and-offset
pair in the same arrangement as an "object to world" one:

  Take your "camera-to-world" 3*3 matrix (ie, one that treats the camera
  as if it were an object).
  Invert it. (Shortcut: the inverse of a non-scaling rotation matrix is
  just its transpose, for reasons that become *so* obvious once you've got
  your head round what a rotation matrix is.)
  Run minus the camera's position through the resulting 3*3 matrix, giving
  you a post-rotation offset rather than a pre-rotation one.

Ta da!
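If it helps to see that as code, here's a minimal sketch of both recipes in
plain C with floats (not the Yaroze fixed-point MATRIX type - the struct
and function names are all just made up for the example, and it assumes
column vectors with m[row][col] layout):

#include <math.h>

typedef struct { float m[3][3]; } Mat3;
typedef struct { float x, y, z; } Vec3;

/* Object/camera-to-world rotation: roll about z, then pitch about x,
   then yaw about y, i.e. R = Ry(yaw) * Rx(pitch) * Rz(roll). */
static Mat3 rot_from_rpy(float roll, float pitch, float yaw)
{
    float cr = (float)cos(roll),  sr = (float)sin(roll);
    float cp = (float)cos(pitch), sp = (float)sin(pitch);
    float cy = (float)cos(yaw),   sy = (float)sin(yaw);
    Mat3 r;

    r.m[0][0] =  cy*cr + sy*sp*sr;  r.m[0][1] = -cy*sr + sy*sp*cr;  r.m[0][2] =  sy*cp;
    r.m[1][0] =  cp*sr;             r.m[1][1] =  cp*cr;             r.m[1][2] = -sp;
    r.m[2][0] = -sy*cr + cy*sp*sr;  r.m[2][1] =  sy*sr + cy*sp*cr;  r.m[2][2] =  cy*cp;
    return r;
}

static Mat3 transpose(Mat3 a)
{
    Mat3 t;
    int i, j;
    for (i = 0; i < 3; i++)
        for (j = 0; j < 3; j++)
            t.m[i][j] = a.m[j][i];
    return t;
}

static Vec3 mul(Mat3 a, Vec3 v)
{
    Vec3 r;
    r.x = a.m[0][0]*v.x + a.m[0][1]*v.y + a.m[0][2]*v.z;
    r.y = a.m[1][0]*v.x + a.m[1][1]*v.y + a.m[1][2]*v.z;
    r.z = a.m[2][0]*v.x + a.m[2][1]*v.y + a.m[2][2]*v.z;
    return r;
}

/* World-to-camera:  v_cam = Rcam^T * (v_world - cam_pos)
   which is the same as  Rcam^T * v_world + (Rcam^T * -cam_pos),
   ie the transposed rotation plus a post-rotation offset. */
static Vec3 world_to_camera(Vec3 v_world, Mat3 cam_rot, Vec3 cam_pos)
{
    Mat3 inv = transpose(cam_rot);   /* inverse of a pure rotation */
    Vec3 d;
    d.x = v_world.x - cam_pos.x;
    d.y = v_world.y - cam_pos.y;
    d.z = v_world.z - cam_pos.z;
    return mul(inv, d);
}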
> Luckily our Net Yaroze libs take care of this with the SetRView()
> function, where you specify the viewpoint and refpoint, but I'd really
> like to understand what goes on behind the scenes.

It's just a matter of getting the hang of what matrices *are*. In the case
of rotation matrices, they're just a representation of one set of xyz axes
within another set. Eg, the first "row" of an object-to-world rotation
matrix *IS* the vector representing the object's x-axis in world space, the
2nd row's the y-axis, and the 3rd row's the z.

In terms of GsRVIEW2, the camera matrix's 3rd row (or is that "column"? I
can't be bothered working out which just now!) will be a normalised
"reference - viewpoint" vector, and the other rows will be a pair of
vectors at right angles to that, set according to the "twist" input.

HTH,
Fred  (Yaroze /~RFREDW)
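PS: in the same hand-rolled float style (definitely not claiming this is
what the library does internally - the function name and the choice of "up"
vector here are just assumptions for the sketch), a zero-twist camera
rotation built from a viewpoint and a refpoint might look something like:

#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 v_sub(Vec3 a, Vec3 b)
{
    Vec3 r;
    r.x = a.x - b.x; r.y = a.y - b.y; r.z = a.z - b.z;
    return r;
}

static Vec3 v_cross(Vec3 a, Vec3 b)
{
    Vec3 r;
    r.x = a.y*b.z - a.z*b.y;
    r.y = a.z*b.x - a.x*b.z;
    r.z = a.x*b.y - a.y*b.x;
    return r;
}

static Vec3 v_norm(Vec3 a)
{
    float len = (float)sqrt(a.x*a.x + a.y*a.y + a.z*a.z);
    Vec3 r;
    r.x = a.x/len; r.y = a.y/len; r.z = a.z/len;
    return r;
}

/* World-to-camera rotation from viewpoint/refpoint, twist = 0.
   Each row is one of the camera's axes expressed in world space;
   the 3rd row is the normalised "reference - viewpoint" vector.
   (Doesn't handle looking straight along the up vector.) */
static void camera_from_ref(Vec3 viewpoint, Vec3 refpoint, float rot[3][3])
{
    Vec3 up = { 0.0f, -1.0f, 0.0f };   /* y-down world; flip to suit your convention */
    Vec3 z  = v_norm(v_sub(refpoint, viewpoint));   /* forward */
    Vec3 x  = v_norm(v_cross(up, z));                /* right   */
    Vec3 y  = v_cross(z, x);                         /* completes the right-angle set */

    rot[0][0] = x.x; rot[0][1] = x.y; rot[0][2] = x.z;
    rot[1][0] = y.x; rot[1][1] = y.y; rot[1][2] = y.z;
    rot[2][0] = z.x; rot[2][1] = z.y; rot[2][2] = z.z;
}

A non-zero twist would then just rotate the x and y rows about z.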