No, I haven't actually coded and tried my solution yet, I was just mathematizing off the top of my head. The solution is specifically for the problem of determining how much each hull vertex in a set of hull vertices should be moved so that all of the limit points that go with these are moved by the same amount in the same direction.

- John VanSickle
**Posts:**189**Joined:**Sat Feb 16, 2008 2:17 am

I don't think that this would work, and here's why:

Imagine a large number of vertices, laid out on an evenly spaced grid on the XZ plane (to illustrate this, I've used a b-spline curve, which is basically the 2D equivalent of an SDS):

We now select 5x5 vertices (16 faces) inside that grid and would like to move their limit positions from Y=0 to Y=1.

If we move all selected vertices by (0,1,0), we'd have the inner 9 vertices at the correct position, but the outer rim of our selected patch would be too low:

To compensate, we move the outermost vertices a bit upwards:

That worked, but now the inner 8 vertices are too high, so we move them a bit downwards:

Looks good, but now the innermost vertex is too low, so we move it upwards again:

All limits are at the desired positions, but we've introduced an ugly undulation.

I know that the approach described above would not yield 100% correct limit positions. Solving the linear system as suggested would, but the unwanted undulation would nevertheless remain. IMHO there's absolutely no way of getting rid of it, except by introducing new vertices.
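The effect is easy to reproduce numerically. Below is a minimal sketch (my illustration, not code from the thread) using a 1D uniform cubic B-spline, whose limit position at a control point is the standard (1, 4, 1)/6 stencil of its neighbours: moving five selected control points uniformly leaves the rim limits low, and solving the small linear system for exact limit displacements produces precisely the undulating control offsets described above.

```python
import numpy as np

def limit_positions(y):
    """Limit position at each interior control point of a uniform cubic
    B-spline curve: (y[i-1] + 4*y[i] + y[i+1]) / 6."""
    return (y[:-2] + 4.0 * y[1:-1] + y[2:]) / 6.0

# 11 control points at height 0; select the middle 5 and raise them by 1.
y_uniform = np.zeros(11)
y_uniform[3:8] = 1.0
print(limit_positions(y_uniform))
# The two rim limits only reach 5/6; the three inner ones reach 1.

# Now demand that the limit displacement be exactly 1 at all 5 selected
# points (unselected neighbours stay at 0) and solve for the offsets s:
A = (np.diag(4.0 * np.ones(5))
     + np.diag(np.ones(4), 1)
     + np.diag(np.ones(4), -1)) / 6.0
s = np.linalg.solve(A, np.ones(5))
print(s)  # approx [1.269, 0.923, 1.038, 0.923, 1.269] - the undulation
```

The limits now all sit at the target height, but the control points alternate above and below it, which is exactly the undulation visible in the images.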

The only solution I can think of would be to also put the tangent planes of each vertex into the equation - we'd be looking for a solution that does not modify the tangent planes. Of course, there is none, but a least-squares approach should yield a good approximation (it could even be biased towards tangent-plane or limit accuracy). The problem I see with this approach is that the matrices can get very large once more than a hundred points are selected. With 9 floats per vertex (x/y/z for position, u-tangent and v-tangent), we'd have a 9000x9000 matrix if 1000 control points were selected. As long as it's sparse, there's no problem, but I think that the least-squares approach would yield a non-sparse matrix. If non-sparse, the matrix alone would take about 650 megabytes of RAM, and I guess it would take at least several minutes to run an LU decomposition, unless you've got some peta-FLOPS number cruncher in your garage...

- sascha
- Site Admin
**Posts:**2792**Joined:**Thu May 20, 2004 9:16 am**Location:**Austria

Still haven't run the algorithm yet, so I'm still in Internet Debate Mode, but, again, the solution is for determining how much to scale the translation of each hull point in a group drag so that all limit points move by the same amount. The effect this has on the limit surface wasn't part of the original problem specification. Moving a portion of the limit surface of a manifold, without changing the shape of that portion, is very likely not possible at all without changing the topology of the hull.

Note that with the solution I offered, hull vertices which are entirely surrounded by vertices that are also part of the drag operation will have a scale value of 1, and the other vertices will have a scale value greater than one (I don't see any scale value less than one coming up from this, given the nature of the original matrix). This yields the result you have in the third image (flat in the middle, bulged at the edges in the direction of movement). Granted, the user may not want this.

Trying to think of what the user may find most useful, I would either:

- Find the smallest scale value, and apply that to the translation for all of the hull vertices in the drag operation. This causes at least one final vertex to move with the mouse, but leaves some of the limit vertices dragging behind.
- Forget about scaling the translate values at all and let the user worry about it.

- John VanSickle
**Posts:**189**Joined:**Sat Feb 16, 2008 2:17 am

You could be right, I'm not entirely sure. All I can tell is what I've seen from my patch-model importer - e.g. the ugly undulations on the Moai model - which is a similar, but different problem.

I'm also not quite sure if your solution would actually yield a scale factor of 1 for all "inner" control-points, I'd have to think this through (or better yet, try it out). But I agree that for our application, the scale factor of the inner control-points should be one, which leads to an interesting shortcut:

Why not use a scale factor of 1 for all control points in all cases where there are "inner" control points (i.e. at least a 3x3 patch is selected)? This would result in a normal translation operation without any correction applied. All the inner control points would move as desired, and only the vertices at the rim would lag behind, but we won't get any undulation or unwanted bulges, so it's probably the best we can do.

Whenever a single control point (or multiple isolated ones) is selected, we already have an analytical solution.

What's left are the cases where only a few points are selected (e.g. a single face, or a single edge). In these cases we could solve your linear system to yield exact solutions - there won't be any undulation since the patch is too small for that anyway. Alternatively, we could use the tangent vectors to find the best approximation using least squares; since only a few points are selected, this shouldn't be any problem.

In any case, I think this is a classical trial and error problem.

One related question (I'm not a math expert, so let me know if I'm talking nonsense): Do you have experience with solving least-squares problems? The math package I was using can solve large, square systems when the matrices are sparse (since they can be stored in a memory-efficient way), but it hasn't got any direct support for least squares. As I understand it, in this case I'd have a non-square matrix, and to get the least-squares solution I'd have to multiply it with its transpose. This yields a square matrix which most likely is not sparse, so it requires a lot of memory and time to LU decompose it.

My question is: Is there a way to "run" the least squares algorithm on the original, rectangular (but sparse) matrix? If this could be done efficiently, it would save a lot of memory and perhaps processing time. Does LINPACK or any other package support such things?
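For what it's worth, iterative solvers can do exactly that: LSQR, for example, works on the rectangular matrix directly and only ever needs matrix-vector products, so the dense normal-equations matrix is never formed. A minimal sketch using SciPy (purely illustrative - the matrix here is random, and SciPy is just one modern library that happens to bundle such a solver):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)

# A rectangular, sparse system (scaled down from the 9000x3000 example):
# 900 equations, 300 unknowns, ~1% of the entries non-zero.
A = sparse_random(900, 300, density=0.01, random_state=0, format="csr")
x_true = rng.standard_normal(300)
b = A @ x_true  # consistent right-hand side

# lsqr only evaluates A @ v and A.T @ v, so A stays sparse and the
# (generally dense) 300x300 matrix A.T @ A is never built.
x, istop, itn, r1norm = lsqr(A, b)[:4]
print(r1norm)  # residual norm - near zero for this consistent system
```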

- sascha
- Site Admin
**Posts:**2792**Joined:**Thu May 20, 2004 9:16 am**Location:**Austria

Sorry, but my experience with doing least-squares analysis was with an AmigaBasic program I wrote 15-16 years ago to compute the best-fit polynomial for a set of points in 2D space. Unfortunately, there's a step in the math that I've completely forgotten and will have to spend an hour or so trying to rediscover, so I can't help you there.

However, the size of your matrix is determined by the number of coefficients you are trying to fill into the function that needs to fit your data set. The data set can be larger than the number of coefficients, and in fact can be much larger (but NOT smaller), without affecting the size of the matrix to be crunched.
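A quick sketch of that point (my example, not John's original program): fitting a cubic to 500 samples still only requires decomposing a 4x4 system, because the normal-equations matrix has one row and column per coefficient, not per data point.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 500)                  # 500 data points
y = 2.0 - x + 0.5 * x**3 + 0.01 * rng.standard_normal(500)

# Design matrix: 500 rows (one per point), 4 columns (one per coefficient).
V = np.vander(x, 4, increasing=True)             # columns: 1, x, x^2, x^3
# Normal equations: the matrix actually decomposed is only 4x4,
# no matter how many data points went in.
coeffs = np.linalg.solve(V.T @ V, V.T @ y)
print(coeffs)  # close to [2, -1, 0, 0.5]
```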

- John VanSickle
**Posts:**189**Joined:**Sat Feb 16, 2008 2:17 am

John VanSickle wrote:However, the size of your matrix is determined by the number of coefficients you are trying to fill into the function that needs to fit your data set. The data set can be larger than the number of coefficients, and in fact can be much larger (but NOT smaller), without affecting the size of the matrix to be crunched.

You're right about that - so in my 1000-control-points problem above there would be a (sparse) 9000x3000 matrix, and the least-squares approach would yield a 3000x3000 (dense) square matrix (assuming that there's no way around that, which I don't really believe).

3000 because we're looking for the x, y and z coordinates of 1000 control points; 9000 because for each of those we have x/y/z equations for the position, the u-tangent and the v-tangent.

There seems to be a way to avoid that XᵀX step, but I'd have to look into that - and I'd rather use some existing library that can do it (and handles the sparse matrices).

Anyway, if you make progress in that direction, please let me know!

PS: Amiga rocks, do you still have it? I once had a 500, a 2000 and later a 1200, which should still be somewhere in the basement. I somehow skipped that C step because as a schoolboy I couldn't afford a compiler, and went straight from AmigaBasic to 68000 assembler.

- sascha
- Site Admin
**Posts:**2792**Joined:**Thu May 20, 2004 9:16 am**Location:**Austria

sascha wrote:Anyway, if you make progress in that direction, please let me know!

I did, and it's pretty complicated to explain here, so I might type up a web page about it. Time will tell.

sascha wrote:PS: Amiga rocks, do you still have it? I once had a 500, a 2000 and later a 1200, which should still be somewhere in the basement. I somehow skipped that C step because as a schoolboy I couldn't afford a compiler, and went straight from AmigaBasic to 68000 assembler.

I still have my 1200 lying around here somewhere. My old A500 I gave to my brother-in-law because he had bought something on my behalf and I owed him. I stepped into C and 68000 assembly at about the same time; usually I had my C compiler spit out an assembly listing, and I optimized the code to be about half as big and twice as fast. I was working on a Doom engine when the wife finally persuaded me to buy a PC clone in 1995. Never had a hard drive for it; I did everything on floppies.

A couple of my IRTC (www.irtc.org) entries pay homage to the Amiga; in the animations one of the characters owns a Freundin 5000 computer.

- John VanSickle
**Posts:**189**Joined:**Sat Feb 16, 2008 2:17 am

Having a case of Feature Creep right now. I seem to recall one of the modelers that I have tried to use having a command line as part of the user interface. Well, why not for my modeler as well?

Then it occurred to me: That's only a step away from a scripting function.

Then it also occurred to me: If these commands use the same format as the file format (which is text-based), then all files become scripts.

V1.8 may be a while...

- John VanSickle
**Posts:**189**Joined:**Sat Feb 16, 2008 2:17 am

John VanSickle wrote:V1.8 may be a while...

But v1.7.12 is available at the usual location.

While testing, I found out why the subdivision preview was so slow for the 3duLoResGirl: She has 4800 or so polygons. Churning out 4800 or so Beziers isn't going to be quick. I did put in an option (under the View menu) which allows the user to set the number of faces used to draw each Bezier. The default is 3x3, but it can go down to 1x1. The 3duLoResGirl model actually looks quite good at a 1x1 resolution, and isn't unacceptably slow at that resolution.

I seem to recall now that I didn't do any range checking on the value supplied by the user - the dialog uses a text box, so the user could put a huge value in there, or a zero or negative value. Hope that doesn't crash someone's system. I should probably turn it into an up-down box or a pull-down list.
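Pending a proper up-down box, a simple clamp would already prevent the worst of it. A hypothetical sketch (the function name and limits are illustrative, not taken from the actual modeler):

```python
# Hypothetical range check for the user-supplied tessellation resolution.
MIN_RES, MAX_RES = 1, 16  # illustrative limits, not the modeler's own

def parse_resolution(text, default=3):
    """Parse the text-box value; fall back to the default on garbage and
    clamp huge or non-positive values into a sane range."""
    try:
        value = int(text)
    except ValueError:
        return default
    return max(MIN_RES, min(MAX_RES, value))

print(parse_resolution("3"))     # 3
print(parse_resolution("9999"))  # 16 (clamped)
print(parse_resolution("0"))     # 1 (clamped)
print(parse_resolution("abc"))   # 3 (falls back to the default)
```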

- John VanSickle
**Posts:**189**Joined:**Sat Feb 16, 2008 2:17 am

Would it be worth it to make the application more portable? I did a little of this during the past week, but I was wondering how much more to do.

- John VanSickle
**Posts:**189**Joined:**Sat Feb 16, 2008 2:17 am

What do you mean by "portable"?

Are you referring to changing the underlying toolkit, such as to a "pure" OpenGL solution, or something like wxWidgets?

Then again, we could try to convince you to come over to the dark side and code in Java, but that's not likely to happen to someone who liked to hand-tune assembly code.

- dcuny
**Posts:**2902**Joined:**Fri May 21, 2004 6:07 am

I would like more portability. Trying to use LS with Wine is a real pain in the behind... the buttons on the toolbar disappear and reappear with an ease that would make the Great Presto sick with envy. You have to run the cursor along the bar until the one you want appears.

--- Jim

"We're so sorry, Uncle Albert,
But we haven't done a bloody thing all day."
--- Paul McCartney

- pndragon
**Posts:**591**Joined:**Sun Dec 05, 2004 1:27 am**Location:**North Carolina

John VanSickle wrote:Would it be worth it to make the application more portable?

Asking that question to a bunch of Linux geeks, what answer do you expect?

If you'd just like to target Linux users, testing it with Wine on a recent Linux distribution (try Ubuntu) might be the best option (I think Wine also runs on OSX, but I've never tested that.)

Everything else, like switching to wxWidgets, Qt or GLUT (shudder), will probably lead to a total rewrite, in which case you might as well use Java (which makes an accomplished C/C++ developer shudder, I guess.)

- sascha
- Site Admin
**Posts:**2792**Joined:**Thu May 20, 2004 9:16 am**Location:**Austria

I just tried it on my Vista machine. It doesn't display anything in the viewport:

Doing various things to damage and force the window to repaint (occluding it, resizing) doesn't seem to fix it.

- dcuny
**Posts:**2902**Joined:**Fri May 21, 2004 6:07 am

dcuny wrote:What you you mean by "portable?"

I should have written, "More easily portable," as that is what I meant.

dcuny wrote:Are you referring to changing the underlying toolkit, such as to a "pure" OpenGL solution, or something like wxWidgets?

I was referring to wrapping up everything OS-specific into as small a number of classes as possible (one, if I can) so that whoever takes up the task of porting this to another platform has only one class to rewrite. I hope.

dcuny wrote:Then again, we could try to convince you to come over to the dark side and code in Java, but that's not likely to happen to someone who liked to hand-tune assembly code.

Heh. I used to hand assemble 6809 code, and even wrote a word processor for the C64 directly in machine code.

- John VanSickle
**Posts:**189**Joined:**Sat Feb 16, 2008 2:17 am
