
PhysX Knowledge Base/FAQ


Materials and Properties of Matter:

Questions about materials:
1) How much management is required on our end for materials? For instance, do we need to verify that duplicate materials are not being created?
2) Do we need to handle removal of a material from the scene? For instance, if all actors that refer to a material are removed from a scene, is the material automatically removed, or is this something we need to support?
3) What is the maximum number of materials a scene supports?

I would assume that if you create two materials then they exist as two instances, regardless of whether they are identical, as with every other object. If you are creating materials and wish to share them between objects, you will have to ensure that the material is only created once and that both shapes use it. Normally this is done by creating the shapeDesc objects to reference the shared material prior to run time, then creating the shapes from the correct shapeDesc objects.

Materials are created using a member function of, and owned by, the scene. Deleting a scene deletes the complete list of materials. Any objects using them will also be deleted, which is why this is safe. If you do not wish to release the scene, you may release a material, but you must be sure that the material is not in use. One way of doing this is to wrap the createMaterial and releaseMaterial functions of NxScene to perform reference counting.
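A minimal sketch of such reference counting, with callers sharing NxMaterial pointers through illustrative helper functions (all names here are assumptions, not SDK API):

      #include <map>

      std::map<NxMaterial*, int> gMaterialRefs;

      NxMaterial* acquireMaterial(NxScene& scene, const NxMaterialDesc& desc)
      {
          NxMaterial* mat = scene.createMaterial(desc);
          gMaterialRefs[mat] = 1;
          return mat;
      }

      void addMaterialRef(NxMaterial* mat) { ++gMaterialRefs[mat]; }

      void releaseMaterialRef(NxScene& scene, NxMaterial* mat)
      {
          if (--gMaterialRefs[mat] == 0)
          {
              gMaterialRefs.erase(mat);
              scene.releaseMaterial(*mat);   // safe: no remaining users
          }
      }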

The scene can hold an arbitrary number of materials. I would avoid having more than you need, but I doubt they are hugely expensive, 100 bytes maybe.

The hardware scene manager will even take care of duplicating your materials from the master scene to your managed rigid body, fluid or cloth scenes. Take a look at the appropriate hardware scene manager docs for details.

Rigid Body Dynamics

Can I use NxTriangleMeshShapes for dynamic actors?

This is a big no-no, for a number of reasons:
  • Dynamic triangle meshes are computationally expensive (parallel BV Tree traversals).
  • They tend to get caught up in other triangle meshes, as triangle-triangle resolutions between multiple pairs of triangles tend to conflict with each other.
  • Large collision resolution velocities are possible, because of the previous point.
  • Since a triangle mesh has very little thickness, you tend to see more bullet-through-paper issues.
  • The inertia tensor of a non-manifold triangle mesh is ambiguous.

Instead, more complex dynamic objects should be built as a compound of multiple simple shapes (sphere, capsule, box, convex)--the fewer shapes, the better, of course.

Does PhysX have optimizations for using multiple copies of the same shape in a scene?

For convexes and trimeshes, it is indeed possible (and recommended) to share the cooked data between all instances. Primitives such as boxes, spheres and capsules do not support any form of instancing, but their memory footprint isn't nearly as large.

The key is the difference between NxConvexMesh and NxConvexShape. The Shapes are the instances, the Mesh is the template data. When you cook the NxConvexMeshDesc you get an NxConvexMesh object. This object can be shared between all NxConvexShapeDesc that you create for each instance you want of the mesh within your scene. The SDK will prevent you from deleting an NxConvexMesh object while one or more NxConvexShapes exist in the scene with a reference to it. NxTriangleMesh and NxTriangleMeshShape work equivalently for trimeshes.
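A sketch of the pattern (MemoryWriteBuffer/MemoryReadBuffer are the stream helpers from the SDK samples, used here as an assumption; any NxStream works):

      NxConvexMeshDesc convexDesc;
      convexDesc.numVertices      = numVerts;       // your hull point cloud
      convexDesc.pointStrideBytes = sizeof(NxVec3);
      convexDesc.points           = verts;
      convexDesc.flags            = NX_CF_COMPUTE_CONVEX;

      NxInitCooking();
      MemoryWriteBuffer buf;
      NxCookConvexMesh(convexDesc, buf);
      MemoryReadBuffer readBuf(buf.data);
      NxConvexMesh* mesh = gPhysicsSDK->createConvexMesh(readBuf);
      NxCloseCooking();

      // Every instance references the same cooked template:
      NxConvexShapeDesc shapeDesc;
      shapeDesc.meshData = mesh;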

How do I make my vehicle less "floaty"?

It is common in physics engines to add extra gravity to vehicles to make them more reactive and exciting to drive. The amount is usually 3x-5x standard gravity.

To achieve this in the PhysX SDK, add an extra impulse to the center of mass of the vehicle chassis each frame. This impulse should be scaled by the mass of the object, the desired "gravity" value, and the length of the timestep. Keep in mind that the SDK will also be applying standard gravity to the chassis.
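For example (a sketch; the 3x factor, axis and variable names are illustrative):

      // Each frame, before simulate(): extra downward impulse worth 3x gravity.
      NxReal extraGravity = 3.0f * 9.81f;
      NxVec3 impulse(0.0f, -extraGravity * chassis->getMass() * dt, 0.0f);
      chassis->addForce(impulse, NX_IMPULSE);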

How is distance to the origin, "d", defined by NxJoint::getNextLimitPlane(NxVec3 & planeNormal, NxReal& planeD) and NxPlaneShapeDesc?

The two actually use different conventions. For the next release, the documentation has been updated to explicitly define the plane equations for each case.

For NxJoint::getNextLimitPlane(), the documentation now includes the equation:
dot(n,p) + d == 0 (n = normal, p = a point on the plane).

For NxPlaneShapeDesc, the documentation now includes the equation:
normal.x * X + normal.y * Y + normal.z * Z = d.

Is there any way to change the gravity vector per actor? For instance, if we wanted an actor to be able to walk up walls or have a scenario where the character’s gravity vector is constantly changing.

To disable PhysX gravity for an actor, call NxActor::raiseBodyFlag(NX_BF_DISABLE_GRAVITY).

Note: You should be careful changing gravity (or enabling/disabling it) during the simulation. The change will not wake up sleeping actors (for performance reasons). It may be necessary to call NxActor::wakeUp() manually on affected actors.

However, you will not need to do that if you apply your own gravity to the actor, as the next step will wake up any actors necessary.

Use NxActor::addForce (force, NX_FORCE) to apply your per-actor gravity force to the actor in the direction and magnitude of the force vector (as specified in world space).
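Putting the two together (a minimal sketch; the gravity vector is illustrative):

      actor->raiseBodyFlag(NX_BF_DISABLE_GRAVITY);

      // Each step, before simulate():
      NxVec3 myGravity(0.0f, -9.81f, 0.0f);         // world space, per-actor "down"
      actor->addForce(myGravity * actor->getMass(), NX_FORCE);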

See the 2.3.1 User Guide, Guide>Dynamics>Actors>Applying Forces and Torques, for further information.

My spheres never come to rest. How do I model rolling resistance, or air resistance against motion?

Smooth spheres rolling across a smooth level plane will never come to rest in a PhysX simulation. But due to air resistance and the rolling resistance of surfaces which in real life are not entirely smooth, real balls do come to rest if rolled across a floor.

How should the phenomenon of rolling resistance ideally be modelled?

I looked up rolling resistance, and found a formula for the coefficient of rolling resistance µR, based on the deceleration it causes:

µR = (u - v) / (g · t)

where v is final velocity, u is initial velocity, g is gravity, and t is time.

Basically this allows us to calculate a coefficient of rolling resistance by letting an object roll to a stop (v = 0) from a known initial velocity (u) and measuring the time (t) taken. If the supporting acceleration is not gravity g but some other value, then substitute that for g. For example, a Dodge Dakota shifted into neutral at 13.4 m/s (30 mph) takes 72.4 seconds to come to a standstill, giving a value of µR (neglecting air resistance) of 0.0188. (Knowing the mass of the vehicle and integrating the air resistance equation below between 0 and 13.4 m/s allowed me to eliminate air resistance and calculate a more accurate µR of 0.0177, assuming a linear deceleration to standstill.)

Some other values for µR that I found on the Internet:

  • train wheel on rail 0.001
  • bicycle tire on wooden track 0.001
  • bicycle tire on smooth concrete 0.002
  • bicycle tire on asphalt road 0.004
  • bicycle tire on rough but paved road 0.008

Substituting into v = u + a·t gives the deceleration magnitude:

a = (u - v) / t = g · µR

As a force (f = m · a):

f_roll = m · g · µR

The rolling resistance force must never reverse the velocity, so its effect should be clamped such that the force ceases at the moment the object becomes stationary, without any overshoot (which would cause the object to vibrate). Such a constraint looks like:

f <= m · v / t

where t is the simulation timestep duration over which the force is applied. Equivalently, if applying an instantaneous impulse f·t:

f · t <= m · v

An alternative approach for modelling rolling resistance is to tessellate the sphere into polygons. The inherent roughness of the surface will then model rolling resistance accurately for the sphere. This is because the roughness is responsible (at any scale) for introducing interaction between the friction and rolling momentum that causes rolling resistance.

Of course neither of these cases models air resistance, and for most purposes neglecting air resistance is probably just fine.

For this we can use the standard drag equation:

f_air = ½ · C_D · A · ρ_air · v²

where C_D is the coefficient of drag, ρ_air is the density of air, A is the area facing the air, and v is velocity. The coefficient of drag of a Dodge Dakota is 0.465, the area 1.31 m². The density of air is about 1.1 kg/m³. At 13.4 m/s I calculate the force due to air resistance to be about 60 Newtons. Unlike rolling resistance, air resistance is highly dependent on velocity.

So the total resistive force f is given as:

f = f_roll + f_air = m · g · µR + ½ · C_D · A · ρ_air · v²

subject to the clamp f <= m · v / t.
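As a concrete per-step illustration (a sketch only; muR, Cd, area, rhoAir and dt are application-supplied values):

      NxVec3 v = actor->getLinearVelocity();
      NxReal speed = v.magnitude();
      if (speed > 0.0f)
      {
          NxReal m = actor->getMass();
          // total resistive force: rolling resistance plus aerodynamic drag
          NxReal f = m * 9.81f * muR + 0.5f * Cd * area * rhoAir * speed * speed;
          // clamp so the force can never reverse the velocity within one step
          f = NxMath::min(f, m * speed / dt);
          NxVec3 dir = v;
          dir.normalize();
          actor->addForce(dir * -f, NX_FORCE);
      }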

Object scales that cause jittering

Simulations that use inches or centimeters as the basic unit (as is the default in 3dsmax) require a gravity factor on the order of 1000 and a skinWidth of 2.5. At these values, your simulation is likely to exhibit a lot of jitter straight off.

If you reduce all these values by a factor of 100 (to bring the simulation into meter scale), you should see no more jitter.

The reason for this is that there are other variables, such as NX_BOUNCE_THRESHOLD and NX_DEFAULT_SLEEP_LIN_VEL_SQUARED to name two, that also need to be scaled. The default values of all current and future variables are (and will be) based on meter units, so the safeguard is to use meters for your own PhysX data if possible, or to scale whenever you pass data to and from the PhysX simulation.
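If you must keep non-meter units, a hedged alternative is to rescale those parameters by your unit factor. The sketch below reads the current values rather than assuming any defaults:

      const NxReal s = 100.0f;   // centimeters per meter
      gPhysicsSDK->setParameter(NX_SKIN_WIDTH,
          gPhysicsSDK->getParameter(NX_SKIN_WIDTH) * s);
      gPhysicsSDK->setParameter(NX_BOUNCE_THRESHOLD,
          gPhysicsSDK->getParameter(NX_BOUNCE_THRESHOLD) * s);
      gPhysicsSDK->setParameter(NX_DEFAULT_SLEEP_LIN_VEL_SQUARED,
          gPhysicsSDK->getParameter(NX_DEFAULT_SLEEP_LIN_VEL_SQUARED) * s * s);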

Once an Actor is created in a Scene, is there a way to "remove" the Actor from the Scene so that no calculations are performed on him? Other systems seem to have the notion of being able to Add and Remove entities from the world, but I haven't really seen that capability in Novodex. I'd like to be able to call Scene->createActor, but then either "pull the Actor out of the Scene" without destroying (i.e., releasing) him, or possibly have him remain as part of the Scene but as some sort of "ghosted" entity (i.e., no forces can act on him). Putting an Actor to sleep seems to be the closest thing I can find to this, but external forces seem to be able to wake him up.

Try:

NxActor::raiseBodyFlag (NX_BF_KINEMATIC);

This should do the trick, and there are some other funky flags to fly with too. Use lowerBodyFlag when you want the object to respond to forces again.
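A slightly fuller "ghosting" sketch, pairing the body flag with the actor-level collision flag:

      // Freeze the actor: kinematic actors ignore forces but still block others.
      actor->raiseBodyFlag(NX_BF_KINEMATIC);
      // Optionally stop it colliding entirely while "ghosted":
      actor->raiseActorFlag(NX_AF_DISABLE_COLLISION);

      // Later, restore normal dynamic behavior:
      actor->lowerActorFlag(NX_AF_DISABLE_COLLISION);
      actor->lowerBodyFlag(NX_BF_KINEMATIC);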

What kind of solver does Novodex use, specifically?

The rigid body constraint solver is an iterative solver. This means that each constraint applies an impulse that satisfies that constraint without regard for any other constraint in the system. After all the constraints have been 'solved' in this way, one iteration has taken place.

Of course, it is inevitable that solving some constraints has counteracted the solving impulses of other constraints to some degree, though usually the resulting 'error' is smaller than the original. By performing multiple iterations, these errors can be reduced to invisible amounts.

Increasing the number of constraints increases the computation by O(n). Jacobian-matrix (direct) solvers, by contrast, tend to increase in complexity by O(n²) for each additional constraint.

Where should the mass be set when constructing dynamic actors? Both the NxShapeDesc class and NxBodyDesc class have a mass member and both of these classes are used when creating the actor description.

What is the difference between a class, like the rigid body class NxActor, and a descriptor of that class, like NxActorDesc? Just as an animal has a living, movable, physical form (say, a chicken), the NxActor represents an instance of the chicken. The chicken can generate a descriptor, like a gene form, which contains all the information to generate a correctly configured chicken. The gene form is small, can be stored away efficiently (say on a disk) for a long time, loaded, and used to instantiate one, or thousands, of new chickens, all exactly like the first. Typically the designer (an electronic chicken God?) creates the chicken and saves its descriptor, which is loaded at the start of the game as the asset. Typically the descriptor for a chicken will include the NxActorDesc for each rigid body, which contains an NxBodyDesc (unless the actor is static) and one or more NxShapeDesc objects (or derived classes). Every time the programmer wants a new chicken, it is created from the descriptors and turned back into a living, moving instance.

The upshot is that if you set the mass in an NxBodyDesc, you are setting it for all subsequent chickens, but if you set the mass on the NxActor, just one living chicken changes. Enough of chicken analogies.

So if you want to set the mass of an actor that is in NxActor form, you can use NxActor::setMass(). If it is in descriptor form, set NxBodyDesc::mass. You will also need to set the mass pose (the position and orientation of the center of mass with respect to the actor frame) using NxActor::setCMassOffsetLocalPose() or NxActor::setCMassOffsetGlobalPose() for an existing actor, or by setting the member variable NxMat34 NxBodyDesc::massLocalPose in the body descriptor.

Whilst simple at the interface level, there are a couple of practical problems with this. The first is that you might not actually know the mass of the object, nor where its center of mass is. If you are creating an object descriptor on the fly (meaning that you have no precomputed data), you are more likely to know the density of the object. If the density of a wooden table is uniform, no problem: use NxBodyDesc::mass = 0.0f and NxActorDesc::density = 0.7f. When you create the actor from this, the volumes and positions of the component shapes will be used, and it will have the correct mass and mass pose.

But what if the table is not of uniform density? A slate table top, density 2.7 g/cm³, with wooden legs, density 0.7 g/cm³, for instance? This is where you have to assign density to the individual shape descriptors, depending on the material, using NxActorDesc::density = 1.0f; NxShapeDesc::mass = 0.0f; NxShapeDesc::density = 2.7f. The global density is still considered -- densities are multiplied if both actor and shape density are specified -- so if you intend to rely on the shape density, you should set the global density to 1.0f. At least, I can't think of a reason to include both. You can also set the mass of a shape directly if you happen to know it, e.g. NxShapeDesc::mass = 27000.0f.

All of this computation is slow, so it is best done in the production pipeline. There, we ultimately want to save a descriptor with the correct mass, mass pose, and the angular form of the mass, the inertia tensor. Unfortunately, only the NxActor form of the rigid body can be persuaded to recompute the global mass, mass pose and inertia tensor, so we have to jump through a hoop to do it for a descriptor. The strategy is to create the actor descriptor with a body descriptor. Give the actor descriptor a global density of 1.0. Add all the shapes with the correct densities, local poses and geometries. Use the actor descriptor to create the actor, which computes the mass, mass pose and inertia tensor, then call NxActor::saveToDesc() and NxActor::saveBodyToDesc(). Now we have all the data we need: serialize the actor, body and all the shape descriptors. When you load them, everything is there, so nothing is computed when you create the actor.
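A minimal sketch of that hoop-jump for the slate-topped table (dimensions, poses and densities are illustrative):

      NxBodyDesc bodyDesc;                      // mass fields left at defaults
      NxActorDesc actorDesc;
      actorDesc.body    = &bodyDesc;
      actorDesc.density = 1.0f;                 // global density of 1

      NxBoxShapeDesc top, leg;
      top.dimensions.set(1.0f, 0.05f, 0.5f);    // slate top
      top.density = 2.7f;
      leg.dimensions.set(0.05f, 0.4f, 0.05f);   // one wooden leg
      leg.localPose.t.set(0.9f, -0.45f, 0.4f);
      leg.density = 0.7f;
      actorDesc.shapes.pushBack(&top);
      actorDesc.shapes.pushBack(&leg);

      NxActor* actor = scene->createActor(actorDesc);  // computes mass, mass pose, inertia

      actor->saveToDesc(actorDesc);             // actor-level fields
      actor->saveBodyToDesc(bodyDesc);          // mass, massLocalPose, inertia
      // ... save each shape via its saveToDesc() and serialize everything ...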

Incidentally, it is a bad idea to provide an inertia tensor without also providing a mass and mass pose.

Collision

Can I optimize my static triangle mesh collision geometry by splitting it?

The PhysX SDK has three phases to collision detection:

  1. Broadphase: A very fast, three-axis sweep-and-prune that uses the axis-aligned bounding boxes (AABB) of actors to determine potential collision pairs based on axis-overlaps.
  2. Midphase: For Broadphase pairs that involve a static triangle mesh, a Bounding Volume Tree (BV Tree) is logarithmically traversed to determine candidate triangles for additional testing.
  3. Narrowphase: These are all the individual triangle-box, box-box, etc collision tests that determine the final result.

Large triangle meshes obviously have large AABBs, so they will force most dynamic objects to engage in Midphase collision detection against them all the time. Splitting these meshes into smaller, spatially coherent meshes for the creation of separate actors will likely reduce the number of Broadphase hits, especially for terrains with a lot of variance in height. And when the Midphase is hit for each of these, the number of BV Tree branches that need to be tested will be reduced (though this is not a huge consideration). Cache performance would be improved, though, by using these smaller-mesh actors, as dynamic actors tend to stay in proximity to the same mesh actors for quite some time, and smaller meshes mean their collision data is more likely to be packed into the same cache lines.

While you may be reducing the Midphase cost, you will be doing this at the cost of increased effort for the Broadphase, as there are now more actors for it to handle. Our Broadphase is very fast, however, and tends to be low on the list of CPU users, so there is usually benefit in shifting more effort to the Broadphase.

So there will be some point in the middle of the complexity/object count curve that could be found that would be optimal. This optimal point would vary, based on:

  • The variance of the heights of your mesh data.
  • The complexity of your meshes.
  • If you can page unused meshes out of the scene.
  • If you can share some of the mesh data, which reduces your memory costs and improves cache performance.
  • Where your dynamic actors typically live in relation to your mesh data (flying high above, or in constant contact).
  • How often you add or remove dynamic actors from the scene.
  • How many actors live in the scene.
  • How much of your static geometry is in active collision over the course of the simulation.

We plan to provide more profiling data in the future, which would help in this optimization process (which should be done by running through the game, perhaps using a time-demo approach).

We also have a grid-based heightfield in the works that would greatly improve the memory and performance cost of a major portion of the static triangle mesh data of many developers.

I can't seem to get a trigger callback with a capsule shape as a trigger, other shapes work fine.

To make a shape a trigger, you must set the NX_TRIGGER_XXX bit(s) (see NxShapeFlag in the documentation) on the "shapeFlags" field of the shape descriptor. Capsules contain an additional field called "flags" that is used for swept shapes (used by raycast wheels). Check to make sure that the trigger flag is not being set on the "flags" field rather than the "shapeFlags" field.
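In code, the distinction looks like this:

      NxCapsuleShapeDesc capsuleDesc;
      capsuleDesc.shapeFlags |= NX_TRIGGER_ENABLE;  // correct: trigger bits on shapeFlags
      // capsuleDesc.flags holds capsule-specific bits such as NX_SWEPT_SHAPE -- not triggers.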

Is there a way to attenuate the physics response?

The simplest means is to use lower restitution values, or to increase damping in the system. Setting a lower maxAngularVelocity value would help, too. These are all things you set up before the collision actually occurs, and they either affect the response immediately, or soon after the response.

If you find you need to affect the collision reaction after it has already occurred (you cannot step in and immediately affect it, as the simulation happens in parallel to your application, and thus the response has already occurred by the time you got the collision event stream), then you can check and set the object's velocity, or apply force/impulse to slow it down.

If this is STILL too late, then you can turn off collision response for the object, detect the collision, and apply your own response. This will be more laggy than allowing the system to handle response for you, and will cause problems for cases such as stacking.

We have a feature in our list for setting a maximum linear velocity on a per-object basis. That would also be an effective way to handle your question.

My actors appear to sink somewhat into my heightfield terrain!

It is quite possible your heightfield quads are tessellated along one diagonal for rendering and along the other for physics. Thus, 50% of the time your objects will sink to a certain degree and 50% of the time they will float (which is not too noticeable with high camera angles).

We need notification whenever a collision occurs. It seems that we get notification for most collisions, but there are some that are never reported.

To get contact notifications, the application must derive a class from the NxUserContactReport class to implement the onContactNotify() function. It should also pass an instance of this class to NxScene::setUserContactReport().

The application should also set pairs of actors to report contacts using NxScene::setActorPairFlags(). Supported flags are:

  • NX_NOTIFY_ON_START_TOUCH
  • NX_NOTIFY_ON_END_TOUCH
  • NX_NOTIFY_ON_TOUCH

Important: All the shapes should be created for actors either from the NxActorDesc structure during construction, or using NxActor::createShape() *before* the actor pair flags are set, or filtering will ignore some collisions and collision reports will not be generated.
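Putting the pieces together, a minimal sketch (the actor pointers actorA/actorB are assumed to exist):

      class MyContactReport : public NxUserContactReport
      {
          void onContactNotify(NxContactPair& pair, NxU32 events)
          {
              if (events & NX_NOTIFY_ON_START_TOUCH)
              {
                  // pair.actors[0] and pair.actors[1] have just started touching
              }
          }
      } gMyContactReport;

      scene->setUserContactReport(&gMyContactReport);
      scene->setActorPairFlags(*actorA, *actorB,
          NX_NOTIFY_ON_START_TOUCH | NX_NOTIFY_ON_END_TOUCH | NX_NOTIFY_ON_TOUCH);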

When I call getGlobalPose on an actor that caused a trigger or collision callback, the pose is outside of the trigger or not in contact with the other actor. Why is this?

The callback is triggered from within the simulation thread. Since the simulation is double-buffered, queries to the scene from within the callback occur on the old scene state, since the new state is not updated yet (at least locally, since the new state still lives on another CPU core, on PhysX HW, on an SPU, etc).

If you need the new pose, you should simply store the actor pointer in the callback, and call getGlobalPose after fetchResults--when you process all actions that affect the simulation state. A nice side effect of the double-buffering is that you actually have the opportunity to store any of the old state you want, in case you need it to compare against the new state for various advanced user-implemented features (such as determining loss of energy during the step).

Depending on your use, you may need to take heed of sub-stepping, if this is enabled in your application's scene stepping strategy. If it causes problems, you always have the option of disabling sub-stepping, of course.

Joints and Constraint Resolution

Is there a way to determine if a joint is broken apart from the callback mechanism?

You can determine if a specific joint is broken by calling NxJoint::getState() to get the NxJointState and then checking if it equals NX_JS_BROKEN.

There is no way to query the scene for broken joints in general, but you can iterate over the joints in the scene via NxScene::getNextJoint() and check their state as above.
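For example, a sketch of such a scan (resetJointIterator() restarts the scene's joint iteration):

      scene->resetJointIterator();
      while (NxJoint* joint = scene->getNextJoint())
      {
          if (joint->getState() == NX_JS_BROKEN)
          {
              // identify it, e.g. via joint->userData or its name, and respond
          }
      }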

Also, there is no way to "fix" a broken joint; however, you can use loadFromDesc() if you still have the descriptor, or save the joint to a descriptor, update the actors and then load the descriptor (the broken state is not saved to the descriptor, as joints cannot be created/loaded in a broken state).

If some other process is receiving the callbacks, it can potentially request that the joint be released by returning "true" from the callback. If it does so, then the joint won't be in the scene at all and any information in the joint object will be lost. In this case you will need to associate some label (either userData or name) and identify which joints are missing by process of elimination.

Cooking

Do NxMeshes generated through the createConvexMesh and createTriangleMesh functions stay around after the NxActor is created?

The PhysX mesh data (NxTriangleMesh and NxConvexMesh) is memory that you have control of, as you need to enable sharing of meshes for best memory usage. Deleting one actor using the mesh does not remove the mesh (which would cause memory bloat if you haven't been getting rid of it up to this point...)

For most game use, developers would load all the meshes at the start of the level and simply index into an array of mesh pointers when creating each instance of an object. This would work for your standard, highly-instanced, units, but you might need a separate system for your terrain meshes, as this is where you often don't share meshes and may run out of memory to store the data if you loaded the entire level's worth of mesh data.

Does mesh data get changed during calls to createConvexMesh or createTriangleMesh?

Yes, we do clean up the meshes so that we only have the minimum needed to represent the shapes. So, for convex objects, internal and duplicate (to some tolerance) verts are removed and a triangle mesh is generated. If you pass in both the verts and your own mesh, we use them as specified (but then it is up to you to ensure they are clean...) For static triangle meshes, we again remove duplicate verts and degenerate triangles, which helps keep the BV tree to a minimum size and helps ensure good quality simulation.

For HW cooking, we also split up the mesh into a series of pages that can be conveniently streamed to the HW over the PCI bus.

Run-time cooking and caching of PhysX mesh data

Since cooking (mainly mesh data, but to a lesser extent convex data) is a CPU-intensive process, it is typically done as a preprocessing step and stored on disk to be shipped with the game. On rare occasion, some data may not cook properly without some modification to the input data, so pre-cooking is also a means of ensuring that the data will not fail at cook-time.

There are some cases where run-time cooking may be required, however:
  • User-created content
  • Dynamic content generated by the application, such as deformable, non-heightfield, terrain
  • Massive levels of unique geometry that would require too much CD space to ship--and possibly even too much HD space to store
  • Downloadable content

In these cases, it may be desired to cook at runtime--though you would still want to verify that the data cooks properly in the shop before shipping it off.

Because cooking is expensive, you will want to cache the cooked data to the user's HD so that it can be quickly loaded the next time it is needed. If you have large meshes to cook, you will want to look into either subdividing them, cooking them at load-time, or cooking them in a separate thread that might be allowed to work over a period of frames so as not to impact framerate.
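A sketch of the cook-and-cache round trip (UserStream is the file-stream helper from the SDK samples, used here as an assumption; the cache path is illustrative):

      NxTriangleMeshDesc meshDesc;
      meshDesc.numVertices         = numVerts;     // your terrain data
      meshDesc.pointStrideBytes    = sizeof(NxVec3);
      meshDesc.points              = verts;
      meshDesc.numTriangles        = numTris;
      meshDesc.triangleStrideBytes = 3 * sizeof(NxU32);
      meshDesc.triangles           = indices;

      NxInitCooking();
      UserStream out("cache/cell_12_7.bin", false);   // false = open for writing
      bool ok = NxCookTriangleMesh(meshDesc, out);
      NxCloseCooking();

      // Next time this cell is needed, skip cooking and stream it straight in:
      UserStream in("cache/cell_12_7.bin", true);     // true = open for reading
      NxTriangleMesh* mesh = gPhysicsSDK->createTriangleMesh(in);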

If disk space is a concern, we would recommend something like the following caching method (assuming the environment is divided into a regular grid of cells):

  • At any one time, the system maintains a certain number of meshes on disk, perhaps configured by the gamer (who allows a certain amount of space for the cache).
  • When the player enters a new cell, the application checks all the cells he can get to from this one to determine which are not currently in the cache. This check will prioritize adjacent cells, but will likely extend out to some radius, adding cells to a priority queue.
  • The application spawns another thread to handle creation of these cached versions while the player is playing through the current cell. This thread may remain active for the entire game, consuming cooking requests as they are generated.
  • This additional thread is somehow time-sliced so that it only steals a bit of performance from the main application and physics threads.
  • When the player nears an exit, this thread could also pre-load the next level pre-emptively. Most likely, there is no mesh caching still going on for the current level, so the cooking load is spread throughout the level.
  • In the worst case, such as when a player can simply teleport to any map in the game, you may have a longer level load (but the player may feel this is appropriate, depending on the type of game, and the time could be concealed by some sort of graphics effect, or simply a lack of PhysX until enough data is loaded).

Triangle mesh cooking and material indices: How does one handle materials, when you need to pre-cook your geometry and don't know actual material indices until they are created?

Basically, when you load a cooked mesh that has face materials, you load the material indices and not the raw material data. The problem is, you have no idea what materials these indices correspond to when you actually load the data in an application. To work around this you need to save your material indices with the corresponding material data to a file and then read in the material data from this file when you load each cooked mesh.

You should likely create a material library, either a list in one of your headers (or built from a script, loaded from a file, etc.). You will load this library whenever you cook your material data and whenever you run your primary application.

Whenever you make changes to your material library that affect indices, you'll need to recook all your mesh data with this library loaded so your meshes get cooked with the proper material indices. Obviously, changing material parameters alone does not require recooking.

This is roundabout, yes, but in a way kind of necessary, since saving away all the material data per face would mean you'd need to check and index the materials as you loaded them in, making your cooked data larger and increasing your load times, which sort of defeats the purpose of cooking the data. It is probably also good general programming practice, so that you have the materials of your application consolidated in one library.

As long as you only modify the individual material attributes, and not the triangle mappings, you will not have to recook all of your meshes. But you will definitely want to plan ahead, of course. At level design time, specify a material library with more than enough indices to cover intended physical parameters. Give default values to all that have not been tuned, yet. Some may find using a spreadsheet for editing material values per level convenient.

Support and Debugging

How do I make the Visual Remote debugger follow the game camera?

The code to add to a program to activate recording in the Visual Remote Debugger (VRD), and to make the VRD camera follow the application camera, is quite straightforward. The first step, in the initialization routine immediately after creating the global PhysX SDK object, is to connect the application to the VRD. This finds the VRD application, at the given IP address and, if supplied, port number, and establishes the communication link. In the current implementation, the VRD application must be running at this time.

      gPhysicsSDK->getFoundationSDK().getRemoteDebugger()->connect("localhost");

Then we add code to create the camera, and give it an initial position and orientation. We specify the position (x, y, z) in world space as the Origin tag, then the unit vector in the direction the camera is facing (the camera local z axis in world coordinates), and finally, an orthogonal unit vector that points out of the top of the camera. These initialize the camera position in the Visual Remote Debugger.

      gPhysicsSDK->getFoundationSDK().getRemoteDebugger()->createObject(&gCameraObj, NX_DBG_OBJECTTYPE_CAMERA, "Camera", NX_DBG_EVENTMASK_EVERYTHING);
      gPhysicsSDK->getFoundationSDK().getRemoteDebugger()->writeParameter(NxVec3(5,5,5), &gCameraObj, true, "Origin", NX_DBG_EVENTMASK_EVERYTHING);
      gPhysicsSDK->getFoundationSDK().getRemoteDebugger()->writeParameter(NxVec3(0,0,0), &gCameraObj, true, "Target", NX_DBG_EVENTMASK_EVERYTHING);
      gPhysicsSDK->getFoundationSDK().getRemoteDebugger()->writeParameter(NxVec3(0,1,0), &gCameraObj, true, "Up", NX_DBG_EVENTMASK_EVERYTHING);

Until this code has been executed, we will not be able to use our new camera, but at any time after that, the application's camera can be selected from the View-->Camera menu. Orbit and Fly, the built-in cameras, will be in the list, and our own camera, under whatever name we gave it, will now be available there as well. To view the scene from this viewpoint, the camera must be selected in the menu; the current camera is highlighted with a dot.

Each frame, we simply update the three vectors to update the camera in the VRD. In our case, we are reading the camera position from GLUT, our renderer.

      gPhysicsSDK->getFoundationSDK().getRemoteDebugger()->writeParameter(gEye, &gCameraObj, false, "Origin", NX_DBG_EVENTMASK_EVERYTHING);
      gPhysicsSDK->getFoundationSDK().getRemoteDebugger()->writeParameter(gEye+gDir, &gCameraObj, false, "Target", NX_DBG_EVENTMASK_EVERYTHING);
      gPhysicsSDK->getFoundationSDK().getRemoteDebugger()->writeParameter(NxVec3(0, 1, 0), &gCameraObj, false, "Up", NX_DBG_EVENTMASK_EVERYTHING);

Simple!

What kind of support do non-licensee, registered users receive?

All developers are able to use the PhysX SDK for commercial and non-commercial development without a fee through our new EULA. There are several methods of gaining support for this use of the SDK, intended mostly to assist you in supporting yourself and each other. Many of our contract-licensees also receive this same level of support.

SDK Documentation

This is naturally the first place to look for information. With every new release, we ship a CHM help file that not only contains information auto-generated from our source code, but also several other areas of professionally-written guidance on using the SDK. There will also be a separate set of release notes pointing out significant changes from the last release, which we will cross-post to the Knowledge Base.

Knowledge Base

We make every effort to push as much information as we can into the Knowledge Base, so that you can have fruitful searches for information. Indeed, we have found the Knowledge Base to be the area of the Support Site that is growing the fastest in terms of usage. While we will try to keep this information up to date, you should check the last modification date of each article to ensure you are looking at currently-relevant information. The Knowledge Base is populated with information discovered while helping contracted developers, common or important items discussed in the Forum, potential SDK features, and current known-issues.

Downloads

Besides the SDK installers, we will occasionally release additional sample code in the downloads area, typically to go with a Knowledge Base article or a Forum thread.

Forum

If you don't find your information in the areas above, the Forum is your best place to go. Use the Advanced Search feature to find exactly what you are looking for, subscribe to important threads (such as the Public Releases announcement thread), request a daily email digest, and contribute as much as you can! The Forum is really intended to help our thousands of users to help each other--but we do keep a pretty close eye on the Forums designated for the most recent versions of PhysX, too. We rate many discussions based on how useful we feel they will be to most users. If you ask intelligent questions and show a willingness to do some of your own work, you are likely to receive a response not only from other users, but also directly from us (these are usually rated 3-5 stars). If you ask half-formed questions that are already contained in the resources listed above or already posted on the Forum, you may only hear crickets... (usually rated 1-2 stars).

Sending email

Sending email to some generic address you may guess or gather from our website is definitely not the way to go. If your email generates a Support Ticket, you will receive an auto-response and the Ticket will eventually be deleted without notice. Please use the Forum.

Calling the office

Since we have so many non-contracted developers (almost 10,000 at the time of this edit), we are interested in scalable processes such as the Forum and Knowledge Base. We only offer phone support to selected licensed developers--and this is typically only as an escalation to the Ticket process (also only available to selected licensed developers).

Character Controller

Deleting shapes or actors under the character controller - Deleting an actor or shape that the character controller is currently in contact with, after the current simulate() has completed, will cause a crash in the next update if an onShapeHit callback exists for the controller. The shape passed into the callback is invalid. It is as if the character controller is 'caching' its current contact shapes from the previous frame without reference counting the objects, and is unaware that they will be invalid in the next simulation loop.

You are quite correct in your assessment that the pointer was cached. In fact static objects are cached between frames and never updated, whereas dynamic objects are updated in the cache once per frame (meaning it is not safe to add or remove those in the callback, but it is safe between frames).

In 2.4.1 and above:

One should call NxController::reportSceneChanged() when a static object has been changed. This invalidates the contact cache.

However, it is _not_ recommended to delete any objects in the callback.

This problem occurs in SDKs 2.3.1, 2.3.2 and 2.4.0 as well, which predate this API. We suggest upgrading to the 2.4.1 or later version of the SDK in this case.

Network Physics

In my client/server game, how should I update the orientations of my actors based on authoritative server updates?

Avoid the use of setPosition whenever possible:

  • This hurts the performance of the engine, which is tuned for the case of temporal/spatial coherence. setPosition causes the object to be removed from the Broadphase and re-inserted as if it were a new object, which is much slower than the other means of update listed below. Only use setPosition if the actor is actually intended to "teleport" elsewhere, especially over a long distance.
  • It causes artifacts: If the actor is teleported into another actor, resolution of the penetration will cause one or both of the actors to fly off, and performance will be negatively affected due to the cost of penetration resolution.

Two alternatives to setPosition are:

  1. Calculate impulses to apply to the actor to push it toward the desired position. This will allow the actor to affect other, client-side-only physics actors in a believable manner. If there are large or immobile actors in the way, however, you may have to switch to #2 to resolve (a sketch of this approach follows the list).
  2. Switch the actor's state to keyframed and execute a moveTo operation on it. This is friendly to the Broadphase and still allows interactions with other dynamic actors, but allows the actor to pass through other static or keyframed actors that would otherwise have blocked it.
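A minimal sketch of option 1, assuming serverPos is the latest authoritative position and correctionGain is an application tuning value:

      NxVec3 error      = serverPos - actor->getGlobalPosition();
      NxVec3 desiredVel = error * correctionGain;    // e.g. close the gap over ~1/gain seconds
      NxVec3 velChange  = desiredVel - actor->getLinearVelocity();
      actor->addForce(velChange, NX_VELOCITY_CHANGE);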

Note that MMO developers we have talked to have reported that updating an object's angular velocity is more effective in maintaining synchronization than setting its linear velocity, rotation or position. So to reduce bandwidth, you should prioritize your updates in that order.

Is there anyway to retrieve the current acceleration of an actor? We could use acceleration for our client prediction of actors in our network games.

There is no direct way of getting the acceleration of the body, as we convert forces into impulses (instantaneous changes in momentum) and use those.

However, you can get the velocity and angular velocity of the actor, store them for a frame, and then get the new velocity and angular velocity after the frame using the following functions:

virtual NxVec3 NxActor::getLinearVelocity(void)

virtual NxVec3 NxActor::getAngularVelocity(void)

Now subtract the old velocity vector from the new one and divide it (each component) by the duration of the frame in seconds, and you have the average acceleration vector (or angular acceleration vector) that the body experienced during the frame.
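In code, a sketch of this finite-difference estimate around one step (dt is your timestep):

      // Before stepping:
      NxVec3 oldLinVel = actor->getLinearVelocity();
      NxVec3 oldAngVel = actor->getAngularVelocity();

      scene->simulate(dt);
      scene->flushStream();
      scene->fetchResults(NX_RIGID_BODY_FINISHED, true);

      // After stepping: average (angular) acceleration over the frame.
      NxVec3 accel    = (actor->getLinearVelocity()  - oldLinVel) * (1.0f / dt);
      NxVec3 angAccel = (actor->getAngularVelocity() - oldAngVel) * (1.0f / dt);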

What networking strategies can be employed to make physics work with multiplayer games and MMOGs?

Lockstep

The first strategy to consider is to run all clients in lockstep. To do this, the input to the game (from all participants) is collected on the server, and distributed to all clients. The clients then do everything identically, including the physics simulation. This requires the physics simulation to be deterministic. I.e. for any set of inputs, the outcome of the simulation will always be the same, no matter how often or on which machine the simulation would be run with that set of inputs.

The first problem here is that the PhysX SDK is not deterministic. Especially when hardware is involved, bus latencies can vary between runs or between machines. Even without hardware in the machine, we do not guarantee any type of determinism.

Another problem with the use of a lockstep strategy is that there is a delay between the user applying some input, say turning the steering wheel, and that input affecting the car. The data has to travel from the client machine to the server, and then after all the inputs have been collected the data is returned to the local client and then may be used. The delay is twice the client-to-server latency of the slowest client. The round-trip time of the slowest client is a key problem. In Seoul, which has the fastest internet in the world, round trip times are typically up to 200ms. In America it is not uncommon for a round trip time to exceed 400ms, or almost half a second.

Some research was done in Korea on acceptable input delay. They found that a delay of up to 70ms was unnoticeable to most users. A delay of up to 100ms felt clunky, though most users became accustomed to the delay. A delay of above 100 ms was unacceptable to many users.

Either problem is enough to eliminate the strategy.

Prediction-Correction

The above suggests that it is important to make a player’s vehicle, character or unit respond directly to player input. This means the unit is moved without a complete knowledge of what the rest of the world is doing.

In a driving game, for example, suppose a networked (remote) player is driving at 100 mph (44.7 m/s) and starts a turn with radius 75 m (tight at that speed, but let us assume this is no station wagon). This information has to travel from the remote client to the server, then to the local client, before your client knows about it. Let's say that our worst-case network delay is half a second. By the time the information arrives (and assuming the local machine predicts she was still driving straight), her position is 3.3 meters (10 ft) away from what you thought, facing 17 degrees off course. To aggravate the problem, the networked car has already been rendered in the incorrect position.

So the bad news arrives, and the only option is to apply a correction. The necessary correction can be done in a single time step, but since the position and orientation is half a second out of date, it is necessary to make a new prediction based on the data as to where the car is now. This involves taking the linear velocity from the half-second old data and calculating a position for the car now. If the player has not applied any further input, then the prediction is likely to be very good.

Turning, braking and so on are absolutely routine, so these corrections must look really good, so much so that the user is unaware of it happening. I know most car drivers these days are on their mobile phones, but if I saw another vehicle translocate ten feet without passing through the intervening space, I would notice and think it more than a little bit weird. In other words it would violate the "law of least surprise".

Let’s assume that I have 250 ms before I think the next network packet is likely, and we are running at 60 frames per second. That means I actually have 15 frames to perform the correction if I want. At 100 mph, the other vehicle is moving forwards at 0.75 meters per frame. Over 15 frames, the correction is 0.22 meters per frame so this is okay but obviously a large part of the motion. We divide the total correction into the 15 equal portions and add one each frame.

Another option is to use a smaller corrective amount, and keep adding it to the car until the next network packet comes in, then add the remainder of the old correction to the new one. In practice, it seems unlikely that such a large correction would continue for very long, as most cars go mostly straight, most of the time, and sometimes the old and new correction may even cancel out. However, the local position of the networked car would never seem to align as closely as with a faster correction.

What you would really like to do is just nudge the position of the car by the correction amount, without adding the extra energy from the corrective motion as well, so the ideal solution would just be to use the ‘moveGlobalPosition’ to perform the adjustment. Unfortunately this runs into problems if you set an object so it penetrates another one, as the two objects stand a good chance of entangling. (This might be improved with Continuous Collision Detection in version 2.3.1 and above; this would need researching.) So the practical result is that an extra velocity is added to the vehicle, to move it over the course of the simulation step.

What can really get messy is when two cars collide. Each client is sure of the position of the local car only, and knows only where the other one was half a second ago. If two cars collide, it is almost inevitable that extra data needs to be sent to make the collisions seem to work similarly on both clients. Otherwise the vehicles will collide in different positions, with differing initial conditions, and the physics engine can quickly cause a small divergence to become a large one.

In the worst case, they won’t even agree that a collision has take place. If two cars are running side by side, and driver A turns into the other, B. The turning client A knows a collision has taken place; its car is clearly intersecting the position that car B is predicted to be. Client B has no idea. Until it receives the position of car A over the network, to it they are still riding side by side. At this stage, client B has to backtrack and create a collision, thus changing its idea of where its car was and where it should be now.

If this makes your head hurt, try this variation. Car A turns into car B, just as car B moves out of the way. Now client B is correct that there is no collision, but client A is already rendering the carnage. One likely solution to this mess is that driver B's input is overruled by news from client A that a collision occurred, and as before, this reduces to the above problem.

Using input delay to cover up network delay

In particular, if the local machine thinks a networked car may be about to collide, an optimization may be employed. When we were thinking about the problem of running clients in lockstep, having too much input delay was a problem. However, deliberately adding just a little hides some of the network delay in the prediction-correction scenario. Suppose we add just 70 ms of delay. In other words, each client gets user input, immediately sends it over the network, where the other clients have it in a worst case of 500 ms. We store input in an input queue and only apply it to the local scene after 70 ms. We receive the network packet a further 430 ms later, so the apparent latency is reduced by 14%. In another example, where the network delay is a more normal 300 ms, transmitting the new user input with an input delay of 100 ms reduces latency by 33%. This makes the clients' actual data about where other vehicles are more accurate. Their predictions are likely to be more accurate too.

When objects are not very near each other, the input delay can be reduced for a more responsive feel.

Cloth

Do cloth vertex and triangle indexes change from frame to frame during simulation?

The order of the vertices and the triangle indices is preserved throughout the simulation. The vertex ordering is even the same as the one provided to cooking.

When tearing takes place, new vertices are inserted and some triangles necessarily get new indices. For new vertices, you can get parentIndices through NxClothMesh. For an original vertex i, the parent index is i. For a vertex i generated by tearing, the parent index is the index of the vertex that i is a clone of. This can be used to copy vertex attributes to newly generated vertices.
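For example, a sketch of propagating a per-vertex attribute (uvs and the vertex-count bounds are application data):

      // uvs[] is the per-vertex attribute array; parentIndices[] is the table
      // described above (for an original vertex i, parentIndices[i] == i).
      for (NxU32 i = oldNumVertices; i < newNumVertices; ++i)
          uvs[i] = uvs[parentIndices[i]];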

Performance Considerations

Do NxMeshes generated through the createConvexMesh and createTriangleMesh functions stay around after the NxActor is created?

See "Cooking"

Run-time cooking

See "Cooking"

Soft Body

Could you provide some information on the fundamental Softbody algorithms being used?

Soft-Body Algorithm

Our soft-body algorithm is an instance of the general constrained-particle-dynamics method described in Matthias Müller's paper "Position Based Dynamics" (see http://graphics.ethz.ch/~mattmuel/publications/posBasedDyn.pdf).

Its main purpose is not accurate modeling of reality but rather to achieve

  • real-time performance
  • unconditional stability
  • visual plausibility

Soft-bodies are modeled as volumetric meshes made up of tetrahedra (analogous to how the paper models cloth as a collection of triangles). The user-supplied original configuration of the tetrahedra defines the resting state of the soft-body. When deformed by external forces during time-integration, the tetrahedron-vertex positions/velocities are modified through iterated constraint-projection to minimize the soft-body's deviation from its resting state. Each tetrahedron defines two constraints on the particle positions, used to

  • penalize deformations of the tetrahedron's resting volume
  • penalize deformations of the tetrahedron's resting edge lengths

Therefore, the method probably cannot be directly classified as a 'Finite-Element Method' or 'Mass-Spring System', but rather as a (computationally simplified) algorithm in between the two approaches.

For soft-tissue simulation, it might be of interest that we also have an experimental solution for soft-body tearing ready. It has not yet been added to any released version of the SDK, but it might be integrated in the future (e.g. in the form of a BriX).

Documentation

There's an entire chapter in the released SDK Documentation devoted to soft-bodies (creation/parameters/attachments/rendering etc.)


