RC24 is out, and with it we have new convergence algorithms. Finally, there is a way to run your simulation without having to worry about whether or not you used enough rays.

There are two different convergence algorithms - Minimum Convergence and Detailed Convergence.

Minimum Convergence: As you know, every 3 seconds, Pachyderm checks the simulation status and provides feedback to the user. When you choose Minimum Convergence, instead of giving you an estimate of the time remaining, it checks up to two receivers for convergence. Convergence, in this case, is defined as a minimum amount of change over three time intervals: 0 to 50 ms, 50 to 80 ms, and 80 ms onward. When the impulse response has changed by less than our pre-defined threshold for 10 checks in a row, the simulation is considered converged and concludes. This should be enough to get a reasonable estimate of most of your acoustical parameter values, often within just a few minutes. It generates this cool feedback display:
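For readers who like to see the mechanics, here is a rough Python sketch of how an interval-based convergence test like the one described above could work. The interval boundaries come from the description; the threshold value, streak length, and function names are placeholders for illustration, not Pachyderm's actual implementation.

```python
import numpy as np

# Hypothetical parameters; Pachyderm's actual values may differ.
INTERVALS_MS = [(0, 50), (50, 80), (80, None)]  # early, mid, and late energy
THRESHOLD = 0.02       # maximum allowed relative change per interval (assumed)
REQUIRED_STREAK = 10   # consecutive passing checks before we call it converged

def interval_energies(ir, fs):
    """Sum of squared pressure in each time interval of the impulse response."""
    energies = []
    for start_ms, end_ms in INTERVALS_MS:
        i0 = int(start_ms * fs / 1000)
        i1 = len(ir) if end_ms is None else int(end_ms * fs / 1000)
        energies.append(np.sum(ir[i0:i1] ** 2))
    return np.array(energies)

def check_convergence(prev_energies, ir, fs, streak):
    """One 3-second check: return (new_energies, new_streak, converged)."""
    energies = interval_energies(ir, fs)
    change = np.abs(energies - prev_energies) / np.maximum(prev_energies, 1e-12)
    streak = streak + 1 if np.all(change < THRESHOLD) else 0
    return energies, streak, streak >= REQUIRED_STREAK
```

Each periodic check compares the current accumulated impulse response against the previous snapshot; only after the change stays below the threshold for the full streak does the simulation stop.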

Detailed Convergence: Using Detailed Convergence, every three seconds, the algorithm checks up to two receivers at every 1 ms interval for a minimum amount of change. When the impulse response has changed by less than our pre-defined threshold for 10 checks in a row, the simulation is considered converged and concludes. This should be enough to get a very realistic auralization - with the caveat that this simulation may take a while. Try leaving it running overnight. It generates this cool feedback display:
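The difference from the interval-based test is the resolution of the comparison: instead of three broad time windows, every 1 ms bin of the impulse response must settle. A minimal sketch, with assumed threshold and streak values rather than Pachyderm's actual ones:

```python
import numpy as np

# Hypothetical parameters; Pachyderm's actual values may differ.
THRESHOLD = 0.02       # maximum allowed relative change per 1 ms bin
REQUIRED_STREAK = 10   # consecutive passing checks before convergence

def binned_energy(ir, fs):
    """Energy of the impulse response in consecutive 1 ms bins."""
    samples_per_ms = max(int(fs / 1000), 1)
    n_bins = len(ir) // samples_per_ms
    return np.sum(ir[:n_bins * samples_per_ms]
                  .reshape(n_bins, samples_per_ms) ** 2, axis=1)

def detailed_check(prev_bins, ir, fs, streak):
    """Pass only if every 1 ms bin changed by less than THRESHOLD."""
    bins = binned_energy(ir, fs)
    change = np.abs(bins - prev_bins) / np.maximum(prev_bins, 1e-12)
    streak = streak + 1 if np.all(change < THRESHOLD) else 0
    return bins, streak, streak >= REQUIRED_STREAK
```

Because every millisecond of the tail must stabilize, not just the aggregate energy, this criterion is much stricter, which is why it can take so much longer to satisfy.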

This is very new technology, so if you find any problems or think of something I need to rethink, please let me know. If it works for you (which it ought to), then I hope you enjoy the peace of mind that comes from knowing the computer has your ray count covered. After all, it can see many things that we can't. Who better to figure this out?


We have added (reinstated) perforated layers in the transfer matrix method, and transmission loss is more robust than before. I would still be careful using it for transmission loss, however.
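As background for how perforated layers fit into the method, here is a sketch of the standard transfer-matrix formulation for a rigid-backed layered absorber at normal incidence. This is not Pachyderm's implementation: the perforation impedance below is a simplified lumped model (mass reactance with an assumed 0.85-diameter end correction, plus a user-supplied flow resistance), and all names and defaults are illustrative.

```python
import numpy as np

RHO = 1.204   # air density, kg/m^3
C = 343.0     # speed of sound, m/s

def air_layer(freq, depth):
    """2x2 transfer matrix of an air cavity of given depth (m)."""
    k = 2 * np.pi * freq / C
    z0 = RHO * C
    kd = k * depth
    return np.array([[np.cos(kd), 1j * z0 * np.sin(kd)],
                     [1j * np.sin(kd) / z0, np.cos(kd)]])

def perforated_layer(freq, thickness, hole_d, porosity, flow_resist=0.0):
    """2x2 transfer matrix of a perforated plate as a lumped impedance.

    Simplified model: mass reactance of the air in the holes (with an
    approximate end correction) plus an assumed viscous flow resistance.
    """
    omega = 2 * np.pi * freq
    end_corr = 0.85 * hole_d  # end correction per side (approximation)
    z_p = (flow_resist + 1j * omega * RHO * (thickness + 2 * end_corr)) / porosity
    return np.array([[1.0, z_p], [0.0, 1.0]])

def absorption(freq, layers):
    """Normal-incidence absorption coefficient of a rigid-backed stack."""
    t = np.eye(2, dtype=complex)
    for layer in layers:          # layers listed from front (incidence side) to back
        t = t @ layer
    z_s = t[0, 0] / t[1, 0]       # surface impedance; rigid backing (Z_b -> inf)
    z0 = RHO * C
    r = (z_s - z0) / (z_s + z0)   # pressure reflection coefficient
    return 1.0 - abs(r) ** 2
```

For example, a thin perforated plate over a 50 mm cavity: `absorption(500.0, [perforated_layer(500.0, 0.001, 0.005, 0.05, flow_resist=50.0), air_layer(500.0, 0.05)])`. Note that with zero resistance everything is reactive and the absorption collapses to zero, which is one reason real models need a proper viscous term.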

We have also added visual guides to the Scattering calculation method. The green sphere is the surface along which receivers are located. The blue box indicates the extents of the Finite Volume Modelspace. The black box indicates Perfectly Matched Layers. The source is the location from which the test signal is emitted. For best results, keep the sample entirely below the green sphere, and do not have any geometry in the model except for the sample. DO NOT ENCLOSE THE MODEL. Let the algorithm decide how to represent the space around the sample. Have fun! 


If you are using the Finite Volume Method in Pachyderm, you have probably been wondering about the lack of documentation on my very elaborate and less than intuitive interface tools.

I am working on it, but there is less time available for this stuff these days. Please bear with me.

See the new tutorial page dedicated to the Pachyderm_Numeric_TimeDomain Method here, and proceed with caution. If in doubt, please contact me.

Remember that under the GPL, you are responsible for your own application of these tools.

We have updated the downloads section. (apologies if anyone has been wondering where to download the software... we were unaware that our web host had deleted the download files... If anyone knows of a reasonably priced and reliable web host, we are open to suggestions).

The release candidate that is linked is not perfect, but we think it is very usable. It includes improvements such as:

- First-order Biot-Tolstoy-Medwin edge diffraction

- Transfer Matrix materials design, including sensitivity to incident direction and a method for finite size correction (don't use a smart material if you are going to use the finite size correction)

- Finite Volume Method, including an eigenfrequency calculator that is great for modal detection, a method for determining the correlation scattering coefficient, and a great system for visualizing wave behavior.

- Multiple source objects, Common Loudspeaker Format (CLF) and arbitrary directionality support.

- Line sources: Traffic noise, and aircraft takeoff and landing.


- Auralizations over any speaker array you can design

- Smart particle animation (this is something we invented ourselves... it is very similar to other particle animations, but at every frame, each particle searches for its nearest neighbors, and they share energy, leading to a more visually coherent result that is easier to read)

- Animation over maps

- Support for Grasshopper, including icons (thanks to Pantea Alembeigi, RMIT)

- Support for IronPython
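To illustrate the smart-particle idea from the list above: at every frame, each particle blends its energy with that of its nearest neighbours, smoothing the field visually. This sketch is an invented approximation of that description; the neighbour count and blend weight are placeholders, and a real implementation would use a spatial index rather than brute-force distances.

```python
import numpy as np

# Hypothetical parameters, not Pachyderm's actual values.
K_NEIGHBOURS = 4
BLEND = 0.5  # how strongly a particle pulls toward its neighbourhood mean

def share_energy(positions, energies):
    """One smoothing frame: blend each particle's energy with its neighbours'.

    positions: (n, 3) array of particle locations.
    energies:  (n,) array of per-particle energy values.
    Note this simple blend does not exactly conserve total energy.
    """
    n = len(positions)
    # Pairwise distances (brute force; a k-d tree would scale better).
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    np.fill_diagonal(dist, np.inf)       # a particle is not its own neighbour
    smoothed = np.empty(n)
    for i in range(n):
        nearest = np.argsort(dist[i])[:K_NEIGHBOURS]
        neighbourhood = energies[nearest].mean()
        smoothed[i] = (1 - BLEND) * energies[i] + BLEND * neighbourhood
    return smoothed
```

Because each output value is a convex combination of input energies, the smoothed field never overshoots the original range, which is what keeps the animation coherent and easy to read.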