
Re: 3D version of ETesla



Original poster: Jim Lux <jimlux@xxxxxxxxxxxxx>

At 12:39 AM 7/8/2005, you wrote:
Original poster: "Mark Broker" <mbroker@xxxxxxxxxxxxxxxx>

Well, to be honest, all the hard work has already been done! Now it would really just take what is already done in ET6 and revolve it around the coil (Z) axis. The "special boundaries" would be set up using some simple boxes and spheres that could be entered through dialog boxes or something (no hard coding necessary).
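Just to make the boundary-shape idea concrete, here's a rough sketch of what testing a grid point against those user-entered boxes and spheres might look like (in C; the struct and function names are made up for illustration, not anything from ET6):

/* Simple user-entered boundary shapes (illustrative only). */
struct sphere { double cx, cy, cz, r; };
struct box    { double x0, y0, z0, x1, y1, z1; };

/* Return 1 if grid point (x,y,z) falls inside the sphere. */
int inside_sphere(double x, double y, double z, const struct sphere *s)
{
    double dx = x - s->cx, dy = y - s->cy, dz = z - s->cz;
    return dx*dx + dy*dy + dz*dz <= s->r * s->r;
}

/* Return 1 if grid point (x,y,z) falls inside the box. */
int inside_box(double x, double y, double z, const struct box *b)
{
    return x >= b->x0 && x <= b->x1 &&
           y >= b->y0 && y <= b->y1 &&
           z >= b->z0 && z <= b->z1;
}

Any grid cell that tests "inside" would just get pinned to that boundary's potential before each relaxation pass.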



(IIRC, it made the code harder to read, so speed was sacrificed in the name
of readability.)  It's also possible to use fixed-point math with
integers, which "may" crunch faster than floating-point numbers.
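For what it's worth, here's a bare-bones sketch of the fixed-point idea, assuming a 16.16 format (the shift amount and names are just illustrative):

#include <stdint.h>

/* 16.16 fixed point: upper 16 bits integer part, lower 16 bits fraction */
typedef int32_t fix16;

#define FIX_SHIFT 16
#define FLOAT_TO_FIX(f) ((fix16)((f) * (1 << FIX_SHIFT)))
#define FIX_TO_FLOAT(x) ((double)(x) / (1 << FIX_SHIFT))

/* addition and subtraction work as plain integer ops */
static fix16 fix_add(fix16 a, fix16 b) { return a + b; }

/* multiplication needs a wider intermediate, then a shift back down */
/* example: fix_mul(FLOAT_TO_FIX(1.5), FLOAT_TO_FIX(2.0)) == FLOAT_TO_FIX(3.0) */
static fix16 fix_mul(fix16 a, fix16 b)
{
    return (fix16)(((int64_t)a * (int64_t)b) >> FIX_SHIFT);
}

Whether this actually beats the FPU depends a lot on the particular CPU, so it would need benchmarking before committing to it.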

In this case, the "idea" of the program is now fairly well understood. Now, we need speed!!! I used to set the alarm to wake me up at 3:30am so I could load the next test E-Tesla BASIC model on the laptop... Now the computer can do that model in 15 seconds!!! I now run 500 x 500 arrays like it is no big deal!!!

I remember the first time I ran it, in ~March of 2000, I spent about 3 hours on "medium accuracy" on my Athlon 500. I think everyone who uses it is grateful for the speed improvements!



To date I can only recall three instances where I could have used a 3D
version of ETesla - I was wondering how the proximity of a grounded rod
would affect things, I was wondering how a radial streamer/spark affected
the tuning, and yesterday when I was wondering what the potential of a
"floating" object would be.

That "floating" potential things is messy for sure!!! I have no idea how that would be handled....

Basically, the control grid would have a unique marker value for a floating object; the cells adjacent to the object would be averaged, and that average applied to all of the object's cells. I did this as a Laplacian surface experiment in the spring or fall of 2000 (in MathCAD!).
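Something like this could do it in the inner loop (a rough sketch in C, assuming a 2D grid with a mask array marking the object's cells; the array names and sizes are made up):

#define NX 100
#define NY 100

double v[NX][NY];      /* potential grid */
int    object[NX][NY]; /* 1 where the floating object sits, 0 elsewhere */

/* After each relaxation pass, force the floating object to one potential:
   average the free-space cells bordering the object, then copy that
   average into every object cell. */
void equalize_floating_object(void)
{
    double sum = 0.0;
    int count = 0;

    for (int i = 1; i < NX - 1; i++) {
        for (int j = 1; j < NY - 1; j++) {
            if (!object[i][j])
                continue;
            /* look at the four neighbors; only free-space ones contribute */
            if (!object[i-1][j]) { sum += v[i-1][j]; count++; }
            if (!object[i+1][j]) { sum += v[i+1][j]; count++; }
            if (!object[i][j-1]) { sum += v[i][j-1]; count++; }
            if (!object[i][j+1]) { sum += v[i][j+1]; count++; }
        }
    }

    if (count > 0) {
        double avg = sum / count;
        for (int i = 0; i < NX; i++)
            for (int j = 0; j < NY; j++)
                if (object[i][j])
                    v[i][j] = avg;
    }
}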



Maybe iterate the 500 x 500 x 500 array 250000 times :o)))

Perhaps with some careful grid setup, the number of iterations could be significantly reduced. I believe I suggested that for ET6 while it was still a BASIC program (no pun intended!).



Next is to make the program handle "dynamic" cases....  That
will easily blow today's computers clean out of the water...  Have to hook
up a massively parallel array of old GameBoys or something...

My head hurts to merely think about a dynamic case. Massively Parallel Array of GameBoys - that's pretty funny! :o)


----------
If you're serious about going to 3D, you really need to think about variable gridding to reduce the computational load. Conceptually, it's not much more difficult. The trick is in keeping track of which cells are neighbors of which cells, and how big the cells are. The usual approach is to store it as something called an octree (think binary tree, but in 3D), because it lends itself naturally to a cell's neighbor being split into smaller, half-size cells.
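For concreteness, a node in such a tree might look something like this (a rough C sketch with made-up field names, not any particular library's layout):

#include <stdlib.h>

/* One cell in the adaptive grid.  A leaf stores its potential; an
   interior node just points at its eight half-size children. */
struct octree_node {
    double x, y, z;                  /* center of the cell */
    double size;                     /* edge length */
    double potential;                /* field value (meaningful on leaves) */
    struct octree_node *child[8];    /* all NULL for a leaf */
    struct octree_node *parent;
};

/* Split a leaf into eight children, each half the parent's size. */
void subdivide(struct octree_node *n)
{
    for (int k = 0; k < 8; k++) {
        struct octree_node *c = calloc(1, sizeof *c);
        c->size = n->size / 2.0;
        /* offset the child center by a quarter cell along each axis */
        c->x = n->x + ((k & 1) ? 0.25 : -0.25) * n->size;
        c->y = n->y + ((k & 2) ? 0.25 : -0.25) * n->size;
        c->z = n->z + ((k & 4) ? 0.25 : -0.25) * n->size;
        c->potential = n->potential;  /* start from the parent's value */
        c->parent = n;
        n->child[k] = c;
    }
}

Neighbor finding is the fiddly part -- you walk up toward a common ancestor and back down, and a neighbor may turn out to be one bigger cell or several smaller ones.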


Most of the FEA things use some sort of recursion relation:

B(i,j) = k0*B(i,j) + k1*B(i-1,j) + k2*B(i+1,j) + k3*B(i,j-1) + k4*B(i,j+1)

where the k's are chosen according to convergence speed, and the physics underlying it (for instance, in heat transfer, you put in thermal conductivities between cells). That is, the new value for a given cell is a combination of the cell's previous value plus some contribution from the neighboring cells.
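In code, one pass of that kind of update looks something like this for the simplest Laplace case, where k0 is 0 and each neighbor gets a weight of 1/4 (a minimal sketch, not E-Tesla's actual loop; boundary handling is left out):

#define NX 500
#define NY 500

double b[NX][NY];
double b_new[NX][NY];

/* One Jacobi-style relaxation sweep over the interior of the grid. */
void relax_once(void)
{
    for (int i = 1; i < NX - 1; i++)
        for (int j = 1; j < NY - 1; j++)
            b_new[i][j] = 0.25 * (b[i-1][j] + b[i+1][j] +
                                  b[i][j-1] + b[i][j+1]);

    for (int i = 1; i < NX - 1; i++)
        for (int j = 1; j < NY - 1; j++)
            b[i][j] = b_new[i][j];
}

Gauss-Seidel or SOR (updating in place with an over-relaxation factor) converges a lot faster than this plain Jacobi sweep, which is probably where the biggest easy speedup lives.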

Say you had two cells (each half the size) on the j+1 side... the recursion relation would change to something like:

....+k3*B(i,j-1) + k4a * Ba(i,j+1) + k4b*Bb(i,j+1)

This is quite applicable to what we're doing, because we have large swathes of space where there are perfectly good analytical representations of the field across the space, given the boundary conditions. That is, most of the "free space" around the coil falls in this category. The tricky parts are the ones where the field is varying fast, i.e. near objects.

----------
On networks of GameBoys... (or Xboxes, or PS2s, all of which have been mentioned on Slashdot at one time or another): it's not worth it. The real computational power of these devices (which is considerable) is not exposed to the programmer in any useful way.


You'd be better off with a classic cluster kind of computation, or, given the number of people on the list, some sort of grid/cycle-farming approach like SETI@home. You load the program on your computer, and it periodically fetches the next work package from hot-streamer.com. Make the work quanta something like 10 seconds of computation on a 1 GHz machine.
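The client side wouldn't need to be much more than a loop like this (a purely hypothetical sketch -- there is no such work-package API on hot-streamer.com; the stub functions just show the shape of it):

#include <stdio.h>
#include <string.h>

/* Hypothetical placeholders for talking to the server. */
static int fetch_work_package(char *buf, size_t len)
{
    /* a real client would do an HTTP GET to the server here */
    strncpy(buf, "dummy work unit", len - 1);
    buf[len - 1] = '\0';
    return 1;
}

static void compute_work_package(const char *work, char *result, size_t len)
{
    /* ~10 seconds' worth of relaxation sweeps would go here */
    snprintf(result, len, "result for: %s", work);
}

static void submit_result(const char *result)
{
    printf("submitting: %s\n", result);
}

int main(void)
{
    char work[256], result[256];

    for (int n = 0; n < 3; n++) {   /* a real client would loop forever */
        if (fetch_work_package(work, sizeof work)) {
            compute_work_package(work, result, sizeof result);
            submit_result(result);
        }
    }
    return 0;
}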