
Re: Experiment - Displacement Current's Magnetic Fields



Original poster: "Paul Nicholson by way of Terry Fritz <twftesla-at-qwest-dot-net>" <paul-at-abelian.demon.co.uk>

Hi Terry,

> I guess one could look at the "invention" of dD/dT as a great
> theoretical leap, or a kludge to make the darn thing work ;-))

A fair description.  Picture yourself as a 19th century scientist
trying to make sense of electricity and magnetism. They are each
described separately by Coulomb's law and Biot-Savart, and you know
they are somehow connected because you also have Faraday's and
Ampere's laws.  You have an 'action-at-a-distance' description of
gravity and static electricity based on a 'force field', i.e. the force
that a test particle experiences (presumably instantly) in response to
the 'source' of the field.  You desire a similar thing for B too.  You
know it must be a vector because it has to describe the force on a
test charge.  And you know that B must somehow satisfy Ampere's law.
So you invent a B field vector and use it to describe Faraday's law,
etc, fine. But when you come to look for a mathematical description of
Ampere's law, you hit a big snag.  You find you cannot get a self-
consistent mathematical description of this law.  The obvious form
offers two or more different values for the voltage induced in a
wire, depending on
which surface you integrate the field across.  This was the famous
19th century crisis in electromagnetism.

Maxwell was motivated to plug in a dD/dt term because it gave the
simplest bit of math which described Ampere's law and it still worked
for the other stuff.  The requirement was a self-consistent description
of Ampere's law, that's all.  Maxwell would also have noticed that
putting this term in made the description of Ampere's law identical in
form to Faraday's law, which was also pretty neat.
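
In modern vector notation (not what Maxwell himself used - this is
only a sketch of the logic), the snag and the fix look roughly like
this.  The original form of Ampere's law,

   curl B = mu0 * J

cannot be self-consistent, because div(curl B) = 0 for any field B,
while charge conservation demands div J = -d(rho)/dt, which is not
zero wherever charge is piling up - e.g. between the plates of a
charging capacitor.  That is why different integration surfaces give
different answers.  The extra term repairs it:

   curl B = mu0 * (J + dD/dt)

since div(J + dD/dt) = -d(rho)/dt + d(rho)/dt = 0, using div D = rho,
so every choice of surface now agrees.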

So far so good: the quest for a self-consistent description of the
existing laws of electromagnetics was successful.  If that was as
far as it went, it would be pretty good stuff anyway.  

But then it gets interesting.  You explore the consequences of the
mathematical description you have assembled, and you quickly realise
that they imply some new, hitherto unknown, phenomena: You find that
sources cannot instantly affect test charges - there's a small time
delay, and you find that energy can leave the system through a 'wave'
type variation of the field vectors.
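
To see where those come from, take the source-free case (no charges,
no currents) and combine Faraday's law with the repaired Ampere's
law - again only a sketch in the modern shorthand:

   curl E = -dB/dt         curl B = mu0*eps0 * dE/dt

   curl(curl E) = -d(curl B)/dt = -mu0*eps0 * d2E/dt2

and since div E = 0 here, curl(curl E) = -laplacian E, so

   laplacian E = mu0*eps0 * d2E/dt2

which is a wave equation.  Disturbances of the field travel at speed
c = 1/sqrt(mu0*eps0), about 3e8 m/s - a small but finite delay, and
numerically the measured speed of light.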

The vital lesson from this, which says something deep about
nature:  In striving to ensure the self-consistency of a mathematical
description of a known physical law, the math unavoidably *predicts*
a completely new phenomenon.  Just as 2+2=4 unavoidably predicts that
4+4=8, you find that sources can only affect test charges after a
small but finite delay, and that a B field can raise an E field (and
vice versa) in the complete absence of any movable charges.   

So in one sense, the math descriptions of the laws of physics can be
regarded simply as compact recipes which work, therefore we use them.
But, in addition, they provide strong hints of deeper connections in
nature between apparently unrelated phenomena.  This certainly shook
the 19th century scientific establishment, who were beginning to 
think that they had everything sewn up.

This happens all the time now, and we take it for granted. 
For example, you're a mathematician (Dirac) looking for a self-
consistent description of electrons, one that is compatible with
relativity, but the math forces you to use objects called 'spinors'
rather than scalars or vectors, and these in turn force you to a
prediction that electrons spin, and that there is an undiscovered
particle which is the opposite of the electron - the positron.

As a general rule, we find that if you make up a mathematical 
description of something that is a) the simplest possible, and b)
self-consistent, and c) compatible with existing stuff; then as often
as not, your equations will also *predict* something that you didn't
know beforehand.  You're getting knowledge out of the equations that
you didn't have to put in.

We can experience this process personally:  Make up the simplest 
possible self-consistent description of a TC resonator, using the
existing elementary laws of electromagnetics.  You don't have to use
any math or physics beyond what's taught in high school. The resulting
equations predict a certain voltage profile, and you measure this and
confirm that your calculations are right.  Then you pursue the
mathematical consequences of your equations and just as 2+2=4 implies
6+6=12, so you find that a certain shape of current profile is an
inevitable consequence.  For example a predicted feature is that the
current max is raised a little way above the base, and that sometimes
the effective inductance can be greater than the DC inductance.
Of course, these aren't new discoveries in physics, but they are new
to us, and we can experience the fascination of unsuspected features
of nature being predicted from the requirement of mathematical self-
consistency of the descriptions of the features we already know about.
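
If you want to actually try this, here is a minimal sketch of such a
description - nothing authoritative, just the secondary treated as a
uniform ladder of series inductance with shunt capacitance to ground
and a top load.  The component values are made-up round numbers,
purely for illustration:

# Uniform-ladder sketch of a TC secondary (illustrative values only).
import numpy as np

N    = 200         # number of ladder sections
Ldc  = 80e-3       # total (DC) inductance, H        - assumed value
Cs   = 20e-12      # total shunt capacitance, F      - assumed value
Ctop = 15e-12      # top load capacitance, F         - assumed value

dL = Ldc / N
C  = np.full(N, Cs / N)
C[-1] += Ctop                     # lump the top load onto the last node

# Harmonic solution with the base grounded:
#   V[k] - V[k-1] = w^2 * dL * sum_{m>=k} C[m]*V[m]
# i.e. a generalised eigenproblem  A v = w^2 B v.
A = np.eye(N) - np.eye(N, k=-1)
B = dL * np.triu(np.tile(C, (N, 1)))

w2, vecs = np.linalg.eig(np.linalg.solve(B, A))
w2 = w2.real
order = np.argsort(np.where(w2 > 0, w2, np.inf))
w0 = np.sqrt(w2[order[0]])        # fundamental angular frequency
V  = vecs[:, order[0]].real
V *= np.sign(V[-1])               # make the top voltage positive

# Current entering section k (phasor magnitude, arbitrary scale)
I = np.array([w0 * np.sum(C[k:] * V[k:]) for k in range(N)])

print("ladder fundamental    %.1f kHz" % (w0 / 2 / np.pi / 1e3))
print("naive lumped estimate %.1f kHz" %
      (1 / (2 * np.pi * np.sqrt(Ldc * (Cs + Ctop))) / 1e3))
print("top/base current ratio %.2f" % (I[-1] / I[0]))

The fundamental comes out some way above the naive estimate made from
the DC inductance and the total capacitance, because only a fraction
of the distributed shunt capacitance is 'effective' - the same kind
of effective-versus-DC distinction the equations force on you.  A
ladder this crude won't reproduce the finer features such as the
raised current maximum; that needs a fuller treatment of the mutual
inductances and the capacitance distribution, which are left out
here.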

So the measurement of current profiles of certain coils is a nicety,
to help demonstrate the point, but the outcome will have a certain
inevitability since you've already validated the voltage profile.

And of course, nowhere along the way is any 'belief' required. Belief
is quite a counter-productive thing.  If a belief is true, then it
can be replaced with a self-consistent, validated, reliable and
productive description.  If a belief is false, then it simply obscures
what's really going on and prevents progress.  Reliance on belief is
nearly always fatal for progress - Malcolm's ruler, quarter-wave wire
length, Corum's nonsense about 'coherence', use of DC inductance at
resonance, importance of Q factor in impulsed TCs, the endless
meaningless debates on 'self-capacitance',  the great and silly
lumped vs transmission-line saga inspired by another Corum-ism.  The
list goes on into even crankier beliefs.

Replace all of these beliefs by self-consistent rational descriptions.
Check that they are valid. Then explore the mathematically inevitable
consequences and predict new things.  And so on.  Proceed in this
fashion in order to make up for the 'lost century' of work in this
field.  The process ensures that if there's any exciting 23rd century
physics waiting to be discovered, its discovery will be kind of
inevitable: we will notice it when it comes, because we will have open
minds, unhampered by preconceived entrenched beliefs, and we will have
the wits and methodology to investigate, understand, and exploit it.
--
Paul Nicholson
--