The next figure presents the geometry of this topic. Two conducting and concentric cylindrical shells, infinitely long, constitute a system that has some capacitance, C.

Begin by noting that we can only determine the capacitance per unit length, C/L, because the total capacitance of an infinitely long system is infinite and therefore not a physically meaningful value. The definition of capacitance is,
$latex C = \frac{Q}{V} &s=2$
where Q is the charge of the system and V is its potential. From this expression it is seen that capacitance is the amount of charge that can be stored in a system by holding it at a potential, V. The charge, Q, is the amount held separated, not the net total. For example, in a parallel plate capacitor the charge used is equal to the absolute value of the amount on just one of the plates (if we used the net charge then we would have zero).

To determine the capacitance of this system we will place some charge on the cylinders. Put +Q on the center cylinder and -Q on the outer. The net charge of the system remains zero. This charge will distribute itself over the surface of the cylinders. Our expression for the capacitance per unit length of this system is,
$latex \frac{C}{L} = \frac{Q}{VL} &s=2$
where Q = +Q is known (we put it there ourselves). It remains to determine the potential, V, that is maintained between the cylinders by the separation of this charge.

Since we know where all the charge is in this system it is possible to determine the electric field everywhere. Knowing the electric field, **E**, between the cylinders allows for the calculation of the potential through the relation,
$latex V = -\int \vec{E}\cdot d\vec{l} &s=2$
where this represents the potential difference between the end points of the line **l**. The line taken here runs along the radial coordinate connecting the cylinders. By symmetry, this radial line gives the potential difference between the cylinders at every axial position and angle.

The symmetry of the system is further exemplified by the electric field pattern of the inner cylinder shown in the figure below. The infinitely long cylinder produces an electric field directed along the **r** vector of the cylindrical coordinate system, with a magnitude that depends only on the radial distance.

Returning to the problem of calculating the electric field, recall Gauss’ law,
$latex \oint \vec{E}\cdot d\vec{A} = \frac{Q_{enc}}{\epsilon_0} &s=2$
where Q_{enc} is the total charge enclosed by the closed surface A.

Since we want to determine the electric field between the cylinders it is necessary to find a surface whose normal is everywhere parallel to the electric field. The dot product in the above expression then reduces to a simple product of magnitudes. Figure 3 illustrates the surface that satisfies this requirement. The normal vector of the Gaussian surface, **A**, is everywhere parallel to the electric field vector.

In the above figure, only one example electric field vector has been drawn. This field is everywhere parallel to the radial coordinate vector. The charge of the outer cylinder does not contribute to the total charge enclosed by the surface. The enclosed charge is determined entirely from that of the inner cylinder. This cylindrical surface is three dimensional and represents another cylindrical shell. The entire charge of the innermost cylinder is enclosed by our surface, Q_{enc} = +Q.

The differential area element, d**A**, can be rewritten in terms of this geometry. The total area of the Gaussian surface is given by the expression for the surface area of a cylinder of radius r. The surface is defined at a fixed radial position, so only the axial (z) and azimuthal (Φ) coordinates are necessary to compute the total area. This is shown below.
$latex A = \oint dA = \int_0^L dz \int_0^{2\pi} r\,d\Phi = 2\pi r L &s=2$
Returning to Gauss’ law, let us solve each side separately,
$latex \oint \vec{E}\cdot d\vec{A} = E\,(2\pi r L), \qquad \frac{Q_{enc}}{\epsilon_0} = \frac{Q}{\epsilon_0} &s=2$
where the last step writes the length of the infinitely long cylinders as L; this length will cancel once we form the capacitance per unit length.

Equating these results leads to,
$latex E = \frac{Q}{2\pi\epsilon_0 L r} &s=2$
where this is directed along the radial coordinate vector.
$latex V = -\int_b^a \vec{E}\cdot d\vec{l} = -\frac{Q}{2\pi\epsilon_0 L}\int_b^a \frac{dr}{r} = -\frac{Q}{2\pi\epsilon_0 L}\ln\!\left(\frac{a}{b}\right) &s=2$
where it is very important to remember the reasoning behind the order of the limits in the integral.

By convention, the electric potential is taken to be zero at infinity. When calculating the potential it is necessary to perform the line integration beginning at infinity (or just as far away as possible) and work your way back in. That is why the integration limits proceed from the outermost point, b, and end at the innermost point, a.

The result contains ln(a/b), which is negative since a < b, while capacitance is positive definite. The identity ln(α/β) = -ln(β/α) lets us write the result with a positive logarithm. The potential between the cylinders, after we place equal and opposite amounts of charge on them, is,
$latex V = \frac{Q}{2\pi\epsilon_0 L}\ln\!\left(\frac{b}{a}\right) &s=2$
and we can now return to the earlier generic expression to calculate the capacitance per unit length of this system.
$latex \frac{C}{L} = \frac{Q}{VL} = \frac{2\pi\epsilon_0}{\ln(b/a)} &s=2$
where the capacitance per unit length is not a function of either the charge on the system or L itself. The only parameters that matter are the radii of the cylindrical shells. This should be expected because capacitance is a feature of a system’s geometry and does not depend on applied charges or potentials.
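As a quick numerical check of that geometric dependence, here is the result in Python; the radii used are arbitrary illustrations, not values from the text.

```python
import math

eps0 = 8.854e-12  # permittivity of free space [F/m]

def cap_per_length(a, b):
    """Capacitance per unit length of coaxial cylinders with inner radius a, outer radius b."""
    return 2 * math.pi * eps0 / math.log(b / a)

# example: inner radius 1 mm, outer radius 3 mm -> ~5.06e-11 F/m
c1 = cap_per_length(1e-3, 3e-3)

# scaling both radii by the same factor leaves C/L unchanged (only the ratio matters)
c2 = cap_per_length(2e-3, 6e-3)
```

Note that doubling both radii leaves the result untouched, confirming that only the ratio b/a enters.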

While the above gives an example from each case, radiative and non-radiative, the vast majority of discussions in electrodynamics are actually about electrostatics, since radiation is neglected. Whether this is a valid approximation will be discussed in this topic. We will examine the case of an electron undergoing a well known acceleration and then determine the energy involved in its radiation.

Consider an electron that is released from a zero velocity state (i.e. from rest) and allowed to fall to the Earth. Let this electron fall a distance of one meter under only the influence of gravity. Let us determine how much energy this electron will radiate and whether that should have any effect on the kinematics of its fall. The setup is shown very simply in figure 1.

This is essentially a one dimensional system. The kinematic equations for a one dimensional system (coordinate given by *y*, which is chosen because this is a vertical system) in which the acceleration (*a*) is constant are,
$latex v = v_0 + at \qquad (1) &s=2$

$latex y = y_0 + v_0 t + \frac{1}{2}at^2 \qquad (2) &s=2$
where *v* is the velocity of the object, *t* is time, and a subscript *0* denotes the initial value of some variable.

One physical concept immediately presents itself. The expressions in Eqs. 1 and 2 assume that all of the potential energy, the gravitational potential energy in this case, is converted into kinetic energy. If the object converts its gravitational potential energy into other forms of energy, such as radiation, then the kinematic expressions will not accurately describe the object’s motion. From an experimentalist’s point of view we could set up a measurement of the electron’s motion during a one meter fall and then compare it to the motion predicted by the kinematic equations. The theoretical equivalent of this is to calculate the energy converted into radiation for an electron and then compare it to the kinetic energy.

If some of the gravitational potential energy of the electron is converted to radiative energy, then that means less is converted to kinetic energy. If less is converted to kinetic energy, then the particle will not be moving as fast as predicted by the kinematic equations. The kinetic energy, *W _{k}*, of the electron is given by,
$latex W_k = \frac{1}{2}m_e v^2 &s=2$
where *m _{e}* is the electron mass. Radiative losses mean that
$latex W_{k,\,actual} < W_{k,\,max} &s=2$
Put another way, the maximum possible kinetic energy for the electron is determined by converting all of its gravitational potential energy into kinetic energy. The change in potential energy, *ΔU _{g}* over the distance of the fall is given by,
$latex \Delta U_g = m_e g\,(y_0 - y_f) = m_e g\,\Delta y &s=2$
where *y _{f}* is the final location of the electron and *y _{0}* is its initial location, separated here by the one meter fall. Numerically,
$latex \Delta U_g = (9.11\times10^{-31}\,[kg])(9.8\,[m\ s^{-2}])(1\,[m]) \approx 8.9\times10^{-30}\,[J] &s=2$
where the units of each variable are given in square brackets. This is an incredibly small amount of energy. Electrons have little mass so this is reasonable.

The next issue is to calculate the amount of energy converted to radiation by the electron. While the energy of radiation is not a commonly referenced expression, the power, *P*, radiated by a single charged particle undergoing acceleration is often used. This is known as the Larmor formula and is given by,
$latex P = \frac{\mu_0 q^2 a^2}{6\pi c} &s=2$
where *μ _{0}* is the permeability of free space,
*q* is the charge of the particle, *a* is the magnitude of its acceleration, and *c* is the speed of light in vacuum.
The power radiated by the falling electron is,
$latex P = \frac{\mu_0 e^2 g^2}{6\pi c} \approx 5.5\times10^{-52}\,[W] &s=2$
where the power is given in units of Watts [W], which are equivalent to energy per unit time (*W ≡ E / t*). By determining the time it takes the electron to fall the prescribed distance of one meter, we will be able to determine the total amount of energy that it has radiated away. Conceptually, be mindful that the electron radiates based on its acceleration, which is constant as it falls in the gravitational field. The electron emits radiation at a constant power during its fall. Knowing the total time of the fall allows us to calculate the total energy it radiated because it did so at a constant rate.

To determine the amount of time for which the electron fell, we return to the kinematic equations. In this case our approximations work against us. Since the kinematic equations provide the maximum velocity of the electron it is reasonable to assume that they similarly provide the minimum time for this fall. That would decrease the theoretical radiative energy. Being aware of that we should remember that our theoretical experiment here is to determine whether the radiative energy is anywhere close to the magnitude of the kinetic energy. If they are close then we must revisit all of this physics. If they are not comparable, then these corrections must be minor and generally ignorable.

The time of fall for the electron can be found in the expressions at the beginning of this problem. The initial velocity is zero as given in the setup.
$latex y - y_0 = \frac{1}{2}at^2 \quad\Rightarrow\quad t = \sqrt{\frac{2\,\Delta y}{g}} &s=2$
where there should be a negative sign on the left side (*y – y _{o} = y_{f} – y_{o} = -Δy*), but that cancels with the negative sign of the acceleration. For a coordinate system in which the positive *y* direction points upward, both the displacement and the acceleration, *a = -g*, are negative, so the two signs cancel.

The time of fall for the electron is,
$latex t = \sqrt{\frac{2\,(1\,[m])}{9.8\,[m\ s^{-2}]}} \approx 0.45\,[s] &s=2$
and this is not large enough to make the total energy radiated comparable to the kinetic energy.

The energy radiated, *W _{r}*, by the electron during this fall is,
$latex W_r = P\,t \approx (5.5\times10^{-52}\,[W])(0.45\,[s]) \approx 2.5\times10^{-52}\,[J] &s=2$
and now we may compare the different energy values.

The kinetic energy is on the order of *10 ^{-30}* [J] while the radiative energy is on the order of *10 ^{-52}* [J]. The radiated energy is smaller by more than twenty orders of magnitude, so neglecting radiation when treating the kinematics of the falling electron is an excellent approximation.
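The comparison above can be reproduced in a few lines of Python; the physical constants are standard SI values to about four significant figures.

```python
import math

# physical constants (SI)
m_e = 9.109e-31       # electron mass [kg]
q_e = 1.602e-19       # elementary charge [C]
mu0 = 4e-7 * math.pi  # permeability of free space [H/m]
c = 2.998e8           # speed of light [m/s]
g = 9.81              # gravitational acceleration [m/s^2]
d = 1.0               # fall distance [m]

# maximum kinetic energy: all gravitational potential energy converted
W_k = m_e * g * d

# Larmor power for constant acceleration g, and the kinematic fall time
P = mu0 * q_e**2 * g**2 / (6 * math.pi * c)
t = math.sqrt(2 * d / g)

# total energy radiated at constant power during the fall
W_r = P * t
ratio = W_r / W_k
```

Running this gives W_k of order 10⁻³⁰ J and W_r of order 10⁻⁵² J, with a ratio of roughly 10⁻²³.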

†Technically, the mass of the electron changes depending on its velocity, but for the types of velocities we are considering here that will not be a significant effect. This topic focuses on the difference between electrostatic and electrodynamic assumptions, not relativity, though an analogous topic may be written later.

One problem with using aluminum to make these probes is evident in the figure below. These are photographs of a probe after removing it from the plasma (it ran for a couple days inside the vacuum chamber).

If you noticed the black marks near the center of both images then you have spotted the problem. Those images show the “top” and “bottom” of my planar triple probe. There is no absolute orientation for the probe, so that is essentially a view of both burned sides.

The following figure is a diagram, from the triple probe construction download, that illustrates the basic design features of these probes. The “Head Holder” is the piece that is made from aluminum. It is used to provide centering alignment for the probe head. On a side note, the conducting tips of the probe head are electrically insulated from the probe shaft by alumina. Not to be confused with aluminum, alumina is a hard ceramic with good insulation and vacuum properties. It held its form even under the scorching laid upon it by the aluminum centering piece (head holder).

The burn marks on the probe are likely caused by sputtering from the aluminum. Ions in the plasma impact the aluminum with enough force to knock some of the aluminum atoms out of the solid. In effect, it appears as though the aluminum is arcing out into the ceramic. The problem area is small, as shown in the following image. Notice how far away the burn area is from the probe tips themselves (≈ 7 cm). This is good because the further away the burn occurs, the less likely it is to affect the acquired probe signal. While there have been no obvious signal effects, the aluminum pieces are being replaced with stainless steel that is unlikely to demonstrate this behavior. If it does, then I will have to think of something else to remedy the problem.

Significant sputtering does not seem to be a problem for probes in which the volume between the aluminum and ceramic is minimal. The photograph below shows a probe for which there is little to no space between the protruding ceramic and aluminum centering piece. This suggests that using a different material may not solve the problem. It is possible that plasma entering through the gap between the centering piece and ceramic is related to the burn effect.

The lesson is, whether it is the root of my problem or not, that aluminum is not a good material for plasma facing components. Unless, of course, your intention is to sputter, in which case that is a great choice. If there is any interesting (i.e., burn mark) behavior from the stainless pieces I will put some pictures here.

Consider a grounded sphere of radius *a* that is made of a conducting material. This conducting, and solid, sphere is then covered by a shell made of an insulating material. The insulating shell has radius *b*, where it is required that *b > a* (treat the shell as though it has no thickness). The electrostatic potential on the insulating shell, **Φ**_{shell}, is known to be,
$latex \Phi_{shell} = \alpha\sin^2\theta &s=2$
where α is a constant and θ is the angular coordinate of the spherical coordinate system. Figure 1 shows the setup for this topic.

For this system we will find the electrostatic potential everywhere in space and then determine the charge densities on the spherical objects.

To begin, the only charge densities we can possibly solve for are surface charge densities. The enclosing shell is just that, a shell, and therefore has no inner volume in which charge may be found. This is just part of the definition for this setup, so there is no physical intuition to be gained. The solid sphere is a conductor and any charge density within it will reside on the surface. This is part of the definition of conductors and is a generally applicable piece of electrodynamic theory. The volume charge density inside of the solid sphere is zero, though it remains to solve for the charge density on the surface of this sphere (i.e., at *r = a*).

It is possible to solve for the charge densities after we have determined the value of the potential everywhere in space. For this azimuthally symmetric system the form of the potential is given by a sum over Legendre polynomials. The potential of the solid sphere is given, as is the potential at infinity, so the potential is described by,
$latex \Phi(r \le a) = 0, \qquad \Phi(r \to \infty) = 0 &s=2$
and it remains to solve for the regions between the objects and beyond the shell. These expressions serve as two of the three boundary conditions given. The third boundary condition is given by the potential on the shell.

Between the objects, for *a ≤ r ≤ b*, the solution for the potential is given by the complete sum over Legendre polynomials,
$latex \Phi(r,\theta) = \sum_{l=0}^{\infty}\left(A_l r^l + \frac{B_l}{r^{l+1}}\right)P_l(\cos\theta) \qquad (1) &s=2$
where *P _{l}(cosθ)* represents the Legendre polynomial of order *l*.

The functional form of the potential for the region *b ≤ r ≤ ∞* begins with the expression in Eq. 1.
$latex \Phi(r,\theta) = \sum_{l=0}^{\infty}\left(C_l r^l + \frac{D_l}{r^{l+1}}\right)P_l(\cos\theta) \qquad (2) &s=2$
where *C, D* are the same type of constants as *A, B*, but not equal to them. In this case the potential must go to zero as *r → ∞*, therefore all of the constants, *C _{l}*, must be zero. The form of the potential in this region simplifies to,
$latex \Phi(r,\theta) = \sum_{l=0}^{\infty}\frac{D_l}{r^{l+1}}P_l(\cos\theta) \qquad (3) &s=2$
where all that remains is to perform some algebra to rewrite these with the numerical values of the *A, B*, and *D* constants. The phrase “some algebra” usually means a significant amount of tedious labor is on its way, as is the case here.

Beginning with Eq. 3, we know the exact value this expression must equal at the position *r = b*. This is given in the setup,
$latex \sum_{l=0}^{\infty}\frac{D_l}{b^{l+1}}P_l(\cos\theta) = \alpha\sin^2\theta \qquad (4) &s=2$
and it seems difficult to equate these in a manner that allows for the determination of the infinite values of *D*. It is reasonable to assume that there are not an infinite number of *D*‘s for which a value must be found. Rewriting the *α sin ^{2}θ* term as a function of Legendre polynomials will help simplify matters. Since our Legendre polynomials take the argument cosθ, the two that are needed are,
$latex P_0(\cos\theta) = 1, \qquad P_2(\cos\theta) = \frac{1}{2}\left(3\cos^2\theta - 1\right) \qquad (5) &s=2$
and now we can rewrite,
$latex \alpha\sin^2\theta = \alpha\left(1 - \cos^2\theta\right) = \frac{2\alpha}{3}\left[P_0(\cos\theta) - P_2(\cos\theta)\right] \qquad (6) &s=2$
where we have only needed to employ the expressions for two of the Legendre polynomials.
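This rewrite is easy to verify numerically; the sketch below checks the identity in Python at a few arbitrarily chosen angles.

```python
import math

def P0(x):
    """Legendre polynomial of order 0."""
    return 1.0

def P2(x):
    """Legendre polynomial of order 2."""
    return 0.5 * (3 * x**2 - 1)

# check sin^2(theta) = (2/3)[P0(cos theta) - P2(cos theta)] at several angles
angles = [0.2, 0.9, 1.7, 2.8]
err = max(abs(math.sin(t)**2 - (2.0 / 3.0) * (P0(math.cos(t)) - P2(math.cos(t))))
          for t in angles)
```

The maximum error is at the level of floating point roundoff.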

Combining the results of Eq. 4 and Eq. 6 we have,
$latex \sum_{l=0}^{\infty}\frac{D_l}{b^{l+1}}P_l(\cos\theta) = \frac{2\alpha}{3}\left[P_0(\cos\theta) - P_2(\cos\theta)\right] \qquad (7) &s=2$
and the left side shows us that only the *l = 0* and *l = 2* terms must be kept. Technically, it means that *D _{l} = 0* for every value of *l* other than *l = 0* and *l = 2*.

The non-zero *D* values are found by writing out the sum in Eq. 7 and equating terms with the same *l* values,
$latex \frac{D_0}{b} = \frac{2\alpha}{3} \;\Rightarrow\; D_0 = \frac{2\alpha b}{3}, \qquad \frac{D_2}{b^3} = -\frac{2\alpha}{3} \;\Rightarrow\; D_2 = -\frac{2\alpha b^3}{3} &s=2$
where this provides the solution all the way out to *r → ∞* and we may add a piece to our map of the potential.
$latex \Phi(b \le r \le \infty) = \frac{2\alpha b}{3}\,\frac{1}{r} - \frac{2\alpha b^3}{3}\,\frac{P_2(\cos\theta)}{r^3} &s=2$
where it is seen that as *r* approaches infinity, the inverse *r* terms will quickly go to zero.

At the position *r = b*, the value of Eq. 1 must also be equal to Eq. 6. Equating these gives,
$latex \sum_{l=0}^{\infty}\left(A_l b^l + \frac{B_l}{b^{l+1}}\right)P_l(\cos\theta) = \frac{2\alpha}{3}\left[P_0(\cos\theta) - P_2(\cos\theta)\right] &s=2$
and we are once again in a position to remove all but two values of *l*.
$latex A_l = B_l = 0 \qquad \text{for } l \ne 0,\,2 &s=2$
The previous equality simplifies to,
$latex A_0 + \frac{B_0}{b} + \left(A_2 b^2 + \frac{B_2}{b^3}\right)P_2(\cos\theta) = \frac{2\alpha}{3} - \frac{2\alpha}{3}P_2(\cos\theta) &s=2$
and there are two more equations found by equating similar Legendre polynomial terms.
$latex A_0 + \frac{B_0}{b} = \frac{2\alpha}{3}, \qquad A_2 b^2 + \frac{B_2}{b^3} = -\frac{2\alpha}{3} \qquad (8) &s=2$
and the issue now is that we have four unknowns and only two equations.

Another set of equations containing these unknowns may be achieved by examining the remaining boundary at *r = a*. The potential is zero here, but still described by Eq. 1. This leads to,
$latex A_0 + \frac{B_0}{a} = 0, \qquad A_2 a^2 + \frac{B_2}{a^3} = 0 \qquad (9) &s=2$
and these are combined with the previous two equations to solve for the *A* and *B* constants.

Inserting Eq. 9 into Eq. 8 gives,
$latex B_0 = -\frac{2\alpha a b}{3(b-a)}, \qquad B_2 = \frac{2\alpha a^5 b^3}{3\left(b^5 - a^5\right)} &s=2$
and the corresponding values of the *A*‘s from Eq. 9 are,
$latex A_0 = \frac{2\alpha b}{3(b-a)}, \qquad A_2 = -\frac{2\alpha b^3}{3\left(b^5 - a^5\right)} &s=2$
With all of these constants known and the form of Eq. 1, the potential may be written,

where the full expression for the region between the spheres is,
$latex \Phi(a \le r \le b) = \frac{2\alpha b}{3(b-a)}\left(1 - \frac{a}{r}\right) + \frac{2\alpha b^3}{3\left(b^5 - a^5\right)}\left(\frac{a^5}{r^3} - r^2\right)P_2(\cos\theta) &s=2$
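A numerical spot check of the constants *A₀, B₀, A₂, B₂* derived above is reassuring after that much algebra. The sketch below, in Python, uses hypothetical example values *a = 1, b = 2, α = 1* and verifies both boundary conditions of the between-spheres solution.

```python
import math

# hypothetical example values for the radii and the shell-potential constant
a, b, alpha = 1.0, 2.0, 1.0

def P2(x):
    """Legendre polynomial of order 2."""
    return 0.5 * (3 * x**2 - 1)

# constants as derived in this section
A0 = 2 * alpha * b / (3 * (b - a))
B0 = -2 * alpha * a * b / (3 * (b - a))
A2 = -2 * alpha * b**3 / (3 * (b**5 - a**5))
B2 = 2 * alpha * a**5 * b**3 / (3 * (b**5 - a**5))

def phi_between(r, theta):
    """Potential for a <= r <= b."""
    u = math.cos(theta)
    return A0 + B0 / r + (A2 * r**2 + B2 / r**3) * P2(u)

# boundary checks: zero on the grounded sphere, alpha*sin^2(theta) on the shell
angles = [0.3, 1.0, 2.2]
err_a = max(abs(phi_between(a, t)) for t in angles)
err_b = max(abs(phi_between(b, t) - alpha * math.sin(t)**2) for t in angles)
```

Both errors come out at the level of floating point roundoff, confirming the constants.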
A key expression for determining the surface charge density, *σ*, at a boundary relates the electric fields above, *E _{above}*, and below, *E _{below}*, that surface,
$latex \sigma = \epsilon_0\left(\vec{E}_{above} - \vec{E}_{below}\right)\cdot\hat{n} &s=2$
where * n* is the normal vector pointing away from (i.e., above) the surface and the electric field is related to the potential by,
$latex \vec{E} = -\nabla\Phi &s=2$
and since the boundaries are spheres, only the radial component of the gradient survives the dot product with the normal vector. This becomes a one dimensional problem and a further simplification is available,
$latex \sigma = -\epsilon_0\left(\frac{\partial\Phi_{above}}{\partial r} - \frac{\partial\Phi_{below}}{\partial r}\right) &s=2$
As we have already determined that charge density exists at *r = a, b*, let us now apply the previous equation to determine the value of these densities. For *r = a* the region below the surface is *0 ≤ r < a* (with a potential of zero) and the region above the surface is *a < r ≤ b*. At this surface the normal vector is directed along the radial direction, * n = +r*. In the following steps the substitution of *r = a* is made after taking the radial derivative.
$latex \frac{\partial\Phi_{above}}{\partial r} = \frac{2\alpha b}{3(b-a)}\,\frac{a}{r^2} - \frac{2\alpha b^3}{3\left(b^5 - a^5\right)}\left(\frac{3a^5}{r^4} + 2r\right)P_2(\cos\theta) \qquad (10) &s=2$

$latex \sigma(r = a) = -\epsilon_0\,\frac{\partial\Phi_{above}}{\partial r}\bigg|_{r=a} = -\frac{2\alpha\epsilon_0 b}{3a(b-a)} + \frac{10\,\alpha\epsilon_0 a b^3}{3\left(b^5 - a^5\right)}P_2(\cos\theta) &s=2$
where this is the charge density of the surface at *r = a* and *P _{2}* is given in Eq. 5.

The same procedure is applicable in the pursuit of the charge density on the shell located at *r = b*. In this case the below region is that between the spheres and the above region is the solution for *b ≤ r ≤ ∞*. Our work is made easier because the derivative for the below region has been done in Eq. 10, we begin at that step and replace the *r*‘s with *b*‘s (also multiply the entire expression by *-1* because it was the above region in the previous case) for the present situation.

We have,
$latex \sigma(r = b) = \epsilon_0\left(\frac{\partial\Phi_{below}}{\partial r} - \frac{\partial\Phi_{above}}{\partial r}\right)\bigg|_{r=b} = \frac{2\alpha\epsilon_0}{3(b-a)} - \frac{10\,\alpha\epsilon_0 b^4}{3\left(b^5 - a^5\right)}P_2(\cos\theta) &s=2$
and this is the charge density on the surface of the insulating shell.

In this topic determination of the potential is made considerably easier by taking advantage of symmetry. For azimuthally symmetric systems such as this, a solution of Laplace’s equation (the potential here) is given by a sum over Legendre polynomials. This generic result serves as the starting point and allows us to determine the fields in a fairly direct manner.

A cylinder of radius, R, and length, L, in which R << L (i.e. the cylinder is so long that we will not be concerned with edge effects) is known to have a magnetization, **M**, given by **M** = α r^{2} **Φ**, where α is a constant. Figure 1 shows this cylinder. Note that the drawing does not accurately reflect that the cylinder is much longer than it is wide.

The goal is to determine the magnetic field of this object. In general, magnetic fields are determined according to the kind of source we expect. Currents are one known source for magnetic field, while time dependent electric fields are another. In this case, the magnetization of an object is known to be related to currents (bound currents to be precise) and there is no mention of any time dependence so it is sensible to proceed with the intention of solving for the magnetic field using currents as a source. Once the currents are known, the magnetic field may be calculated using Ampere’s law,
$latex \oint \vec{B}\cdot d\vec{l} = \mu_o I_{enc} &s=2$
where the integral is taken over the entire closed loop of length, l, μ_{o} = 4π × 10^{-7} H m^{-1} is the permeability of free space, and I_{enc} is the current enclosed by the loop, l.

The magnetization depends on the radial coordinate of the cylinder. At the position, r = 0, the magnetization is zero. Magnetization is a property of the material, so it is also equal to zero outside of the cylinder (if there is no cylinder, then there cannot be any magnetization). We should now realize that the currents must be determined in two different regions; inside the cylinder and outside of the cylinder. Building on this intuition we should also expect that the magnetic field must be determined as a function of space both inside and outside of the cylinder.

Magnetization is related to bound currents. Since the cylinder has a magnetization throughout its interior, both a bound volume current density and a bound surface current density must be examined. The bound volume current density, **J**_{b}, is related to the magnetization by, **J**_{b} = **∇** × **M**. The bound surface current density is related to the magnetization by **K**_{b} = **M** × **n**, where **n** is the unit vector that is normal to the surface being investigated (it is equivalent to n-hat as seen in the equations that are presented as images).

Begin by solving for the bound volume current density. In the region outside of the cylinder, r > R, the magnetization is zero and therefore, **J**_{b} = 0. Inside the cylinder we have,
$latex \vec{J}_b = \nabla\times\vec{M} = \frac{1}{r}\frac{\partial}{\partial r}\left(r\cdot\alpha r^2\right)\hat{z} = 3\alpha r\,\hat{z} &s=2$
Next, move on to the bound surface current. There are three surfaces of the cylinder to evaluate; the tubular surface of length, L, and the two circular faces. The cylinder has been purposefully made very long so that end effects may be ignored. We’ll consider the bound surface current densities on the end faces anyway, for the sake of further developing our physical intuition with respect to bound current paths. The circular faces have unit normals of ± **z**, depending on which end we are looking at (figure 1 illustrates this, but be sure to understand that the unit normal to a surface is always directed away from the object). At the end face shown in figure 1 the bound surface current density is given by,
$latex \vec{K}_b = \vec{M}\times\hat{n} = \alpha r^2\left(\hat{\phi}\times\hat{z}\right) = \alpha r^2\,\hat{r} &s=2$
and for the other end face the surface current density is along the –**r** direction (technically, this assumes that the cylinder is centered on the position (r,z) = (0,0) and that we are examining the face located at z = L / 2, which was not mentioned in the beginning because end effects were said to be unimportant). Once again, this does not matter for determining the magnetic field of this object because we have decided to purposely neglect contributions from the end of the cylinder. The reason for doing this is that the solution will be simpler if we concern ourselves with only the central region of the cylinder. It is important to note that even bound currents form closed loops. Figure 2 provides an illustration of how this is accomplished in the present discussion. Notice that on one end face the bound surface current flows outward from the center while at the other face it flows inward towards the center (r = 0). We know that the bound volume current density increases with radial position and always flows along +**z**. The red lines in figure 2 represent the bound current and show that this current forms closed loops within the cross sectional plane of the cylinder. Now that this has been mentioned, please ignore the effects of these end faces while we continue to examine the magnetic field of this object.

The bound surface current that matters to us is the one along the cylindrical side of this object. This cylindrical surface is located at r = R and the bound surface current density is given by,
$latex \vec{K}_b = \vec{M}\times\hat{n}\Big|_{r=R} = \alpha R^2\left(\hat{\phi}\times\hat{r}\right) = -\alpha R^2\,\hat{z} &s=2$
All of the current is flowing along the z axis. Neglecting the edge effects means we may ignore the r axis directed currents at the edge faces. This z axis symmetry allows for a direct application of Ampere’s law. Drawing a circular loop centered on the z axis (r = 0) will enclose some amount of current flowing along the z axis. As we expand this loop in radius, it will enclose more current. Once the loop is larger than the radius of the cylinder (r > R) there is no more current that can be enclosed. The two Amperian loops that must be drawn to determine the magnetic field everywhere in space are one inside the cylinder and one completely outside.

For the loop taken inside the cylinder, only bound volume current is enclosed. Ampere’s law gives,
$latex B\,(2\pi r) = \mu_o\int_0^r 3\alpha r'\,(2\pi r')\,dr' = 2\pi\mu_o\alpha r^3 \quad\Rightarrow\quad \vec{B} = \mu_o\alpha r^2\,\hat{\phi} &s=2$
This is the value of the magnetic field inside the cylinder. The only component of magnetic field inside the cylinder is along the **Φ** direction. The next step is to increase the size of the Amperian loop until it is larger than the radius of the cylinder. The total enclosed current will include contributions from all of the bound volume current density and the bound surface current located at r = R. The left side of Ampere’s law remains the same as that found previously.

Before performing this calculation, a mention of the bound surface current must be made. When dealing with a bound surface current density, the total bound surface current is found by multiplying this density and the length over which it passes. Surface current density has units of Amperes per length ([A] / [m]), so multiplying this by the length across which the current flows results in a value that has units of current ([A]). The current in question is the bound surface current that is flowing along the z axis. This current is passing over a length that is the circumference of the cylinder. Note that this length is determined according to a line that is everywhere perpendicular to the surface current density. In the lines below, **l**_{cir} is the circumferential length of the cylinder.
$latex I_{enc} = \int_0^R 3\alpha r\,(2\pi r)\,dr + K_b\,l_{cir} = 2\pi\alpha R^3 - \alpha R^2\,(2\pi R) = 0 &s=2$
Outside the cylinder there is no magnetic field. The magnetic field everywhere in space is given by,

$latex \vec{B} = \mu_o\alpha r^2 \hat{\phi} \qquad \text{for } 0 < r < R &s=2$

$latex \vec{B} = 0 \qquad \text{for } r > R &s=2$

This topic shows that the magnetization of an object is related to the magnetic field the object generates according to the bound currents. Magnetization is a concept that exists to make it easier to describe bound currents in one term. Imagine how difficult it would be to describe the bound currents throughout the cylinder’s interior and along its surface. Instead of that, we define the magnetization and then leave it to the interested party to solve for the equivalent bound currents. Once these currents are known, the magnetic field is found using techniques common to the current/magnetic field relationship.
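The exact cancellation of the enclosed current outside the cylinder can also be checked numerically. The Python sketch below uses arbitrary example values for α and R, integrates the volume current density over the cross section with a simple midpoint rule, and compares it to the surface current.

```python
import math

# hypothetical example values for the magnetization constant and cylinder radius
alpha, R = 1.0, 0.5

# volume bound current: J_b = 3*alpha*r along z, integrated over the cross section
n = 100000
dr = R / n
I_volume = 0.0
for i in range(n):
    r = (i + 0.5) * dr                      # midpoint of each radial strip
    I_volume += 3 * alpha * r * (2 * math.pi * r) * dr

# surface bound current: K_b = -alpha*R^2 along z, flowing over circumference 2*pi*R
I_surface = -alpha * R**2 * (2 * math.pi * R)

# analytically, I_volume = 2*pi*alpha*R^3, which cancels I_surface exactly
```

The two contributions sum to zero to within the accuracy of the numerical integration.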

The magnitude of the magnetic field is plotted in figure 3.

This plot was generated with the following IDL code,

r = FINDGEN( 1e3 ) / 1e3
a = r
cylr = 600
a[ cylr : 999 ] = 0
plomake, r, a^2, yr = [ 0., 0.4 ], ys = 4, xs = 4
AXIS, XAXIS = 0, XTICKNAME = ['0', ' ', ' ', 'R', ' ', ' '], COLOR = 0, CHARTHICK = 2, CHARSIZE = 2
AXIS, YAXIS = 0, COLOR = 0, CHARSIZE = 2, CHARTHICK = 2, YTICKNAME = ['0', ' ', ' ', ' ', ' ']
XYOUTS, 0.2, 0.02, 'Radial Position', COLOR = 0, CHARTHICK = 2, CHARSIZE = 2, /NORMAL
PLOTS, [ 0, .7 ], [ .36, .36 ], /DATA, COLOR = 18, THICK = 3
XYOUTS, .72, .35, '!4l!3!Io!N!4a!3R!E2!N', COLOR = 18, CHARTHICK = 2, CHARSIZE = 3, /DATA
XYOUTS, .15, .55, '|B!I!4u!3!N|', /NORMAL, CHARSIZE = 4, CHARTHICK = 2, COLOR = 0
END

My test data set is shown below. This is a time series of 4096 points with a sampling rate of 1.5625 MS/s (1.5625 × 10^{6} samples per second). These parameters are chosen just to more closely match those in my work, but the filtering process is the same regardless. The time series is just over 2.5 ms in length and it features an offset (DC) value in addition to a clear oscillation of approximately 5 kHz. On top of this 5 kHz oscillation there is a much faster oscillation.

The goal is to remove the higher frequency fluctuations so that a cleaner signal comprised of solely the 5 kHz fluctuation can be studied. This removal of the undesired fluctuations will be achieved by transforming the raw data into its frequency representation by way of the FFT.

Before the FFT can be performed, the offset of the data must be removed. In the plot below, the offset of the data is removed by subtracting a smoothed version of the data from itself. From the command line in IDL this might be achieved in the following manner (“data” is the variable name of the data, “time” is the time axis),

;;plot the original signal (black trace in figure)

IDL> plot, time, data

;;plot the smoothed version of the signal (red trace)

IDL> plot, time, smooth( data, 1201, /edge_truncate )

;;plot the fluctuating component of the signal

IDL> plot, time, data - smooth( data, 1201, /edge_truncate )

where the amount of smoothing (the 1201 points) is dependent on the signal and your value may be entirely different.

At this point the fluctuating component of the signal, f(t), still has some offset. The remaining offset can be removed by subtracting the mean value of f(t). Our new working signal is s(t),

IDL> s = f - mean( f )

and the resulting signal, s(t), will not be very different from the fluctuating component, f(t).

The real part of the FFT of s(t) and the corresponding frequency array are shown in the figure below. The IDL Reference Guide has a good example of generating the frequency array for use with their FFT function (see the entry for FFT). Notice that the frequency array goes from zero to the largest positive value, and then from the largest negative value back to near zero. Almost all of the FFT amplitude is located at the edges of the frequency array that contain the lowest values because the test signal does not have much in the way of high frequency noise.
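For reference, that frequency ordering can be sketched in Python using the sampling parameters of the test data; the list comprehension below mirrors the zero, positive-up-to-Nyquist, then negative layout.

```python
# frequency array for an N-point FFT at sampling rate acq, in the ordering
# described above: zero, positive frequencies up to Nyquist, then negatives
N = 4096
acq = 1.5625e6           # samples per second, from the test data description
delta = acq / N          # frequency resolution of one FFT bin (Hz)
freq = [(k if k <= N // 2 else k - N) * delta for k in range(N)]
```

The first entry is zero, the middle entry is the Nyquist frequency acq/2, and the final entry is the smallest negative frequency.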

The basic idea behind frequency filtering with the FFT is that we can reduce the values of the FFT shown above (the full complex value, however, not just the real part) and then perform the inverse transform to return to a regular time signal. The next figure shows the result from the standard usage of the Butterworth function. This is achieved using,

IDL> filter = butterworth( 4096 )

where 4096 is the number of points in the signal (or just the number of points passed to the FFT if you are working with a subset of a larger signal). The figure below shows that the Butterworth filter is near zero for all frequencies except those at the edges. The edge frequencies are also the lowest absolute values, therefore, this filter function would result in a low pass filter and remove most of the higher frequencies in the signal.

The Butterworth function has a keyword that serves to make this a high pass filter. The following IDL command results in the filter function shown in the next figure,

IDL> filter = butterworth( 4096, /origin )

and the plot shows that the filter function is near zero for all but the largest absolute value frequencies.

If the FFT of the true zero mean signal is given as FFT(s(t)), then filtering data using these standard Butterworth filters may be done with,

IDL> lowpass = fft( FFT(s(t)) * low, 1 )

IDL> highpass = fft( FFT(s(t)) * high, 1 )

where “lowpass” and “highpass” are the filtered time signals. The variables “high” and “low” are the Butterworth functions (recall that the high-pass filter is achieved with the ORIGIN keyword). An example of this filtering is found under the Butterworth entry in the IDL Reference Guide. The next figure plots the time signals that result from these standard filters. Notice that the low-pass signal is very smooth because most of the high frequency noise has been removed. Some low frequency behavior is still seen in the high-pass signal, but it is of very low amplitude (the high-pass signal has been multiplied by 10 in order for it to show up compared to the low-pass amplitude). At this point it is desirable to learn how to set the Butterworth function to return a filter that cuts out frequencies as set by the user.
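For readers without IDL, the round-trip idea (transform, zero the unwanted bins, inverse transform) can be sketched in plain Python. An ideal brick-wall filter stands in for the Butterworth shape here, and the direct DFT sums are fine for the small example size.

```python
import cmath
import math

def dft(x):
    """Discrete Fourier transform (direct sum; fine for small N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, keeping the real part of each sample."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

# test signal: a slow 2-cycle sine plus a fast 16-cycle sine
N = 64
sig = [math.sin(2 * math.pi * 2 * n / N) + 0.5 * math.sin(2 * math.pi * 16 * n / N)
       for n in range(N)]

# brick-wall low-pass: zero every bin except |k| <= 4
cutoff = 4
X = dft(sig)
X = [X[k] if (k <= cutoff or k >= N - cutoff) else 0 for k in range(N)]
low = idft(X)

# the filtered signal should closely match the slow component alone
slow = [math.sin(2 * math.pi * 2 * n / N) for n in range(N)]
err = max(abs(p - q) for p, q in zip(low, slow))
```

Keeping both k ≤ 4 and k ≥ N − 4 is what preserves the negative-frequency bins, matching the two-sided layout of the frequency array discussed above.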

The CUTOFF keyword in the Butterworth function can be used to set the lowest (or highest) frequency that will remain in the filtered signal. This keyword must be set to an index number, however, and not a frequency, so it is necessary to determine which index (seen in some of the FFT frequency plots above) corresponds to the target frequency.

In the example below I am generating a low-pass filter function.

;; set the maximum frequency to survive (Hz)

IDL> maxFreq = 5e3

;; determine frequency resolution of FFT

;; acq = data acquisition rate (samples / second)

;; FFTsize = number of data points to be passed into FFT function

IDL> deltaFreq = acq / FFTsize

;; determine index value of cutoff frequency

IDL> cutFreq = fix( maxFreq / deltaFreq )

;; generate Butterworth function with the desired cutoff

IDL> filterFun = butterworth( FFTsize, cutoff=cutFreq )

;; filter data (as shown in IDL Reference Guide)

IDL> low = fft( fft( original, -1 ) * filterFun, 1 )

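The frequency-to-index conversion above is worth sanity checking with actual numbers. Here is the same arithmetic in Python; the acquisition rate and FFT size are assumed values for illustration, not settings from the experiment.

```python
# Map a target cutoff frequency onto an FFT bin index.
acq = 1.5625e6        # assumed acquisition rate (samples / second)
fft_size = 4096       # number of points passed to the FFT
max_freq = 5e3        # highest frequency to survive the low-pass (Hz)

delta_freq = acq / fft_size            # Hz per FFT bin
cut_index = int(max_freq / delta_freq)

# delta_freq is about 381.47 Hz per bin, so the 5 kHz target
# falls in bin 13; that is the value to pass to CUTOFF.
```

With these assumed numbers, the equivalent IDL call would be `butterworth( 4096, cutoff=13 )`.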

Finally, the results with a 5 kHz and a 100 kHz cutoff setting are shown in the final figure below. The 100 kHz cutoff result still displays a lot of noise, while the 5 kHz cutoff is considerably cleaner. In this case it was easy to set the filter because I could see the strong ≈ 5 kHz oscillation before trying to do any filtering.

The point here is not to discuss filtering in general, but only to see how to apply the relatively new Butterworth function provided in IDL. Now, if anyone can explain to me what the XDIM and other keywords mean in the Butterworth function I would be happy to hear it (you can leave a comment if you wish).

**Diary of a Graduate Thesis Experiment**

The laboratory in which I am performing my thesis work (LAPD at UCLA) is used by researchers from all over the world. Groups that are using the device for a research project receive an allotted time in which to perform their experiment. Now is just such a time for me. My present data run extends from May 28 through June 2, 2007. This entry is an account of this run in the form of a diary-type schedule. I hope that this will demonstrate a little piece of the effort that is physics grad school.

The calendar start of a data run is simply the point at which you are allowed to use the machine. In order to have a successful run (i.e., to get a sizable amount of quality data) it is necessary to prepare well in advance. In this case I have been having meetings with my advisors on a mostly regular basis since our previous data run. These meetings set the priorities for the next run while progressing through the understanding of the existing data acquired.

One area in which preparations well in advance were not possible is new probe construction. Past runs have demonstrated the need for more precision in the construction of probe tips, especially for the Janus and triple probes. These particular probes are much more straightforward to analyze when the tips have the same area. Making sub-millimeter tips by hand results in some area differences. I ordered some laser-cut nickel pieces (shown at right), but they did not arrive until the Friday before the run. It usually takes a few days to make one probe. With the help of a fellow experimental plasma graduate student we were able to build four probes over the weekend. Whether these will work when we try to use them remains to be seen.

Monday, Day 1 | Tuesday, Day 2 | Wednesday, Day 3 | Thursday, Day 4 | Friday, Day 5 | Saturday, Day 6

All of my data comes from physical probes placed in the plasma. These diagnostic tools must be carefully placed into the LAPD vacuum chamber (a write up of the construction process for one of these probes may be found here). As in other similar vacuum systems, inserting materials from outside the chamber requires pumping them down and then opening a valve to the main chamber. It takes about one hour to pump down a probe in the lab. Three separate probes can be pumped down at the same time, however, so it is generally a fast setup. The first accomplishment today is the pumping down of two Langmuir probes and one Bdot probe.

The electron beam, while not exactly delicate, requires careful conditioning and a slow startup. The beam used for my thesis is part of the general lab hardware and it is used in many other endeavors in which I am not involved. For this reason, lab personnel manage the startup of the beam. I try to be helpful by arranging the various items needed to run the beam. An operating setup is shown to the right.

The next item for beginning the experimental run is to prepare the data acquisition system (DAQ). This is the LAPD system that controls the position of probes, acquires their data (obviously), and then saves the data and other machine runtime information to a computer. I create a directory for this run and input some of the details such as which probes are being used.

The best laid plans cannot account for every circumstance during run week. One of the probes I planned on using, arguably the most important one, broke last night. One of the outputs came loose inside of the probe and required it to be completely disassembled in order to be repaired. While the beam is being conditioned and it is not possible to start taking data yet, I decide to try and repair this probe.

The first group of probes is opened to the main vacuum chamber and may now be aligned for DAQ control. Aligning the probes allows their position to be recorded by the DAQ. During a run, the DAQ will move a probe to a specified position within the plasma, record the data, and then move the probe to the next desired position. These movements are only accurate for properly aligned probes. Since my measurements require sub-millimeter precision I will take my time to align the probes. With these probes opened it is now possible to begin pumping out the fourth probe.

After the alignment, the probe outputs must be connected to electronics that will amplify and transmit their signals to the DAQ. Signals from these probes can be reviewed on an oscilloscope, which allows for an initial survey of the plasma column and electron beam performance (it has since been turned on and is emitting normally).

All the preparations are complete and it is now time to begin running actual experiments. A “run” is a self-contained experiment. For example, a run may consist of collecting data across the plasma column for one value of background magnetic field. The next run could then be for a different magnetic field. Once begun, a data run is completely managed by the DAQ and the user (me this week) can sit back and wait until it is time to set up the next run. Usually, the results from one run dictate what is to be done for the following runs. It is typically the case that the user will have two data runs pre-planned so that the second one is underway during review of the first. After this, the third run is dictated by the results of the first, the fourth run is dictated by the results of the third, and so on.

Even graduate students have to sleep sometime, so a longer run is designed to take advantage of the overnight hours. Longer runs typically involve moving the probe (or probes) through a large plane of positions. It is easy to design a run lasting 8+ hours, and during my previous experiment week some runs took over 12 hours to complete.

During the day, my advisors come to the lab to review our status. We make parameter changes and perform manual probe scans (we do not manually move the probes, but we do input movement commands to the DAQ one at a time instead of programming a complete motion scan all at once). I continue to perform short data runs during the morning prior to the arrival of the advisors.

Once the advisors arrive it is time to begin searching. We are looking for interesting features of the probe signals. This manual scan helps to demonstrate that we are accurately repeating the physics results of previous work while simultaneously highlighting new or little explored behaviors. Data is viewed on an oscilloscope (see right, with a close up of the screen shown below) as these adjustments are made. We discuss the signals and formulate our continuing plan, essentially making small changes to the run setup that has been discussed during the time leading up to the run week.

Throughout the rest of our examination of the experiment we prioritize the work that needs to be done. After deciding to try out the new probe (which was fully repaired yesterday) it is clear that pumping it down is at the top of the to do list. The rest of the afternoon involves adjusting circuits to improve signal output, reviewing the data collected thus far, and settling on an overnight run to be started after the advisors have gone home. While those actions are described in few words, they take six hours to complete.

The plan for this evening’s overnight depends on a certain result. This result is determined by performing a short (≈ 2 hours) run. I need to complete this run, analyze the data (in a cursory manner, not publication quality review), calculate the parameter of interest (relating to the absolute position of a certain feature), and then setup the overnight run accordingly.

The previous run provides the necessary information to program an overnight run. This information is incorporated into the overnight setup and begun. As a personal preference I like to remain near the lab for the first hour or so of the automated runs just to make sure everything is going smoothly. This time is passed by analyzing the existing data sets more carefully and printing out some of the results to share with my advisors. It is generally not as useful to have them looking over my shoulder at my computer screen. With printouts they can write notes all over the paper and even walk around the lab with it.

With a collection of printed out results it is time to check on the overnight run and then go home. Unfortunately, the overnight run has experienced an error and requires some effort to restart. This is the type of error that only stops the run, it does not have any detrimental effect on the machine. My efforts are not enough to revive the run and I am forced to wait for assistance from the research staff.

At this late hour it is not worth it to drive home. Sleeping at the lab will actually allow me to get more rest by removing the round-trip commute time. I would have had to come in early in the morning to meet with the research staff anyway, so it is more efficient to hit the couch.

When the first staff member arrives, he is able to revive the overnight run in a matter of minutes. I discuss the issue with him at length and am prepared to follow the necessary steps should this problem resurface. This run is expected to complete in just over 8 hours. My morning is free and I seek to take advantage of it.

For the sake of my advisors and everyone else at the lab (which is located at the southernmost edge of the UCLA campus), I go to the gym to take a shower. I have never worked out during a run week. Usually, the reason is that I am too tired and not motivated for physical exertion. At this moment I am also a little concerned about being dehydrated since I have had a lot more coffee and soda than water recently. It is unlikely that I would have wanted to work out even if I had been drinking water. A leisurely lunch in the student union follows the shower.

The long run is going smoothly and a lot of data is being collected. An example of run monitoring is shown in the image to the right. From the control room it is possible to monitor both the machine performance and the data stream. The data is being written to the file as the run progresses, however, so it is not possible to perform analysis during a run. The data from each shot (the plasma pulses once each second) flashes up on a large screen. If something is wrong with one of the diagnostics it will be obvious, most likely seen as a flat line on this screen.

The advisors arrive and I update them on our status. We survey the existing batch of results and perform new manual scans. Final adjustments are made and it is determined that we will complete the remaining runs with the present state of the electronics and diagnostic setup. Tonight’s run plan is agreed upon.

The overnight run again depends on the result of a shorter run. This will take approximately two hours to complete. Once it is set up and begun, I need to find something else to do.

This is a good time to finish a write up (I use “write up” when referring to documents that I type up for passing around the group†) that my partner and I have been putting together. My partner is a graduate student in theoretical plasma physics here at UCLA. He works on the same physics as I do, though he concentrates on the theoretical treatment of it by using a computer code that models the experimental system and outputs predictions for what we should see in the measurements. His thesis also includes some details that will not be directly studied in the experiments, but we will both include material from the other in our theses.

A few email exchanges and some edits to the write up later, we are happy with the document and send out the group email.

The shorter run has provided the necessary information to setup the overnight run. An unforeseen issue arises but is fixed within an hour. After monitoring the run for its first 45 minutes I determine that everything is operating well.

Because of the late start, the overnight run will not be completed until nearly 9:00 am. Tomorrow’s plan is to test the recently repaired probe. No new runs will be performed until after the advisors have seen the new probe in action. This provides an opportunity to sleep longer this morning.

† The idea of sending preliminary results and experiment summaries to the group is an idea that was given to me by a fellow graduate student. She suggested typing up professional quality documents using LaTeX and then emailing pdf files to the group members. This is useful in multiple ways. First, it forces you, the writer, to clarify your thoughts and results into a coherent set of statements. Second, it leaves you with a collection of excellent notes that will prove useful over the few years (hopefully only a few) of your thesis study. Third, and finally, some of the items you write about are either going to be good enough for your thesis or at least good enough to serve as a first draft or outline for it. Already having the LaTeX-formatted material will save time when you start writing your thesis officially.

Last night’s run completed successfully. I take some notes about the run and begin testing the new probe. The most efficient use of machine time would be for me to have the probe tested and in a state that conveys its operating functionality to the advisors when they arrive. Signals indicate that the probe works, which in turn signals me that it is a good time to eat.

None of the following discussion lends itself to picture taking, so here is a photo of the area of the lab where all my experiment equipment is set up. Once again, the plasma device is over 15 meters long so only a portion of it is seen in the photo.

A large power outage in the UCLA area affects the machine. Reports indicate that the Los Angeles Department of Water and Power (DWP) is unable to supply power to areas of campus and the Westwood vicinity.

The LAPD has safety mechanisms in place that prevent such power transients and outages from causing catastrophic damage, but there is no backup power for actually running the machine. To run the machine requires several large power supplies that provide thousands of amps of current to run the magnetic field and plasma discharge. Such demands cannot be met with any backup systems (technically I am sure there are such systems, but they probably cost more than the rest of the lab combined). Systems that maintain device integrity can be backed up and they continue to function during problems originating within the LA power grid.

The machine, while preserved from significant damage, still requires some time to be returned to optimal operating parameters. Once the power problems appear to be over, the process of restarting the machine begins. This process is handled by lab staff and I occasionally peek in to try and learn more about the device and its inner workings.

The plasma is back and we start to look around to assess whether the background conditions have changed. It is fortunate that we are in the lab because there is one more large power spike (I would hear later that traffic lights were out in Westwood and the UCLA Medical Center was running on emergency power) and the machine is again shut down. With staff already present in the lab, the machine is returned to operation within five minutes.

Review of the existing plasma resumes. It is determined that our experiment can continue without ill effect from the changes in the plasma. Basically, the changes that are observed are not relevant to the present experiment. The LAPD plasma is cylindrical with a diameter of approximately 70 cm and a length of over 15 m. Our effort is focused on an area that is a few square centimeters across over the entire axial length. We continue to work through the night and the staff will make some measurements tomorrow to determine any effects this might have on the research projects that follow our run week.

The advisors agree that the new probe is functional. This will provide the measurements for the overnight run. We settle on a plan and make the preparations.

The overnight run begins and appears to be running smoothly.

The run is still progressing smoothly. It is estimated to complete near 4:30 am, which makes the decision to go home difficult.

I decide that my commute time is better spent sleeping.

Last night’s run finishes five minutes after I arrive at the control room. With this early start time I am able to program another 8 hour run that will finish just in time for me to meet with the advisors and continue our effort.

Some data needs to be acquired in order to quantify the state of the machine after yesterday’s power problems. The long run I had planned must be stopped. The data is still saved, though it is not a complete set that can be entirely compared with the other similar sets. A member of the LAPD staff begins taking data that will be used to judge the global state of the plasma.

Last night’s overnight run featured a measurement that we have never made before. The resulting data from this run is of great interest and I decide to perform some detailed analysis that may direct the next long runs. The results are interesting and it is clear that the measurement itself was a success (i.e., the signal is really what it was intended to represent).

I have made a few plots and left them on my computer screen for quick access any time an advisor inquires about the results. A broad review of the data set includes,

- Profiles – A display of any measurement as a function of space or time (e.g., the amplitude of a signal with the position of the probe as the x-axis).
- Power Spectra – The amplitude of various oscillation frequencies within the time signals.
- Raw Data – A few characteristic raw shot acquisitions provide perspective as to what the data looks like before processing through an analysis routine. This is always a good idea from the standpoint of double checking your results. For example, if your power spectrum shows a huge amplitude at some frequency, then your raw data should probably demonstrate oscillations with the corresponding period.
- Background Parameters – Plots of the machine status are useful for making sure that the system accurately reproduces the setup used for previous data sets. One of the main goals of this run is to measure certain behaviors as a function of background plasma settings. Before a unique looking result gets you too excited, make sure your plasma is truly the same as it was previously.

A review of the status data taken this morning reveals that the power issues from yesterday have affected the plasma. The effect is insignificant in the area where my experiment is being performed. After seeing this data, the leader of the research group that is scheduled to use the machine after my run week decides to reschedule for a time in which the plasma source will have been replaced. Source replacements are part of the standard device maintenance, so that group will not necessarily have to wait long for their machine time.

It is unfortunate that the next group is going to postpone their time. That means that there is no scheduled use for the machine after my official run ends. I am now able to extend this run into next week, probably until Wednesday.

We perform a two hour run, but it experiences a problem that delays its completion. In a brief moment of terror I smell something burning. It is a false alarm because the odor is the result of work being performed outside with the air making its way to the machine room, which is in the basement, by way of the air vents.

This might be the earliest that an overnight run has been started all week. The run should conclude at 3 am, at which time I would like to begin a similarly lengthy run. Everything appears to be running smoothly so I decide to go home for a few hours.

While the initial plan was to take a nap at home, upon my arrival I cannot sleep. I settle for some food and watch the SciFi Channel.

The overnight run is still going. Walking back to my desk I pass by the Electric Tokamak. Seen to the right, the tokamak is a little creepy during the late hours of the night. If you are interested in seeing a daytime photo of the machine with people standing on it for scale, then take a look at the ET Machine Site. I mentioned this camera when I first got it, and the photograph below illustrates the problems I am having while learning about all of the available features.

The image to the left illustrates what happens when a photo is taken using the sensitive settings and a shaky hand. The camera accounts for low light levels (when set to the special low light/no flash mode) by keeping its shutter open longer. This means you need to hold the camera very still in order to get a good photograph. For this photo I thought the camera was done acquiring the image and lowered my hand. This generated the streak pattern.

The overnight run has stopped due to a problem. It was 73% complete and cannot be resumed from its present position. This is a good example of why it is worthwhile to remain at the lab during an experimental run week. The data that has been acquired will be kept and I set up a new run that performs only the pieces that the canceled run missed. After the setup time and a thorough check to ensure there are no larger problems lingering, this 2+ hour run is started.

After moving quickly to remedy the most recent issue I find myself energized and ready for more analysis. Unable to sleep, I begin Saturday work.

‡ The phrase, “data splashing” is attributed to the lab director of my first research group at UCLA. An accomplished member of the tokamak and fusion physics community, he was impressed by the ability of graduate students to analyze data on computers and produce colorful plots. Making a lot of such plots is splashing because it gets in everyone’s eyes and distracts from the job at hand. Those are not his exact words, which is why I have not put them in quotes, but that is fairly close to the exact statement.

At this point I am working to kill time until the run finishes. I open a remote window so that I can monitor the run’s progress from my desk.

The previous run completes successfully. It is early enough to perform another long run that will complete before the advisors arrive. I watch it for ten minutes and then return to my desk. At my desk I review the remote monitor one additional time before deciding to rest.

The more tired you are, the more comfortable the lab couch becomes.

A phone call wakes me up. I make a mental note to turn off my phone next time I go to sleep, but then I realize it needs to stay on just in case the advisors call to adjust the schedule or get a status report. The remote monitor indicates the run has been going steadily. It is on schedule to complete near 1:30 pm.

A leisurely breakfast gives me a chance to read yesterday’s newspapers. Every once in a while I check the remote monitor.

There are fewer opportunities for taking photographs as the experiment week continues. In order to provide another reference for the hardware setup, I have included the image to the right, which is from a run in October 2005. The blue glow is caused by an argon plasma (everything being done this week uses a helium plasma that appears orange in the photos and is not nearly as pretty looking). The well-defined circular edge is caused by the cathode, which has been referred to previously as the plasma source.

Visible in the image are four probe shafts. The one coming in from an angle at the top right edge is the electron beam. It is pointed away from the camera so all we see is the supporting structure behind its source. The other shafts are Langmuir probes. While it may appear to be a crowded collection of diagnostics, these probes are actually separated by a few meters axially. Also, they are all measuring plasma parameters within a very similar radial extent of the cylindrical plasma column. The camera is positioned outside the vacuum chamber at one end of the 20 meter long machine (notice that previously I have mentioned the plasma column is over 15 meters long but the total vacuum chamber is longer because of all the support structure needed). The image distorts the position of the probes because of their different distances from the camera.

I want to make sure there are no problems with the run. Our time may be coming to an end and we cannot afford to waste time due to run errors. While it would be great if I could continue high level data analysis, my concentration is beginning to wane. I type in a few commands for analysis and then read some news stories online. All of this while keeping an eye on the remote monitor.

The run completes successfully. I am free to go eat before meeting with the advisors.

It is safe to say that the experiment week is now running efficiently. The advisors determine that we are in a good position to attempt an additional measurement. This particular measurement is non-perturbative as it involves optical measurements with the detectors placed outside of the vacuum vessel. Setting this up takes a decent amount of time. It works, but there is no connection to the DAQ so I will use an oscilloscope to save the data every run. Since I am doing it manually, I decide to save 50 shots for every individual run. These will be averaged and used to represent a typical behavior.

Everything is underway for the next run. I acquire 50 shots of the new diagnostic from the oscilloscope. I write a large note to myself that will remind me to do this for all of the subsequent runs.

By going to see this three hour movie I can guarantee that I will be awake when the run finishes around 2:30 am. The theater is two blocks away from the lab and the walk provides a good stretch. I am happy to spend the three hours without mention of ground loops, power supply shutdowns, and tangled BNCs, none of which are featured in the film.

Three short runs must be performed before the next long run can be setup. The hardest part of these is waiting 40 minutes for them to complete. There are no errors and these short runs are eventually done, paving the way for another long run during which I will sleep.

Waking up at this later time puts me in the mood for lunch. One microwave burrito later I am ready to get back to work.

The image to the right is a photo of the DAQ screen, hence the poor image quality. When this experiment week is over, a large collection of green checkmarks will indicate an efficient run. A few red X’s do not mean complete failure, however, because the data acquired up until the error is still saved. Some of the red X data sets will still provide for useful analysis.

The run names (corr5, etc.) do not mean much because I write thorough notes in my lab notebook. Some people make their run names very long and descriptive, including discriminating parameters and machine settings so that they can easily recall which runs include certain types of data. I just prefer to work with shorter file names.

Another long run is in the bag. Four shorter runs are next in line and these will be followed by another overnight-type run.

This trip home consists of dinner, shower, and a long nap. I will go back to the lab before the long run finishes so that I can immediately begin new runs. Once Monday arrives there will be a constant chance that my time on the machine is over. With a full page of desired runs yet to be completed, I need to be as efficient as possible in acquiring data.

The long run will be complete in about another hour. I start working on organizing my notes. During the runs I find myself writing notes in my notebook and on any piece of scratch paper that is nearby when I think of something that needs to be recorded. At the conclusion of the run week I make copies of my notes for my advisors so that they know the details of the runs.

I finished three short runs and am now ready to begin another 8+ hour one. This newest long run will finish some time near 12 pm, at which point I will begin preparations for follow up runs without actually starting them. The advisors will want to examine what we have so far before continuing.

What do we have so far? By this point I have written about performing analysis on data in between runs. While I cannot go into detail, I can provide some example images of what I am producing. The image below is one such analysis result in the form of a contour plot.

I receive a coffee cup that leaks through the bottom. Possibly reaching my second wind, I return to data analysis. There is now a more sizable set of results than we have had before.

Next in the list of runs is a set of two hour jobs. I receive word that our experiment time will go through this Wednesday. We already have a list of desired runs that will take us through that allotted time. Discussions with the advisors lead to a plan for this evening. The short runs will be completed by 6:45 pm. At that time we will move probes to different axial positions (the vacuum ports are separated axially, so this type of move requires removing the probe from vacuum and then pumping it down again in its new location) and let them pump overnight. While it is possible to have them ready within about an hour, by the time we get them moved and ready it will be late enough that all the lab personnel will have gone home. Only lab personnel can open vacuum valves on the machine. This is a precaution because if something goes wrong with the vacuum it takes one of the experts to prevent the plasma source from being ruined.

The people authorized to open the machine valves have gone home. I can still work because there are two probes that remain open to the vacuum chamber already. My mindset is that I should continue to acquire data that will fill possible gaps in my thesis. To help myself figure out what these gaps might be, I try to imagine what questions my advisors will ask during my defense◊. An email from one of my advisors makes the decision much easier based on the questions it contains. I start to set up another series of short data runs to address this particular uncertainty in the experiment.

Ground loops are the enemy of the experimentalist. They are unintended current paths in the electronics of an experiment, and they cause everything from bad data (a ground loop was making half of my data appear upside down) to melted components.

Now the first of tonight’s run series may begin. The short runs require less than 30 minutes to complete. I spend the rest of the night working on them.

◊ The defense is an oral presentation made to your thesis committee. In the UCLA Physics Dept. the defense is closed, meaning only you and your committee are there. The procedure is the same as the oral exam to advance to candidacy.

It is almost too late to start a long run. I decide to compromise and set up a run that moves the probe to slightly fewer positions than the other 8 hour runs. This will complete in 6 hours, which should have it finishing just in time to open the probes we moved yesterday.

I decide to make a quick run to the gym so that I can take a shower. As before, I think my advisors will appreciate this.

We set up the probe that was opened to the chamber this afternoon. After a little time performing manual scans we agree on a set of data runs that will take us through to the end of our machine time.

The plasma environment wears on materials placed within it. When a probe is removed from the chamber it is necessary to examine it and determine whether it has been damaged. If so, it does not need to be fixed right away, but I do need to be aware of the problem so that I can fix it before it is used again. The image to the right is of the Bdot probe that was recently removed from plasma operation. This probe consists of copper loops encased in epoxy. I expected the epoxy to be darkened where it is cooked by the 50,000 °C plasma. As the image shows, there is no obvious damage. During the clean up process at the end of our time on the machine I will examine all the other hardware.

The most important run of the set we still wish to perform is begun first. This should finish some time around midnight. The plan is for me to be rested by the time this finishes so that I can set up as many shorter runs as possible, right up until I am asked to stop using the machine.

During the experimental run week cost is not a factor when it comes to eating. I have gone out for nearly every meal since beginning my time on the machine. It is worth it to spend more money on food because a successful run is priceless. In that regard, I would like to make mention of the best pizza in the UCLA/Westwood area. Enzo’s is 2 ½ blocks from the lab. They have pizza by the slice, but I have eaten the stromboli every time I went in this week. If you are visiting UCLA and want a taste of grad student life, then this is the place for you.

After a hearty lunch I spend the rest of the afternoon chatting with the other graduate students in the LAPD office space. We like to trade stories about issues that arise in the lab (I hope my ground loop story might be useful someday). The long run completes after midnight.

I want to begin a final run some time later this morning. Runs that are about half as long as the previous one are designed. Two of these can be completed before 8 am, leaving just enough time to perform one new measurement before cleaning up. Today is the last day for this experiment.

The second of this early morning’s runs is begun, leaving a three hour sleep window.

Today will consist of a new measurement, one that I have never even tried in previous experimental sessions. A member of the research staff shows me how to set this up and then I program a long run. There is a group scheduled to use the machine after my time concludes. By programming a long run I am basically agreeing to stop the run short whenever the next users are ready to begin changing the machine. This is an acceptable situation because 1) I already have a ton of data from this run, thereby making it a success, and 2) whatever data comes in before I stop the run is still saved and can be analyzed.

As the final run completes its very last steps the lab is already full of activity. Some people are preparing for the next big experiment while others are setting up tests for new measurements in hopes of debugging them before receiving their official lab time.

My own clean up procedure involves removing all the probes I was using and putting away all of the electronics. The BNCs I had strewn across the lab are coiled up and hung on their racks while all the BNC connector pieces are neatly placed back in their drawers. Some administrative work is necessary, such as transferring the remaining data files to the server in our office space.

I have cleaned up all my equipment and the lab is already in use by others. The final stats for this run are:

- ≈ 70 separate data runs
- ≈ 77 GB of data (an overestimate because some of the data is stored in multiple formats)
- ≈ 200,000 plasma shots
- Confirmed functionality of two newly constructed probes (never had time to try the other two newly built probes).
- Didn’t get shocked once, which is quite an accomplishment for anyone working on a plasma experiment.

I hope this entry serves as a useful example of what it is like to perform a plasma physics experiment as a graduate student. It is a lot of work, but also a lot of fun.

Determining Electron Temperature and Plasma Potential

An Additional Concern for Temperature Analysis

This is a simple example of Langmuir probe analysis and the issues related to it. It is intended to serve as a helpful reminder of the technical details in analyzing the data from a swept Langmuir probe and is not a complete theoretical effort. If you have already familiarized yourself with Langmuir probe theory, then you may find this treatment helpful. In this example I begin with the data acquired by measuring the current drawn by a Langmuir probe as the bias applied to that probe is varied. This data is analyzed in order to determine the plasma density, temperature, and potential. While the concept of Langmuir probe usage, digitization of the received data, and even the engineering of the diagnostic are all worthy of discussion, the following has a focus on analysis in order to limit the size of this entry. The data presented in this example was obtained in the undergraduate plasma laboratory (PHYS 180E) at UCLA while I was testing the equipment and helping in the design of the lab exercise as a teaching assistant for the course. Figure 1 represents the circuit used to acquire the signals that will be processed.

An example of what the raw data may look like is provided in figure 2. This data is obtained from an oscilloscope that averages 16 separate acquisitions (while the plasma is continuous, CW, the probe bias sweep is made at a rate of 4 Hz, thereby allowing multiple acquisitions to be averaged for one final result). The x-axis units represent the data point number (i.e. if this data was in a spreadsheet, then data point 1000 is the one-thousandth data point you have). The trace labeled V_{bias} represents the applied voltage to the probe. For a properly set up oscilloscope, this signal will be output in the correct units and calibration. The other trace, V_{R}, represents the voltage measured across a resistor in series with the probe. For a given resistor, R, the current through it, I_{probe} (named because it also passes through the probe), is found using V_{R} = I_{probe}R.

Notice that the plot in figure 2 shows a transient effect in the V_{R} trace for values near data index of zero. Be sure to extract only the “proper” portion of these traces when performing your analysis. Zooming in on the data will reveal that we should only consider points 20 through 2500. The points prior to number 20 are an artifact of the bias voltage turn-on. Another striking characteristic of the plot is the stepping feature of V_{R} and to a lesser extent also of V_{bias}. This is not a plasma physics result, rather, the V_{bias} power supply achieves its sweep by stepping the potential up over time. This sweeper is typically operated at a frequency in the kilohertz range and the steps are very difficult to observe. In the 180E lab the plasma is steady state and the sweep frequency has been lowered as much as possible. The sweep rate used is 4 Hz, which results in the individual steps being more noticeable.

It is necessary to convert the V_{R} signal into units of current. Since the electron temperature, T_{e}, trace involves derivatives of the current we must process these raw data in multiple ways. To convert V_{R} into I_{probe} we can use I_{probe} = V_{R}/R. For a resistor of R = 677 Ω this becomes I_{probe} = 1.4771 × 10^{-3} V_{R}.
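As a concrete sketch, the trimming and conversion described above might look like the following in Python with NumPy. The array names and the stand-in trace are hypothetical, since the raw export format is not specified here; only the resistor value and the point range come from the text.

```python
import numpy as np

R = 677.0  # series resistor, ohms

# Hypothetical raw traces; in practice these come from the oscilloscope
# export (e.g. columns of the spreadsheet file).
v_bias = np.linspace(-60.0, 0.0, 2600)        # applied probe bias (V)
v_r = 0.05 * np.exp((v_bias + 40.0) / 3.7)    # stand-in for the measured V_R (V)

# Keep only the "proper" portion of the traces, points 20 through 2500
# (0-based indexing here), discarding the turn-on transient.
v_bias = v_bias[20:2501]
v_r = v_r[20:2501]

# Convert the resistor voltage to probe current: I_probe = V_R / R.
# For R = 677 ohms the factor is 1/677 = 1.4771e-3.
i_probe = v_r / R
```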

Figure 3 presents a first look at the actual IV trace. This is probe current plotted as a function of probe bias. This plot can tell us a lot about the plasma. The floating potential, V_{f}, occurs where I_{probe} = 0, which appears to give V_{f} ≈ -40 V. The ion saturation current, I_{sat}, is seen at biases well below V_{f}. The roll-off at large positive values of V_{bias} (technically, in this case the largest values of V_{bias} approach zero and may not actually go positive) corresponds to the electron saturation current, I_{esat}. The location of this roll-off, or knee, is the plasma potential, Φ_{p}. As with the floating potential, some estimate of this value may be made from the plot, but a more accurate method will be used to determine the final value. Figure 3 displays Φ_{p} ≈ -15 V.

For swept Langmuir traces such as those presented here, the most important relation between the measurement and plasma parameters is given by,

where q is the electron charge, k_{B} is Boltzmann’s constant, and the constant term will not be important. Notice that this relationship is in the form of a line, f(x) = mx + b, where m is the slope of the line and b is the y-intercept. If our x is actually x = V_{bias} – V_{f}, then the slope of this line is related to the electron temperature. Our method is to plot the term ln|I_{probe} – I_{sat}| and then fit a line to it. The slope of this best fit is inversely proportional to the electron temperature.
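The fitting step can be sketched with synthetic data. Here the electron current is generated from an assumed exponential with T_e in eV, so that the slope of ln(I_e) versus bias is 1/T_e; the amplitude and offset are illustrative, not values from the data set.

```python
import numpy as np

# Synthetic electron current in the exponential (transition) region:
# I_e = I_0 * exp((V - V_p) / T_e), with T_e in eV, so that the slope of
# ln(I_e) versus V is 1/T_e.
t_e_true = 3.7                       # eV (illustrative)
v = np.linspace(-22.0, -17.0, 200)   # bias range used for the fit (V)
i_e = 1e-3 * np.exp((v + 15.0) / t_e_true)

# Fit a line to ln(I_e); the slope is the inverse electron temperature.
slope, intercept = np.polyfit(v, np.log(i_e), 1)
t_e = 1.0 / slope
print(t_e)  # ≈ 3.7 eV
```

With real data the only extra step is masking the trace down to the chosen bias window before calling `np.polyfit`.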

It is necessary to subtract the value of the ion saturation current from I_{probe} in order to continue with the analysis. A closer look at figure 3 provides I_{sat} = −1.73 × 10^{−4} A. It is important to get an accurate value of I_{sat} so be sure to read its value carefully (i.e. not from a wide range plot like that shown in figure 3). Since the I_{sat} value is negative, subtracting it from I_{probe} will result in almost all of the values of I_{probe} − I_{sat} being positive. This is the intention because we will be working with the natural logarithm of the current in the next few steps and the natural log is not defined for negative numbers. If you have a few negative values left in your trace after subtracting the I_{sat} value that is acceptable. Those values will not play a role in the temperature calculation that follows. Figure 4 shows the resultant electron current with respect to the total current.

Figure 5 represents the logarithmic plotting of the electron current. The electron current is I_{probe} − I_{sat} because we have removed the ion contribution to the total current by subtracting I_{sat}. The plot decays very rapidly for values of V_{bias} < V_{f}. This is because ln(0) = -∞ and by subtracting the value of I_{sat} we have forced the current trace to be near zero for values of V_{bias} < V_{f}. The part of this trace that we are interested in occurs above the floating potential and this is the region presented in the next plot.

Figure 6 is a zoomed in version of figure 5 that also includes linear fits to the electron saturation current and inverse temperature. The temperature fit is performed over the range −22 < V_{bias} < -17 V, which can be seen as the exponentially rising region in figure 4. It is important to choose a bias range over which to perform this fit that represents the temperature dependent increase in probe current. By overplotting your data with the linear fit it is possible to quickly demonstrate whether the correct temperature region has been identified. According to this temperature fit, the inverse temperature is approximately 0.27, which leads to T_{e} = 3.70 eV.

A linear fit to the electron saturation current is also shown in figure 6. The intersection between this fit and the temperature fit occurs at the plasma potential. This results in a reading of Φ_{p} = -14.3 V.
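Finding the intersection of the two fitted lines is simple algebra. The coefficients below are hypothetical (chosen only so the example reproduces the quoted Φ_p = −14.3 V); in practice they come from the two `polyfit` results.

```python
# Hypothetical fit coefficients (slope, intercept) for ln|I_probe - I_sat|
# versus V_bias.  The transition-region fit has slope 1/T_e; the electron
# saturation fit is much flatter.  Numbers are illustrative only.
m_te, b_te = 0.27, -2.0        # temperature fit
m_sat, b_sat = 0.02, -5.575    # electron saturation fit

# The two lines intersect at the plasma potential:
#   m_te*V + b_te = m_sat*V + b_sat  =>  V = (b_sat - b_te) / (m_te - m_sat)
phi_p = (b_sat - b_te) / (m_te - m_sat)
print(phi_p)  # -14.3 with these illustrative coefficients
```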

Knowing both the electron temperature and the ion saturation current allows us to calculate the electron density using,

where the new terms are M for the ion mass and A_{s}, which represents the area of the probe sheath. For cases in which the applied probe bias does not greatly exceed the value needed to obtain one of the saturation currents we may approximate the sheath area as the probe tip area. For significant overbiasing this is not a good approximation. In most cases this criterion is met and the approximation is one of the smaller sources of error for probe measurements. The concepts of sheath expansion and Debye length with respect to probe size are worthy of discussion in another effort.

For argon we have M = 6.62 × 10^{−26} kg. The planar probe used to collect this data has an area of A_{probe} = 0.738 cm^{2} (this is the area of one side multiplied by two because it collects ions and electrons from both faces).

To calculate the electron density it is possible to directly insert the electron temperature in units of electron-Volts by noting the following relationship,

where on the left side the temperature is in units of Kelvin.

For the temperature measured in this setup, T_{e} = 3.70 eV, the electron density is found to be n_{e} = 8.09 × 10^{15} particles per cubic meter. Most plasma physicists use cgs units and would report this as a density of 8.09 × 10^{9} cm^{-3}.
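The density arithmetic can be checked numerically. The density equation itself did not survive in this copy, so the sketch below assumes the common Bohm form of the ion saturation current, I_sat = exp(−1/2) n_e q A_s (k_B T_e / M)^{1/2}; with that assumption it reproduces the quoted density to within rounding of the constants.

```python
import numpy as np

q = 1.602e-19        # electron charge (C)
M = 6.62e-26         # argon ion mass (kg)
A_s = 0.738e-4       # probe (sheath) area: 0.738 cm^2 in m^2
i_sat = 1.73e-4      # magnitude of the ion saturation current (A)
t_e_eV = 3.70        # electron temperature (eV)

# k_B*T_e in joules.  (1 eV corresponds to about 11,600 K, but working
# directly in eV just means multiplying by the electron charge.)
kt_e = t_e_eV * q

# Assumed Bohm ion saturation current (one common convention; the
# coefficient exp(-1/2) ≈ 0.61 varies slightly between treatments):
#   I_sat = exp(-1/2) * n_e * q * A_s * sqrt(k_B*T_e / M)
n_e = i_sat / (np.exp(-0.5) * q * A_s * np.sqrt(kt_e / M))
print(n_e)  # ≈ 8.1e15 m^-3, i.e. ≈ 8.1e9 cm^-3
```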

This is a treatment to account for the presence of a hot electron distribution in the plasma. For plasmas that are generated through a fast electron breakdown process, such as an electron gun or cathode-anode pair, this effect is likely relevant.

In the preceding section the properties of our plasma have been determined under a few assumptions. Possibly the most significant assumption was that the plasma source (i.e. electron beam or electron spray) did not affect the IV trace. This is not absolutely correct and it is possible to include effects of the plasma source in our interpretation of the data. From a conceptual standpoint, we may expect that our system contains two separate electron populations. One population is the background plasma (hence referred to as the plasma electrons) and the other is from the plasma source directly (beam electrons). Since the plasma electrons are generated by ionization from collisions between beam electrons and background neutral gas they cannot possibly be more energetic than the beam electrons. This translates into the plasma electrons having less energy than the beam electrons, which is equivalent to them having a lower temperature, T_{e,plasma} < T_{e,beam}.

If there are two separate electron populations in the system, then it may further be expected that the IV trace displays two separate linear regions when plotted in the ln|I_{probe} – I_{sat}| fashion. In order to discern these regions it is necessary to better define the behavior that corresponds to the plasma temperature. If there is a hot electron tail (referred to as a tail because this represents the tail end of the electron velocity, or energy, distribution function), then those electrons will continue to strike the probe even after we have made V_{bias} more negative. If these electrons are affecting the probe measurement, then we should expect to see a second linear region in the logarithmic plot for values of V_{bias} approaching the floating potential. As previously, we must avoid actually considering the floating potential because that is where the I_{sat} effects have been subtracted to return a near zero electron current value.

Figure 7 shows a fit to the tail electron region of the curve. The slope of this line is the inverse of the hot electron temperature, T_{e,hot} = T_{e,beam}. The slope of this line (the blue line in figure 7) is less than that of the plasma temperature fit, but since the slope represents the inverse electron temperature this correctly gives a hotter temperature for these tail electrons. The fit gives T_{e,hot} ≈ 10 eV. The fit is made over the range −36 < V_{bias} < −22. A slightly different range may have provided a better fit. It is always a challenge to verify that you have made the best possible fit. An inclusion of error analysis and a quantitative measure of the quality of these linear fits will help justify your reported values.
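The two-slope idea can be illustrated with synthetic data. The temperatures, amplitude ratio, and fit windows below are all illustrative, not taken from the data set; the point is only that a sum of two exponentials produces two linear regions in the logarithmic plot, each fit recovering its own temperature.

```python
import numpy as np

# Synthetic electron current with two populations: a cold "plasma"
# population and a hot "beam" tail, T_plasma < T_beam (values illustrative).
t_plasma, t_beam = 3.7, 10.0               # eV
v = np.linspace(-60.0, 0.0, 1200)          # bias relative to the knee (V)
i_e = np.exp(v / t_plasma) + 0.02 * np.exp(v / t_beam)
ln_i = np.log(i_e)

def fit_slope(v_lo, v_hi):
    """Linear fit of ln(I_e) over a chosen bias window; returns the slope."""
    m = (v >= v_lo) & (v <= v_hi)
    return np.polyfit(v[m], ln_i[m], 1)[0]

# Near the knee the cold population dominates; far below the floating
# potential only the hot tail still reaches the probe.
t_cold = 1.0 / fit_slope(-6.0, 0.0)
t_hot = 1.0 / fit_slope(-50.0, -40.0)
print(t_cold, t_hot)  # ≈ 3.8 and ≈ 9.6 eV with these windows
```

Because each window still contains a small contribution from the other population, the recovered values are close to, but not exactly, the input temperatures; this is the same fit-window sensitivity noted above.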

It is not correct to use this temperature to calculate the density and plasma potential of the beam electrons. While they do skew the IV trace, we are still applying the general theory of quasi-neutrality and ion collection to the interpretation of the IV characteristic. There are no ions included in the beam distribution and the rest of our probe theory does not apply.

Conceptually, the claim is that the probe collects some of these beam electrons and treats them as plasma electrons. The probe cannot possibly know the difference between the different electrons it collects. To the extent that this simplified notion is true, the beam electrons result in an IV trace that returns a larger temperature than is correct for the plasma that the beam created.

A spreadsheet file is available for download at the following link:

Sample Langmuir Data

As an actual data set this file contains noise and error, both of which contribute to the authentic experience of analysis. The values of plasma temperature, energetic tail temperature, floating potential, and ion saturation current that I obtained are included in the file for comparison to your own results. It is certainly possible to get a better (i.e. more accurate) result than I did.

If you are interested, then good luck.

Anomalous transport is an area of great interest within the plasma physics research community. The field of magnetically confined thermonuclear fusion may benefit significantly from an improved understanding of this topic. It has already been shown that turbulent fluctuations increase the transport of mass and energy [Horton, 1999] in magnetically confined laboratory plasmas. Improved confinement can expedite the development of fusion reactors as controllable energy sources.

Space plasma research also encounters anomalous transport [Committee on Solar and Space Physics, 2004] across naturally occurring boundaries in temperature, density, and magnetic field. The modeling of space weather can be beneficially impacted by improvements in plasma transport understanding.

Filamentary pressure structures, meaning structures that are aligned along magnetic field lines with narrow radial extent compared to their length, are prevalent in both space and fusion plasmas. Figure 1.1 is a satellite photograph of the solar corona in which bright filaments are seen flowing along looping magnetic field lines. Energy transport along these filaments, and even through the solar wind en route to interaction with the Earth's magnetic field, are ongoing areas of research. An example of filamentary structures from a fusion device is seen in Fig. 1.2, a photograph from the MAST fusion device. This image shows bright filaments in the outer edges of the device. They are the manifestation of edge-localized modes (ELMs) that transport hot plasma from the center of the device out to the walls. Controlling ELMs to minimize their transport or to avoid them altogether is presently a major effort within the fusion community [Evans et al., 2008].

Many features of filamentary structures remain unknown, including their capacity for transport, mechanisms leading to their generation, and which plasma waves they may produce. A difficulty in studying these issues within the space and fusion examples above is that the resulting systems are actually a mixture of many individual filaments. Interactions between the filaments and the existence of background instabilities complicate the interpretation of observations. This thesis utilizes an experiment in which a single filamentary structure is generated in the background of a quiescent plasma. The resulting system may be imagined as the isolation of one of the many structures seen in the previous two images. The fluctuation spectra and associated transport generated by this single filament proves rich with dynamic behavior. Studies related to plasma turbulence, spontaneously generated temperature waves, and non-linear interactions of drift-Alfvén waves are all performed within this configuration.

The experimental configuration used in this project was originally motivated by the desire to present experimental evidence for classical heat transport in magnetized plasmas. A summary of that successful effort is available as a Ph.D. thesis [Burke, 1999]. The theory of heat transport due to Coulomb collisions [Landshoff, 1949; Spitzer and Härm, 1953a; Rosenbluth and Kaufman, 1958; Braginskii, 1965] was developed nearly 50 years before it was quantitatively validated in a laboratory plasma [Burke et al., 1998; 2000b] using this configuration. The experiment consists of a narrow cylindrical region of warm plasma (Te ≈ 5 eV) embedded in a cold background plasma. The heated filament of plasma is manipulated to control the temperature gradient, thus driving classically described heat transport. Classical transport is always initially observed in this experiment, but if the heating is applied over a longer time interval or above a certain temperature threshold, the system transitions to a regime of enhanced, or anomalous, transport greater than that predicted by classical theory. Turbulent fluctuations are observed in this regime, and while some of their features have been investigated [Burke et al., 2000a], a mature understanding requires more detailed experimentation.

A summary of the transition from classical to anomalous transport in this experiment is provided by Fig. 1.3, a spectrogram of Isat power spectra (color contour) with the fluctuating component of a single Isat trace (I~sat, solid white) overplotted. The heated filament is generated at time t = 0 ms and maintained until t = 12 ms. Prior to t = 6 ms there is one well defined mode between 25 and 45 kHz. This is a drift-Alfvén eigenmode that has been detailed extensively both theoretically [Peñano et al., 2000] and experimentally [Burke et al., 2000a]. The presence of this coherent mode does not alter the transport levels, i.e., the observed transport remains classical during the presence of the drift-Alfvén wave. After t = 6 ms, a transition from coherent spectra to broadband spectra occurs. The transition is delineated by the disappearance of the coherent drift-Alfvén line into a broad region of power spread across many frequencies. Transport levels are enhanced, or anomalous, during times after this transition. All of this behavior occurs within a range corresponding to low frequency turbulence. The low frequency range is an area of active research within plasma physics, as discussed in the following section.

1.2 Low Frequency Turbulence

Identification of the processes underlying low frequency turbulence in magnetized plasmas is an ongoing challenge within plasma physics [Krommes, 2002]. By “low frequency” it is meant that the frequency of the fluctuating quantity, ω, is less than the ion cyclotron frequency, Ωi. This topic is relevant to the magnetically confined fusion research community because turbulent fluctuations can enhance the transport of mass and energy [Horton, 1999], thereby degrading tokamak performance. The topic is also of interest in space plasma efforts [Committee on Solar and Space Physics, 2004] in which enhanced transport across naturally occurring boundaries in temperature, density, and magnetic field can result in major effects observable by space and ground-based instruments.

A significant effort has been devoted to the identification of universal behaviors in the spectra of turbulent fluctuations. A rich literature exists for both laboratory [Chen, 1965, Kamataki et al., 2007, Labit et al., 2007, Škoric and Rajkovic, 2008, Budaev et al., 2008, Pedrosa et al., 1999, Stroth et al., 2004, Carreras et al., 1999, Zweben et al., 2007] and space [Tchen, 1973, Kuo and Chou, 2001, Milano et al., 2004, Zimbardo, 2006, Bale et al., 2005] plasmas. The cited references are merely a representative sample of the available literature. Kolmogorov's early contribution [Kolmogorov, 1941] has had a major influence in these activities [Frisch, 1995]. In particular, that pioneering work makes a general prediction of algebraic spectral dependencies that has resulted in most modern spectral results being presented in a log-log format. Piecewise fits are then applied in order to extract power-law values for comparison to the Kolmogorov prediction. A large dynamic range is compressed by the log-log display, however, and important features related to the turbulence may be obscured. An exponential frequency spectrum is one such important feature, and its presence and underlying mechanism are described in this thesis.

In the following an abbreviated description is presented regarding the major results obtained in this thesis. These are:

- Confirmation of previous results due to filamentary geometry.
- Observation of a spontaneous thermal wave in the absence of an externally driven source.
- Observation of exponential power spectra associated with anomalous transport that are generated by Lorentzian pulses in measured time series data.

The previously cited work of Burke et al. was performed in the LAPD device prior to 2000. The present studies are performed in the machine that replaced the original LAPD, which has been named the LAPD-U, signifying it as an “upgrade” over the original. With similar plasma production sources and plasma properties, the major difference between these two machines is their length along the applied background magnetic field. The LAPD plasma was less than 9.4 m in length, while the LAPD-U plasma is approximately 15 m long. Throughout this thesis the LAPD-U designation is used to emphasize that this work was performed on a different linear device than the foundational efforts conducted on the LAPD.

Precisely because the LAPD-U is a different machine, all of the results in this thesis confirm that fundamental plasma physics is responsible for the observed phenomena, rather than the geometry of a particular device. The LAPD-U provides boundary conditions that were not present in the previous device, yet the coherent modes observed are the same, along with the important features of transport that have been re-observed.

The existence of low frequency, coherent, fluctuations is documented in the earlier work within this experimental environment [Burke et al., 2000a]. Observations show this is a coherent mode that, while seemingly unrelated to the generation of low frequency turbulence, is capable of strongly modulating the drift-Alfvén modes that are excited by the filament. These fluctuations are identified here as representing a spontaneously excited thermal wave. A thermal wave is the diffusive propagation of a temperature oscillation driven by a similarly oscillating source. Although thermal waves in plasmas have been studied [Gentle, 1988, Jacchia et al., 1991], and even manipulated to deduce subtle issues of anomalous transport in tokamaks [Mantica et al., 2006, Casati et al., 2007], controlled experiments in basic plasma devices are made difficult by the geometry of a magnetized plasma. The complexity arises due to the large difference in the thermal conductivities along and across the magnetic field, κ|| >> κ⊥, requiring plasmas with significant length along the magnetic field direction.

The discrepancy in thermal conductivities results in an extended structure that acts as the cavity for a thermal wave resonator [Shen and Mandelis, 1995]. The results presented here represent thermal wave oscillations that appear without the setting of a driver. Other experimental work involving this phenomenon, including those referenced, drive the wave with a controllable heat source. The drive source is as yet unidentified here, though it is demonstrated that the electron beam heating is not the direct cause, i.e., there are no coherent low frequency oscillations in the beam source. A possible candidate for the drive source is the heat-flux instability that is found in the solar wind [Forslund, 1970] and in laser-plasma interactions [Tikhonchuk et al., 1995]. This work has been summarized in Pace et al., [2008b].

Exponential spectra from a variety of experiments are found throughout the published literature. This is made possible by the semi-log display some researchers have chosen to use for the results. Figure 1a of Xia and Shats [2003] exhibits exponential behavior over four orders of magnitude from floating potential measurements. This experiment was performed in a helical device that reported proof of an inverse cascade. Figure 1 of Fiksel et al [1995] features an exponential dependence in an experiment observing magnetic fluctuation-induced heat transport. Figure 6b in Kauschke et al. [1990] shows an exponential spectrum with embedded coherent modes for a nonlinear dynamics experiment in a low pressure arc discharge plasma. Figure 7 of Maggs and Morales [2003] presents an exponential spectrum from magnetic fluctuations at the free edge of the LAPD-U. The exponential spectra in these examples are readily identified because of the semi-log plot display. The appearance of such spectra in a wide variety of experiments suggests that it may also be present in other results where it is simply compressed by a log-log display. Figure 1.4 provides an example of an exponential power spectrum from the experiment. In a semi-log display, an exponential dependence appears as a straight line. This behavior is used to calculate the scaling frequency (decay constant) of the spectra for comparison with the time width of the Lorentzian pulses. The coherent peaks in Fig. 1.4 (located at approximately f = 30, 60, 90, and 120 kHz) coexist with the exponential behavior that extends from 20 ≤ f ≤ 200 kHz.

The power spectra, P, of measured fluctuations display an exponential dependence in frequency, P(f) ∝ exp(-2f / fs), where fs is a scaling frequency. This exponential feature is only observed after the temperature filament transitions into the enhanced, or anomalous, transport regime. Concomitant with the exponential spectrum is the observation of pulses or spikes in the time series data. These pulses, which can be either upward or downward going in amplitude depending on the measurement location, are Lorentzian in temporal shape. A Lorentzian pulse has an exponential power spectrum, leading to the conclusion that the appearance of these pulses causes the exponential spectrum. A brief summary of this work may be found in Pace et al. [2008a].
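The connection between Lorentzian pulses and exponential spectra can be verified numerically with a generic synthetic pulse (this is a demonstration of the mathematical fact, not the thesis data). A Lorentzian of time width τ has Fourier amplitude ∝ exp(−2πτ|f|), so its power spectrum falls as exp(−4πτf), i.e. P(f) ∝ exp(−2f/f_s) with f_s = 1/(2πτ): a straight line on a semi-log plot.

```python
import numpy as np

tau = 0.05                      # Lorentzian pulse width (s)
dt = 1e-3                       # sample spacing (s)
n = 65536
t = (np.arange(n) - n // 2) * dt

pulse = 1.0 / (1.0 + (t / tau) ** 2)      # Lorentzian centered at t = 0
freqs = np.fft.rfftfreq(n, dt)
power = np.abs(np.fft.rfft(pulse)) ** 2

# Fit ln(P) over a band where the decay dominates; the slope should be
# close to -4*pi*tau, equivalently P ∝ exp(-2 f / f_s) with f_s = 1/(2*pi*tau).
band = (freqs > 2.0) & (freqs < 15.0)
slope = np.polyfit(freqs[band], np.log(power[band]), 1)[0]
print(slope, -4.0 * np.pi * tau)  # both ≈ -0.63 per Hz
```

This is why the measured pulse widths can be compared directly with the scaling frequency extracted from the semi-log spectra.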

This thesis is composed of seven chapters. Chapter 2 presents the laboratory device in which this study is performed, along with a review of the various diagnostics employed to measure plasma properties. Chapter 3 details the results surrounding the identification of a spontaneously generated thermal wave in the filament. This is the culmination of an effort to identify coherent oscillations featuring a lower frequency than the other previously known modes of the system. The thermal wave is likely to be supported in many filamentary plasma systems including the solar corona. Modification of the temperature profile by the thermal wave leads to large amplitude pulses in time series signals. These pulses are discussed in Chapter 4, which also presents evidence for a universal characteristic of power spectra in turbulent plasmas. Such spectra exhibit exponential dependencies in frequency and are found to result from the Lorentzian shape of the measured pulses. Similar spectra, and in many cases similar pulses, are observed in the existing plasma literature and in ongoing research at linear machines and tokamaks. A density gradient experiment performed in the same device as this thesis work also exhibits these pulses and exponential spectra. Chapter 5 compares the density gradient experiment to the temperature filament experiment as part of the argument for the universal nature of the exponential spectra and Lorentzian pulses. Conclusions and a unifying summary of these topics are presented in Chapter 7. Finally, the appendices present results on plasma flows in relation to the primary topics, techniques of wavelet analysis that have been applied in power spectra calculations, and a summary of techniques employed to detect the Lorentzian pulses that generate exponential power spectra.

**General Information**

Title: Spontaneous Thermal Waves and Exponential Spectra Associated with a Filamentary Pressure Structure in a Magnetized Plasma

Department: Physics and Astronomy

Institution: University of California, Los Angeles

- Download: PDF (5.5 MB, 162 pages)

**Table of Contents**

2 Experimental Setup and Overview of the Temperature Filament

3 Identification of a Spontaneous Thermal Wave

4 Exponential Frequency Spectrum and Lorentzian Pulses

5 Comparison Between Temperature Filament and Limiter-edge Experiments

6 Plasma Flow Parallel to Background Magnetic Field

Appendix A Wavelet Analysis to Calculate Power Spectra

Appendix B Pulse Detection Techniques

An experimental study of plasma turbulence and transport is performed in the fundamental geometry of a narrow pressure filament in a magnetized plasma. An electron beam is used to heat a cold background plasma in a linear device, the Large Plasma Device (LAPD-U) [W. Gekelman et al. Rev. Sci. Instrum. **62**, 2875 (1991)] operated by the Basic Plasma Science Facility at the University of California, Los Angeles. This results in the generation of a filamentary structure (~ 1000 cm in length and 1 cm in diameter) exhibiting a controllable radial temperature gradient embedded in a large plasma. The filament serves as a resonance cavity for a thermal (diffusive) wave manifested by large amplitude, coherent oscillations in electron temperature. Properties of this wave are used to determine the electron collision time of the plasma and suggest that a diagnostic method for studying plasma transport can be designed in a similar manner. For short times and low heating powers the filament conducts away thermal energy through particle collisions, consistent with classical theory. Experiments performed with longer heating times or greater injected power feature a transition from the classical transport regime to a regime of enhanced transport levels. During the anomalous transport regime, fluctuations exhibit an exponential power spectrum for frequencies below the ion cyclotron frequency. The exponential feature has been traced to the presence of solitary pulses having a Lorentzian temporal signature. These pulses arise from nonlinear interactions of drift-Alfvén waves driven by the pressure gradients. The temporal width of the pulses is measured to be a fraction of a period of the drift-Alfvén waves. A second experiment involves a macroscopic (3.5 cm gradient length) limiter-edge geometry in which a density gradient is established by inserting a metallic plate at the edge of the nominal plasma column of the LAPD-U.
In both experiments the width of the pulses is narrowly distributed, resulting in exponential spectra with a single characteristic time scale. The temperature filament experiment permits a detailed study of the transition from coherent to turbulent behavior and the concomitant change from classical to anomalous transport. In the limiter experiment the turbulence sampled is always fully developed. The similarity of the pulse shapes and fluctuation spectra in the two experiments strongly suggests a universal feature of pressure-gradient driven turbulence in magnetized plasmas that results in non-diffusive cross-field transport. This may explain previous observations in helical confinement devices, research tokamaks and arc-plasmas.
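The claim that narrowly distributed pulse widths yield an exponential spectrum with a single characteristic time scale can be sketched numerically. The following Python snippet (illustrative parameters and pulse counts, not the measured values) superposes Lorentzian pulses with random arrival times, random signs, and a ~5% spread in width; a single exponential slope, set by the mean width, describes the resulting spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1e-5
t = np.arange(0.0, 1.0, dt)

# Superpose many Lorentzian pulses whose widths are narrowly
# distributed about tau0; arrival times, signs, and amplitudes vary.
tau0 = 0.5e-3
signal = np.zeros_like(t)
for _ in range(200):
    t0 = rng.uniform(0.1, 0.9)
    tau = tau0 * (1 + 0.05 * rng.standard_normal())   # ~5% width spread
    amp = rng.uniform(0.5, 1.5) * rng.choice([-1, 1])  # up or down pulses
    signal += amp * tau / ((t - t0)**2 + tau**2)

spectrum = np.abs(np.fft.rfft(signal))**2
freq = np.fft.rfftfreq(len(t), d=dt)

# A single exponential, P(f) ~ exp(-4*pi*tau0*f), fits the spectrum
# over the range where the pulses dominate.
mask = (freq > 200) & (freq < 4000)
slope = np.polyfit(freq[mask], np.log(spectrum[mask]), 1)[0]
print(slope / (-4 * np.pi))   # close to tau0
```

If the widths were instead broadly distributed, the spectrum would be a superposition of exponentials with different slopes and would curve on a semilog plot; the observed straight-line (single-slope) spectra are what indicate a narrow width distribution.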