This is the talk page for discussing improvements to the Heterodyne article. This is not a forum for general discussion of the article's subject.
This article is rated C-class on Wikipedia's content assessment scale.
The contents of the Heterodyne detection page were merged into Heterodyne. For the contribution history and old versions of the redirected page, please see its history; for the discussion at that location, see its talk page.
Brain behavior
Could someone include a reference for the stuff (ahem) on brain behavior? At best it isn't well written (not clear which wave phenomenon in the brain is being referred to, e.g. the electrical pulses? If so, what is with the comment on low frequency as it relates to the ear?) At worst, it seems like unsupported pseudoscience. Thanks —The preceding unsigned comment was added by Al Biglan (talk • contribs) 18:17, 3 June 2005 (UTC)
One ref is "Auditory beats in the Brain", Gerald Oster, Scientific American Oct 1973. See also http://gnaural.sourceforge.net which discusses a program for experimenting with brain beats; the page notes that "snake oil salesmen" have made claims about brain beats. It is not an area of any important research and is basically a curiosity; it should not be at the beginning of the talk section on "Heterodyne" except as a coincidence. 76.105.16.80 (talk) 22:09, 3 May 2011 (UTC)
- In addition to what you said, I must say that even the introduction is pretty mushy: what happens when two instruments are playing out of tune is that their sounds get linearly mixed, and thus the sum of two sines at close frequencies is equal to the product of a "beating" sine at a low frequency and an "average" sine at the average frequency. Given that it is a product and not a sum, this does not mean that any new frequencies have been created: for example, if you high-pass it at a frequency higher than the "beating" frequency, the signal will be kept unchanged and the beating will be preserved... Which makes me wonder about the general correctness of Gary D and Foobar's contributions, and their extrapolation to "brain waves". --Ma Baker 18:02, 11 Jun 2005 (UTC)
- While I agree the intro is mushy, the article is close to correct in its interpretation of the mixed frequencies:
- if you mix a frequency f1 = (a + h)
- and a frequency f2 = (a − h)
- you get a new sound which can be expressed as the product of the mixed (average) frequency a and the beat frequency f1 − f2.
- The difference referred to in the article is no doubt referring to the difference in the beat frequency formula. --Anuran 00:41, 23 October 2005 (UTC)
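The sum-to-product identity behind this can be checked numerically. A minimal NumPy sketch, with 224 Hz and 220 Hz as assumed example tones (so a = 222 Hz and h = 2 Hz):

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs            # one second of samples at 48 kHz
f1, f2 = 224.0, 220.0             # two nearby tones (example values)
a = (f1 + f2) / 2                 # average frequency
h = (f1 - f2) / 2                 # half the difference

# Linear sum of the two tones
summed = np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)

# Product form: a slow cosine envelope at h times a carrier at the average frequency a
product = 2 * np.cos(2*np.pi*h*t) * np.sin(2*np.pi*a*t)

print(np.allclose(summed, product))   # True: the two expressions are the same signal
```

The envelope factor is at h = 2 Hz, but its magnitude peaks twice per cycle, so the loudness swells four times per second, which is the f1 − f2 beat rate referred to above.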
- I'm very skeptical about this audio brain stuff - the brain does not receive sine waves but rather what is roughly a narrow-window time-average of the discrete audio power spectrum (at sampling frequencies defined by the lengths of various hairs). Because this destroys things such as phase information, there is no way for two audio signals beyond the hearing range of the human ear to get information into the brain (I doubt the human brain can perceive being shaken at audio frequencies). Is there a *REAL* reference on this floating around somewhere? Because this is pretty ridiculous stuff.
- From what little I know about the auditory beat effect, this interpretation seems to be right. The ear is basically a frequency domain sampler, so any beat frequency would have to be produced outside of the brain. If someone could verify this, I'd bet that parts of the ear basically behave nonlinearly, so you get a difference frequency out. This would mean that some ear component has a non-constant acoustic impedance, which is not hard to imagine. The eardrum, for instance, could very well be the source, as it is a circular membrane, which I believe has a non-linear stress-strain relationship. In any case, I think there's enough ambiguity to not mention the human auditory system at all. --Jschultz 03:07, 27 October 2006 (UTC)
- Yes, this article does seem to be confusing two concepts. Searching for "binaural beats" on Google, I found that almost all of the references were pseudoscientific, but right at the bottom of page 5 I found a reference [1] from a credible institution, the Life Sciences Department of Sussex University (England). A bit later on I found another one [2] from Rutgers University, if you prefer the other side of the pond. These suggest that binaural beats are real, but belief in a connection between these and "brainwaves" takes us into quack territory. The only serious investigation that I can find into a link between binaural beats and EEG output is a report from Penn University[3] on an experiment that produced a negative result (no link). --Heron 20:56, 22 August 2005 (UTC)
Heyyy!!
This article has two de: links, is that legal? Thanx 69.142.2.68 00:22, 19 September 2005 (UTC)
Cleanup
My fellow wikipedians - I've added sections to this article (as it is in quite a mess!); however, I still think it could use some more reorganization. An image or two wouldn't hurt either. Maybe also more uses for electronic heterodynes?
formula contradicts other sources
This article says that the beat frequency is equal to the difference of the frequencies divided by 2. Nearly every other source says it is merely the difference (f1 - f2). Does someone want to set this straight?
- There's a difference between the beat frequency and the difference frequency. If you create sin(100Hz)*cos(2Hz), you will hear four beats of 100Hz per second, and not 2. The reason is that cos flips between +1 and −1, but the amplitude of the signal you are hearing is really the absolute value of it, thus the beating of amplitude is really |cos(2Hz)|, which repeats at 4 Hz (but is not shaped like a cosine). - Rainwarrior 17:52, 16 August 2006 (UTC)
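The |cos| point is easy to check numerically; a rough sketch using the 100 Hz and 2 Hz values from the comment above:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                        # one second of samples
signal = np.sin(2*np.pi*100*t) * np.cos(2*np.pi*2*t)

# Perceived loudness follows the magnitude of the envelope, |cos(2*pi*2*t)|.
envelope = np.abs(np.cos(2*np.pi*2*t))

# Count the nulls (quiet instants) of the envelope within the one-second window
nulls = np.sum((envelope[1:-1] < envelope[:-2]) & (envelope[1:-1] < envelope[2:]))
print(nulls)   # 4 -- the amplitude dips to zero four times per second, i.e. four beats
```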
Brain wave stuff good
As someone who works on how the brain processes sound, I found this section pretty unbelievable, and hence added a "Disputed" tag. For example, you would not hear a mix of 100 Hz and 102 Hz as "101 Hz", because your brain effectively performs a Fourier transform on the sound you hear. Now, it is true that you could not properly distinguish the tones; they would probably just sound slightly out-of-tune.
While the article gets it right that epilepsy can be temporarily induced by a periodic stimulus (which is one of the things my research group works on), virtually the entire rest of the article is based on pseudoscience or misunderstandings.
I vote that the whole section be axed.
And yes, the beat frequency is f1 − f2. (This was a question on an exam I was marking last semester!)
- I very much disagree with: "your brain effectively performs a Fourier transform on the sound you hear". There is much, much more going on than a Fourier decomposition in the brain. If you work with computer sound recognition (you may, you didn't say exactly), you should know just how inadequate Fourier analysis is on its own for that purpose. Now, I don't know what this bit about "brain wave frequencies" is all about, perhaps it should be removed, but binaural beats are a well known and easily demonstrable process. I would agree that the language is inaccurate, though. The heterodyning principle is that the signal produced by mixing 100Hz and 102Hz is equivalent to the amplitude modulation of 101Hz with 1Hz (perceived as a 2Hz beating). It is not so much that the brain will perceive one or the other, but that the perception of it is ambiguous between the two, mostly (but not entirely) resolved by which parts of the signal lie in the audible range. - Rainwarrior 17:04, 16 August 2006 (UTC)
- Also, binaural beating is generally a quieter effect than the beating of a mixed signal. So, 100 in one ear with 102 in the other ear is going to have a stronger perception as an unmixed signal than as the 101Hz * 1Hz idea. Quickly doing the experiment on myself, I would say that I hear a tone that is wavering in pitch (up and down) with slight amplitude changes, rather than two distinct pitches. If actually mixed, I definitely hear it as 101Hz with a 1Hz amplitude modulation. But, if I take two tones that are more distinct, and gradually slide them into 100 and 102Hz, they are felt as more distinct (in the binaural test), but in the mixed test the distinction seems to be lost at some point. - Rainwarrior 17:49, 16 August 2006 (UTC)
An Oscilloscope and spectrum analyzer show it all.
My experience with audio and using an oscilloscope leads me to these conclusions on heterodyning. Anyone with a waveform generator and oscilloscope can do the experiments and prove it to themselves.
When 2 audio (or r.f.) signals are added together, the higher frequency "rides" on top of the lower frequency. The peak-to-peak amplitude of the higher frequency does not change; however, its amplitude in reference to 0 V rises and falls at the lower signal's frequency. A speaker would produce this rising and falling at the frequency of the lower-frequency signal. A frequency spectrum analyzer will show only the 2 original signals. There is no mysterious beat frequency... only the rising and falling of the higher frequency at the lower frequency's rate. ... I assume that when 2 independently created audio signals come into the ear, the results are the same. What the brain does with them I do not know.
If signals are mixed nonlinearly, 4 signals result: the 2 originals and the sum and difference of those originals. The trigonometric product of 2 sine waves verifies this... hence nonlinear mixing is often referred to as "multiplying" the 2 original sine wave signals together. To produce this multiplication effect with 2 original sine waves, the higher frequency is usually modulated with the lower frequency. This process, done electronically, involves changing the peak-to-peak amplitude of the higher frequency at the lower frequency's rate. The peak-to-peak amplitude of the higher frequency is never higher than the original peak-to-peak value, but it varies at the rate of the lower frequency. An oscilloscope very clearly shows this waveform, and a spectrum analyzer will show 4 frequencies present in the waveform.
I do not know how this nonlinear mixing takes place in nature, but if it does then 4 distinct signals should result. GregFiore 19:08, 16 August 2006 (UTC) Greg Fiore
- I'm not sure what you're calling nonlinear mixing... but because of the trig identity you've discussed, the amplitude modulation (nonlinear mixing?) of two sine waves is indistinguishable from the addition (linear mixing?) of two corresponding sine waves. If doing amplitude modulation, it doesn't matter which is the "modulator" and which is the "carrier", because the result either way is identical. Whether or not you get 4 frequencies in your analysis usually depends on whether the outer two tones are within the detectable range (frequently they are not), though the two original "additive" tones usually come up stronger, and the two heterodyned frequencies may have additional harmonics. - Rainwarrior 03:55, 17 August 2006 (UTC)
Mr GregFiore, dunno if you are still reading this 7 years later, but you have gained totally the wrong impression in your first comment. But rather than simply name calling, why don't I give you an example. Consider a twin-engined aircraft, with the engines not quite in sync. You will hear - and no doubt have heard, many times - the resultant beat. It is very easily audible.
Old_Wombat (talk) 07:49, 10 June 2013 (UTC)
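The oscilloscope and spectrum-analyzer observations in this thread can be reproduced numerically. A sketch, with assumed tones of 10 kHz and 1 kHz and a square-law term standing in for a generic nonlinear mixer:

```python
import numpy as np

fs = 100_000
t = np.arange(fs) / fs                             # one second at 100 kS/s
f1, f2 = 10_000.0, 1_000.0                         # example tone frequencies
x = np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)    # linear addition of the two tones

def tones(sig, threshold=0.1):
    """Frequencies (Hz) of spectral lines whose magnitude exceeds the threshold."""
    mag = np.abs(np.fft.rfft(sig)) / len(sig)
    freqs = np.fft.rfftfreq(len(sig), d=1/fs)
    return freqs[mag > threshold]

print(tones(x))      # [ 1000. 10000.]  -- addition leaves only the two original lines
print(tones(x**2))   # [ 0. 2000. 9000. 11000. 20000.]  -- a square-law mixer adds a DC
                     # term, second harmonics, and the sum and difference frequencies
```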
Heterodyne Harmonics
I think missing here is the acknowledgment that mixing can occur through any non-linear device/function. So we're not generalizing enough to say that there are x number of frequencies generated, frequency y is here, frequency z is here, etc. The implication in most of these examples, I think, is that a quadratic is used, but this is only a commonly used approximation for some mixers, notably diode mixers. I think it would be more appropriate to indicate that the harmonics that are generated are determined by the function that governs the device's behavior, and that generally at least a difference term is produced. To some degree the method of mixing is also important to the harmonic production. Bolometric mixers, for instance, act like capacitive envelope detectors and inherently filter out higher harmonics. I know it's a fine point because it's only briefly discussed in the first paragraph, but the topic seems to be causing confusion in the discussion.--Jschultz 01:42, 23 October 2006 (UTC)
- Many different frequencies can be generated, but in the usual case it is the difference frequency, much lower than the inputs, that is important. Everything else is filtered out. Worst case could be twice the difference frequency, which would be slightly difficult to filter. But not that much harder. Gah4 (talk) 12:09, 11 June 2024 (UTC)
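To illustrate the point that the set of products depends on the device law, here is a rough sketch; the square-law and added cubic terms are arbitrary stand-ins for different mixer characteristics, with 10 kHz and 1 kHz as example inputs:

```python
import numpy as np

fs = 200_000
t = np.arange(fs) / fs
f1, f2 = 10_000.0, 1_000.0
x = np.cos(2*np.pi*f1*t) + np.cos(2*np.pi*f2*t)

def lines(sig, thresh=0.01):
    """Frequencies (Hz) of significant spectral lines in the signal."""
    mag = np.abs(np.fft.rfft(sig)) / len(sig)
    return np.fft.rfftfreq(len(sig), d=1/fs)[mag > thresh]

# Pure square-law device: DC, second harmonics, and the sum and difference only
print(lines(x**2))              # 0, 2000, 9000, 11000, 20000 Hz
# Add a cubic term and third-order products (2*f1 +/- f2, f1 +/- 2*f2, 3*f1, 3*f2) appear too
print(lines(x**2 + 0.5*x**3))   # adds lines at 1000, 3000, 8000, 10000, 12000, 19000, 21000, 30000 Hz
```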
Modifying Example in Header
I'm making a tweak to the example in the header that cites a 3000 Hz and a 3001 Hz frequency to produce an "audible beat frequency of 1 hertz" (emphasis added) and changing it to a 3000 Hz / 3100 Hz / 100 Hz example, because 1 Hz is not audible to humans (see Human_hearing#Localization_of_sound_by_humans:_a_brain_circuit).
If anyone disagrees, please discuss.
Need different definition
The definition is not consistent with "beating", which is the article's favorite example. If beating is truly an example of heterodyning, then the definition is wrong. For example: 1) The definition says "...in a nonlinear device". However, beating is a linear phenomenon that occurs in linear devices and media (it's just addition!). The definition should be expanded to include linear phenomena also... if beating is to be included. 2) "...the generation of new frequencies..." - beating doesn't really create new frequencies. A spectrum of the beating signal would actually show the original two frequencies unchanged. The beating frequency wouldn't show up anywhere. By contrast, multiplication actually creates REAL signals at the sum and difference frequencies. Instead of saying "new frequencies" maybe "new frequencies (real or apparent)". 3) "mixing of two frequencies". Using the word "mixing" would be ok in general usage, but in this field it has a technical meaning which would exclude beating (for example, see wikipedia for "Frequency Mixer"). I suggest using a word that doesn't have the same technical association, for example "combining" instead of "mixing". It's great to use the word mixing elsewhere in the article as an example, but not in the definition where it excludes other types of heterodyning. Catapultsam 22:59, 14 April 2007 (UTC)
- Speaking of definitions, I didn't see this discussed already, but if we're talking about mixers and audio in one direction, saying that what a radio engineer calls a mixer, an audio engineer calls a ring modulator, I wonder if we ought to talk about it in the other direction as well, and say that the audio device called a "mixer" is an X to radio engineers. Adder? I don't know what to call it. Obviously not really directly related to heterodynes, but probably helpful to the inexperienced reader when an article is discussing both radio and audio, and using the word "mixer." When I first started working with radio, this tripped me up.
- This is tricky. In the usual case, beating requires something non-linear. In the audio case, it is non-linearities in the ear that allow us to hear a difference. A perfectly linear ear would not hear one. Gah4 (talk) 12:13, 11 June 2024 (UTC)
Mathematical Treatment/Derivation?
I *believe* I once read a mathematical derivation of how heterodyning works in a book by Donald Wehner.. it may have been the 2nd Ed. of "High Resolution Radar", but my point is that heterodyning made ALL THE SENSE IN THE WORLD to me at the time I read that derivation. When I read this Wiki article, heterodyning seems very vague and mysterious. Perhaps a mathematical treatment could illuminate things... I recall it being a very simple derivation. —Preceding unsigned comment added by 69.183.146.246 (talk) 04:38, 7 February 2008 (UTC)
Recall that the product of two complex numbers is a complex number whose argument is the sum of their arguments, and whose magnitude is the product of their magnitudes. Therefore, if f(t) = A cis(ut) and g(t) = B cis(vt), f(t)*g(t) = A*B cis(ut + vt) = A*B cis((u + v)t). The frequency of a sinusoid is the coefficient on t inside it (or that coefficient over 2*pi, depending on your convention), so we multiplied a sinusoid with frequency u by one with frequency v, and got one with frequency u+v. I'm eliding some complexity here, specifically aliasing and phase, but I hope that gives you some intuition for what's going on.
- The "Mathematical principle" section has a derivation using ordinary trig functions. Your complex exponentials is an interesting alternative approach, though. --ChetvornoTALK 19:57, 15 September 2020 (UTC)
Important reference
Please see the section on Frequency Counter, and Prescaler. —Preceding unsigned comment added by 70.177.16.209 (talk) 17:05, 3 June 2008 (UTC)
Name change
I disagree with the unilateral name change of this article. I think "heterodyne" is the much more recognisable term than the verb form. A quick google seems to indicate it is not just me: ghits for "heterodyne" = 14.6 million, for "heterodyning" = 60,800. I propose to move it back to heterodyne. SpinningSpark 19:52, 5 October 2008 (UTC)
- Agree.--79.74.165.39 (talk) 19:11, 10 October 2008 (UTC)
It may be clearer with graphics
It may be clearer if graphics were shown with the original sine waveforms and the mixed ones, or a screenshot of an oscilloscope...
- I agree that a signal diagram would be useful. I think it is most illustrative to show what's happening in the frequency domain. Time-domain representation is not particularly insightful. --Kvng (talk) 14:29, 14 September 2009 (UTC)
Jargon? What jargon?
As a matter of fact, this article contains very little jargon. Perhaps this was not the case when the notation was added, but as it stands today, there is no further need of jargon removal/definitions.
While many of the terms used in the article are unfamiliar to the unwashed masses, they are fairly ordinary words, often used for varied purposes having nothing to do with radio, or electronics. We aren't expected to dumb down articles on technical subjects to the point where functional illiterates can read them, are we? If one hasn't the vocabulary to understand this article, this article will remain undecipherable, whether definitions/explanations are given, or not. Eudaemonic Plague (talk) 21:01, 20 July 2009 (UTC)
- As the only two comments here show no need for the Jargon tag, I am removing it. -tonsofpcs (Talk) 01:00, 9 August 2009 (UTC)
Merger proposal
Upconverting and downconverting currently have two separate pages, but both are really just the application of the heterodyne process. I believe they should be merged into a new section of this article. Dsmouse (talk) 05:47, 14 September 2009 (UTC)
- Ugh. I just noticed there is also Radio frequency downconverter which seems to mirror upconverter. Dsmouse (talk) 06:16, 17 September 2009 (UTC)
- I agree with the proposal. I'm not sure the diagrams in the Downconverter article are useful for the Heterodyne article. It looks like there's a technical misunderstanding there (see Talk:Downconverter) --Kvng (talk) 14:26, 14 September 2009 (UTC)
I would say keep it separate; the upconverter article should have a bit more on the real-world application and be expanded rather than merged. Graeme Bartlett (talk) 21:36, 14 September 2009 (UTC)
- Are you in favor of merging the Downconverter article? --Kvng (talk) 17:03, 15 September 2009 (UTC)
- Weakly In favor of both mergers. --ChetvornoTALK 09:23, 18 October 2009 (UTC)
- I think there is a place for these articles. Firstly, it is helpful to readers who want a clear description of what these devices are all about without the in-depth technical descriptions of the heterodyne process. Secondly, heterodyning is not the only method of achieving a frequency shift - although, admittedly, it is ubiquitous. Technical errors in the existing articles are a non-reason for merging, editing is the answer to that. I do, however, think that the upconverter and downconverter articles could usefully be merged with each other. SpinningSpark 09:45, 4 April 2010 (UTC)
- Agree; there is next to no content in the upconverter and downconverter articles, and the reader would therefore be better served by being redirected to the Applications section on this article. If the merge into this article does not reach consensus, then I think there's no question that at the very least, the two stub articles should be merged into one. Oli Filth(talk|contribs) 14:47, 4 April 2010 (UTC)
- Merged. Cutting and pasting from one article to another then inverting a few words doesn't really make the encyclopedia stronger. --Wtshymanski (talk) 20:46, 25 August 2010 (UTC)
Thanks for taking care of that. --Kvng (talk) 21:50, 25 August 2010 (UTC)
Heterodyning and modulation
I recently added this sentence to the introduction: "It [heterodyning] is used for moving signals carrying information from one frequency to another, and for modulation and demodulation." This was reverted with the comment: "rewrite changes meaning. hetrodyne does not directly modulate or demodulate." I think this is erroneous. Although the term "heterodyne" is not often used to describe modulation, heterodyning (mixing two frequencies to generate new frequencies) is the process behind AM and several other types of modulation. In AM modulation, two frequencies, the carrier and the modulation frequency, are mixed in a nonlinear mixer, resulting in sum and difference frequencies (the sidebands). It's just a matter of terminology whether it's called modulation or heterodyning. I think the sentence should be put back. --ChetvornoTALK 07:01, 22 December 2010 (UTC)
- Can you find a reference to support this? I agree that a frequency mixer is used for AM and technically that's the same operation as heterodyning but I'm not convinced it is appropriate to apply the heterodyne term to this process. --Kvng (talk) 18:59, 23 December 2010 (UTC)
- I would offer that mod/demod is indeed a fundamental application of heterodyning (c.f. superhet receiver etc.). Whether or not it's appropriate to mention this so early on in the article is another matter. Oli Filth(talk|contribs) 19:28, 23 December 2010 (UTC)
- "A mixer can be used as a modulator, and vice versa. The language you use depends on whether you are using it to translate a low frequency "baseband" of information up to high frequencies (in which case you call it a "modulator") or translating a modulated RF band down to baseband, or perhaps an intermediate (IF) band...(in which case you call it a "mixer")" Paul Horowitz (2006) The Art of Electronics, p.897
- "The process of combining two or more frequencies in a nonlinear device and producing new frequencies is called mixing, modulating, heterodyning, beating, or frequency conversion." Basic electronics (1973) U. S. Bureau of Naval Personnel, p.338
- --ChetvornoTALK 20:20, 23 December 2010 (UTC)
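The point in those quotes can be demonstrated directly: multiplying a carrier by (1 + modulation) is ordinary AM and produces the sum and difference (sideband) frequencies. A sketch with an assumed 10 kHz carrier and 1 kHz modulating tone:

```python
import numpy as np

fs = 100_000
t = np.arange(fs) / fs
fc, fm = 10_000.0, 1_000.0              # carrier and modulating tone (example values)
m = 0.5                                 # modulation index

# Standard AM: the multiplication (heterodyning) term plus the carrier itself
am = (1 + m*np.cos(2*np.pi*fm*t)) * np.cos(2*np.pi*fc*t)

mag = np.abs(np.fft.rfft(am)) / len(am)
freqs = np.fft.rfftfreq(len(am), d=1/fs)
print(freqs[mag > 0.05])   # [ 9000. 10000. 11000.] -- carrier plus difference and sum sidebands
```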
The "Beat" Receptor
[Image: Instrument to receive radio waves of 1896–1899 structure.]
One of the simplest devices I used in my experiments between my laboratory on South Fifth Avenue and the Gerlach Hotel, and other places in and outside the city, was an instrument constructed in 1896 with a magnet which sometimes was so designed as to give me a very intense magnetic field up to 20,000 lines per square centimeter. In this I placed a conductor, a wire or a coil, and then I would get a note which I amplified and intensified in many ways. From the characteristics of the audible note, I would immediately judge the quality of my apparatus.
When I speak of an audible note, I mean a note audible in a telephone as produced by the diaphragm of a telephone, or by a vibrating wire within the range of audibility.
[The figure] shows the general arrangement of the apparatus. Two condensers are the boxes at each end, and in the center a coil, or two coils, according to necessity, with which I produced a strong magnetic field and in it a wire. These condensers and the wire form a circuit which I tune. The condensers are of comparatively large capacity because my conductor is so short. I usually would transform the current in the receiving circuit and make as close a connection as possible and then tune the circuit to the vibrations. I would also mechanically tune the wire, according to the frequency, to the same note or to a fundamental.
This machine was suitable for transportation. I could put it under my arm with a couple of batteries. I had relays, which were very big, in which I produced (for stationary work) a very intense magnetic field so as to affect the conductor by the feeblest current. Furthermore, I used these relays particularly in connection with beats. When the frequencies were very high, I combined two frequencies very nearly alike. That gave me a low beat. One of the frequencies I sometimes produced at the receiving station, and at other times at both the receiving and transmitting stations. This always gave me the means of producing an audible note. I used machines of this character from 1892, but this specific instrument in my laboratory on Houston Street.
This instrument comprising a magnet and chord or coil in the magnetic field -- I mean a wire or coil in the magnetic field. . . . It was very convenient for producing audible effects because, if I used other forms of a receiver, I had a reading which was not at once translatable. If I listened to a note, I could immediately tell the quality of the transmission. For instance, I would tune a circuit in my laboratory, take it out to another building, and I would receive the signals; and from the quality of the signals I would see how I was progressing.
Counsel
In the experiments that you have spoken of with the instrument of which the picture is shown, what were the distances between the transmitting and receiving stations?
Tesla
The distance at that time, and I think the greatest distance at which I ever received signals from the Houston Street laboratory, was from the Houston laboratory to West Point. That is, I think, a distance of about 30 miles. This was prior to 1897 when Lord Kelvin came to my laboratory. In 1898 I made certain demonstrations before the Examiner-in-Chief of the Patent Office, Mr. Seeley, and it was upon showing him the practicability of the transmission that patents were granted to me. . . .
Schematic
[Image: Ways of receiving practiced in 1898–1900.]
[This] illustrates a device which has already been discussed. I have used it very frequently. This is also a drawing of the period of 1898 to 1900 and illustrates a way of producing audible notes by reaction of the received impulses upon a magnetic field. Here is a transmitter, diagrammatically represented, with an arrangement for varying the intensity of the waves emitted, and on the receiver side I have, as you see, a grounded antenna. [The] secondary [has a conductor under tension in] a very powerful magnetic field, and [the reaction of] this conductor, traversed by the received currents in the field, causes the conductor to emit audible notes.
I had several magnets of various forms, like this, and employed a cord in the field, which, when the current traversed it, vibrated and established a contact. Or, I used a small coil like this one here through which the current was passed, and which by its vibrations produced the signal, an audible note, or anything else I desired.
Counsel
Where was that patent drawing published?
Tesla
That drawing was not published. It is exactly the same thing as the other, but in my writings, which I have before commented upon, I had already shown the reaction of the high frequency and low frequency currents on magnetic fields, and had specified the frequencies within which one has to keep in order to receive efficiently audible notes.
Counsel
This drawing, however, was actually made about when?
Tesla
From 1899 to 1900. The date of these drawings we can easily locate from the bills I have received.
Counsel
And the devices shown here, were they ever used by you?
Tesla
I used them, of course, frequently. That is, in fact, one of the best forms, but as far as the principle of the employment of the magnet is concerned, it is not novel. The only novel thing was that I used my own discovery, which I had made known in my writings before. I was the first to use high frequency currents reacting on a magnetic field and producing audible notes through the reaction. I employed here only what I described in my lectures. It was a logical application of the principles which I then set forth.
Local oscillator
[Image: AC Generator/Local Oscillator.]
[This] oscillator was one of high frequency for isochronous work, and I used it in many ways. The machine, you see, comprised a magnetic frame. The energizing coil, which is removed, produced a strong magnetic field in this region. I calculated the dimensions of the field to make it as intense as possible. There was a powerful tongue of steel which carried a conductor at the extreme end. When it was vibrated, it generated oscillations in the wire. The tongue was so rigid that a special arrangement was provided for giving it a blow; then it would start, and the air pressure would keep it going. The vibrating mechanical system would fall into synchronism with the electrical, and I would get isochronous currents from it. That was a machine of high frequency that emitted a note about like a mosquito. It was something like 4,000 or 5,000. It gave a pitch nearly that of my alternator of the type which I have described.
Of course this device was not intended for a big output, but simply to give me, when operating in connection with receiving circuits, isochronous currents. The excursions of the tongue were so small that one could not see it oscillate, but when the finger was pressed against it the vibration was felt. . . .
Regards,
GPeterson (talk) 23:57, 16 May 2011 (UTC)
- I assume you are posting a primary source so we can interpret it and reach a conclusion. Unfortunately we can't: "Any interpretation of primary source material requires a reliable secondary source for that interpretation." WP:PRIMARY. It's pretty cut and dry. Fountains of Bryn Mawr (talk) 13:49, 17 May 2011 (UTC)
- The text is from a transcript of a pre-hearing interview with Tesla by his legal counsel in 1916 and therefore is a primary source. It has not been posted for interpretation.
- GPeterson (talk) 18:48, 21 May 2011 (UTC)
"Dr. Nikola Tesla and His Achievements" (excerpt)
By Samuel D. Cohen
The Electrical Experimenter, February, 1917
PERHAPS the ever-broadening field of invention has never known a genius more successful in developing far-reaching and original inventions than Dr. Nikola Tesla, whose name is known in every corner of the globe for his scientific achievements. . . .
Dr. Tesla's most important work at the end of the nineteenth century was his original system of transmission of energy by wireless. In 1900 Tesla obtained his two fundamental patents on the transmission of true wireless energy covering both methods and apparatus and involving the use of four tuned circuits. He also obtained a number of other patents at the same time, describing many other improvements. Among these may be mentioned his application of refrigeration and the oscillatory systems with which he obtained remarkable results in his well equipt laboratory on Houston Street, New York City.
In 1901 and 1902 several patents were granted to him describing a number of improvements, among which two have assumed great importance in the radio art; one of these is known under the name of the “tone wheel” and the other the “tikker.” Others are making claim to these inventions, but Tesla was far ahead of any of them.
Fig. 5. The First “Beat” Receptor for Radio-Telegraphy Invented by Tesla, Which Foreshadowed the “Heterodyne.”
At a little later date Tesla secured two patents on what he termed the principle of individualization, involving the use of more than one oscillation for the operation of the receiver. This property is now known under the commercial name of beat receptors. In long protracted interference proceedings carried on in 1903, however, Tesla has been accorded full and undisputed priority over Fessenden and other claimants. His first beat receiver is shown in Fig. 5, which consisted of a steel band stretched above a powerful electro-magnet excited by a high frequency current, causing the steel band to vibrate at an enormous rate. A small sensitive electro-magnet is placed in proximity to the band, in which is produced an alternating e.m.f., and this is acted upon by the received wave.
The apparatus is timed by adjusting the periodicity of the band until the received wave is made audible. The large electromagnet was usually excited by means of an alternating current generator, and this is illustrated in Fig. 6. Like all Tesla inventions, the construction of this oscillator is very unique, consisting of two chambers in the center of which is placed a vibrating membrane. This is inclosed in a magnetic field, consisting of a powerful coil encircling the device as seen and which was excited by a direct current. The membrane was caused to vibrate by passing interrupted, comprest air thru the two chambers by the inlet pipes as indicated. In the process of vibration, an E.M.F. is produced in a coil secured to the vibrating disc.
The above analysis of Tesla's work concludes that Tesla's "Beat" Receptor anticipated Fessenden's radiotelegraphic heterodyne receiver.
GPeterson (talk) 18:48, 21 May 2011 (UTC)
- For there to be any case for "heterodyne technique was first used by Tesla around 1896" or "Heterodyning was invented by Tesla" you would have to show where the above author says "Tesla's 'Beat' Receptor anticipated Fessenden's radiotelegraphic heterodyne receiver" and then show some version of that statement many more times from other reliable sources that are mainstream, i.e. easily found here or here. Again it is pretty cut and dry per Wikipedia policy. Fountains of Bryn Mawr (talk) 02:16, 22 May 2011 (UTC) (reworded for typo and clarity Fountains of Bryn Mawr (talk) 17:36, 22 May 2011 (UTC))
- @Fountains of Bryn Mawr: Peterson created an article for this device, Beat receptor, inadequately sourced as usual, which makes the same WP:REDFLAG claims. I don't know what should be done with this article, whether it should be rewritten or merged, would appreciate some input. I put my criticisms on the Talk page. --ChetvornoTALK 20:24, 4 October 2017 (UTC)
- Hard to find any sources on it that postdate Calvin Coolidge, I found one[4] (calls it a "beat receiver" or "oscillating detector"). A pre-Taft source attributes it to Fessenden[5]. I do not see any sources that attribute it to Tesla. I would say, if this is not a Beat frequency oscillator, then crop the article back to available RS. Fountains of Bryn Mawr (talk) 21:26, 4 October 2017 (UTC)
Picture worth a thousand words?
I wrote a simple BASIC program to graphically show this principle. I could easily grab some screen captures of various frequency pairs (eg, two very similar frequencies, two very different frequencies, different amplitude ratios, etc) and put them here. Would that be worthwhile? Old_Wombat (talk) 07:46, 10 June 2013 (UTC)
Recent changes to sentence on "beats"
I object to the recent change made to the sentence in the introduction about the relation of "beats" to heterodynes. The previous sentence
- "Heterodynes are closely related to the phenomenon of "beats" in acoustics."
was changed to
- "The essential nonlinearity of heterodyning makes it distinct from the linear phenomenon of beats."
The original sentence is more accurate. As Glrx pointed out, beats and heterodynes are related (note the original sentence didn't say they were the same phenomenon); they are duals of each other. They both come from the same trigonometric identity, the prosthaphaeresis identity. The "heterodyne" equation is
cos(2πf1t)·cos(2πf2t) = ½cos(2π(f1 − f2)t) + ½cos(2π(f1 + f2)t)
By changing variables and adding and subtracting, you get from this the "beat" equation
cos(2πf1t) + cos(2πf2t) = 2cos(2π((f1 − f2)/2)t)·cos(2π((f1 + f2)/2)t)
More importantly, however, the word "beat" is often used for heterodyne, for example in the beat frequency oscillator, which actually uses heterodyning, not beating. Both techniques combine two frequencies to produce sum and difference frequencies. The sentence in question is in the introduction, where it will be read by readers who are not experts in the field. To them, the objection that AmarChandra made to the original sentence on Glrx's talk page, that the power in the sum and difference frequencies in the case of beats can be unequal, is simply not important. I think not only should the original accurate sentence be restored to the introduction, but the article needs a section on the relation of beating to heterodyning. --ChetvornoTALK 00:01, 7 September 2013 (UTC)
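Both forms of the prosthaphaeresis identity can be verified numerically; a sketch with arbitrary example frequencies:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2000)
f1, f2 = 11.0, 9.0                      # example frequencies
w1, w2 = 2*np.pi*f1*t, 2*np.pi*f2*t

# "Heterodyne" form: a product yields difference- and sum-frequency components
print(np.allclose(np.cos(w1)*np.cos(w2),
                  0.5*np.cos(w1 - w2) + 0.5*np.cos(w1 + w2)))   # True

# "Beat" form: a sum is a carrier at the mean frequency with a slow cosine envelope
print(np.allclose(np.cos(w1) + np.cos(w2),
                  2*np.cos((w1 - w2)/2)*np.cos((w1 + w2)/2)))   # True
```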
- I'm late returning here, but I agree with Chetvorno. I reverted on the math identity which does not require any nonlinearity. I haven't had a chance to pull the IEEE article, but I'd disagree with the notion that human hearing is linear. Glrx (talk) 20:38, 17 September 2013 (UTC)
- I am even later replying, but there is still an issue. A non-expert who reads the sentence in question (Heterodynes are closely related to the phenomenon of "beats" in acoustics.) may believe that beats are caused when signals are combined nonlinearly. Adding a section on the relation to beats is the best way to clear this up. Short of this, the sentence in question should be changed to note that a different, although related, phenomenon causes beats. 96.27.143.2 14:57, 3 October 2017 (UTC)
- I agree that it would be good to add a section on the difference between "beats" and heterodyning. I will write one when I get the time. However, regardless, I don't think the sentence in the introduction should be changed. The sentence is accurate; it does not say that the two processes are the same. --ChetvornoTALK 20:00, 4 October 2017 (UTC)
- I just re-implemented the changes the user made to that sentence. Although they are not necessary, they increase clarity without bogging the sentence down. The value they provide (tipping off the non-expert that there is something different going on with beats) is worth the extra 3 syllables. The original leaves a little room for ambiguity - this clears that up. -hm39 — Preceding unsigned comment added by 2602:306:B8BC:3300:11A9:F1AE:4DF3:6D36 (talk) 23:02, 23 October 2017 (UTC)
- If this sentence contains too much information, another alternative is "Heterodynes are closely related to, but distinct from, the phenomenon of "beats" in acoustics.", or "Heterodynes are related to the phenomenon of "beats" in acoustics, but do not cause it.". -hm39
- It seems to me that heterodyne and beat are closer than the above suggests. If the ear were a perfectly linear detector of audio waves, we would not hear beats, but instead the two separate frequencies. The ear needs to track the time dependence of its input, and so can't just produce the (infinite-time Fourier) spectral response. The non-linearities are what give us beats. Gah4 (talk) 19:43, 16 September 2019 (UTC)
Tesla wp:redflag
I have removed this paragraph because I do not see this claim in any mainstream sources. It is a "surprising or apparently important claim not covered by multiple mainstream sources" and a "claim that is contradicted by the prevailing view within the relevant community," i.e. two points of WP:REDFLAG. The Leland I Anderson source does not mention Heterodyne[6] and the Samuel D. Cohen source is almost 100 years old and offers no detail for the claim (only appearing in a caption) "The First “Beat” Receptor for Radio-Telegraphy Invented by Tesla, Which Foreshadowed the “Heterodyne.”". We are way short of the "multiple high-quality sources" required at WP:REDFLAG. Fountains of Bryn Mawr (talk) 00:01, 8 October 2014 (UTC)
Merger proposal: Heterodyne detection
- The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
- As there were two supporting statements and no opposition in 3 months, the result of this discussion was to merge. I performed the merger 16 May 2017. ChetvornoTALK 21:23, 16 May 2017 (UTC)
I propose that the short article Heterodyne detection be merged into this one. That content all seems relevant here (if it isn't here already). Layzeeboi (talk) 08:45, 11 February 2017 (UTC)
- Agree; there is absolutely no need for that article. --ChetvornoTALK 11:51, 11 February 2017 (UTC)
Requested info: Accidental heterodyning? (e.g. Tenerife airport disaster)
editThe Tenerife airport disaster article links to this page in the context of two simultaneous radio broadcasts heterodyning into illegibility. Can someone more knowledgeable than me maybe add a section or subsection discussing this phenomenon? Ninjalectual (talk) 02:13, 19 December 2018 (UTC)
- Even though I remember it, I had to read the whole Tenerife airport disaster article before replying. I am assuming that the radios in question use amplitude modulation (AM), otherwise the statements don't make sense. Two AM stations transmitting on exactly the same frequency just add, such that you should hear both conversations. It is my understanding that ATC uses AM for just this reason. Broadcast AM stations (in the US) have a 20 Hz tolerance. In that case, two conversations won't exactly overlap, but it should be obvious that there are two. The carrier of each will look like modulation to the other, at the frequency of the difference between the two. That is, pretty much, the meaning of heterodyne here. At 20 Hz, it is close to inaudible, but with AGC the two will reduce each other's amplitude. However, the article says 3-second-long shrill sound. That implies a carrier difference much more than 20 Hz; shrill implies a much larger frequency difference. Do you know the tolerance for ATC radio frequencies? Gah4 (talk) 08:08, 24 August 2019 (UTC)
- Unsourced speculation/extrapolation/synthesis is a poor argument. Broadcast AM stations are 540–1600 kHz and are nowhere close to aircraft VHF.
- Frequency tolerance for the 108–136 MHz aircraft band is 20 ppm. See 47 CFR 87.133.
- Each transmitter may be off by 20 ppm × 100MHz = 2,000 Hz.
- Glrx (talk) 18:20, 16 September 2019 (UTC)
- Thanks. I tried to find a reference, but didn't find one. I didn't guess that Cornell would have it. 2kHz is probably enough to qualify as shrill. What are pilots supposed to do in that case? Gah4 (talk) 19:32, 16 September 2019 (UTC)
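For what it's worth, the worst-case offset implied by that tolerance is simple arithmetic; a sketch using 121.5 MHz as an arbitrary illustrative channel:

```python
# Two transmitters on the same channel, each off by up to 20 ppm in opposite directions
channel_hz = 121.5e6          # example VHF aircraft-band channel
tolerance = 20e-6             # 20 parts per million (47 CFR 87.133)

per_transmitter = channel_hz * tolerance
worst_case_beat = 2 * per_transmitter
print(round(per_transmitter), round(worst_case_beat))   # 2430 Hz each, up to 4860 Hz between them
```

A beat of a few kilohertz in an AM receiver would indeed come across as a shrill tone.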
"Heterodynes" vs "Heterodyne Frequency"
editThis article uses the term "heterodynes" throughout in place of "heterodyne frequency". Heterodyne, and all its forms, are either an adjective or a verb. There is no recognized noun form of the word that I can find from any dictionary source in print or online. As we all know, words matter. The proper syntax for the two new frequencies created by mixing two original frequencies in a non-linear device is "Heterodyne Frequencys". In the words of Steven Crowder, "Change my Mind". — Preceding [[Wikipedia:Signatures|131.22.200.58 (talk) 21:10, 19 December 2018 (UTC)]] comment added by 131.22.200.56 (talk) 16:22, 19 December 2018 (UTC)
- There is a saying from a famous scientist: Any noun can be verbed. Can we also say that any verb can be nouned? Gah4 (talk) 08:10, 24 August 2019 (UTC)
- There were three, one of which is the plural of superheterodyne (receivers). I removed one, as it already was followed by "frequency". Now they aren't used throughout; only one remains. Maybe one isn't too many. Gah4 (talk) 19:45, 16 September 2019 (UTC)
AM
editThe article says: CW Morse code signals are not amplitude modulated. It seems to me that they are amplitude modulated, with the amplitudes being either zero or full power. The exact sound depends on the shape of the on-off and off-on transitions. But yes, spark gaps generate a wide spectrum of frequencies, including audio frequencies, and CW transmitters mostly don't do that. Gah4 (talk) 05:09, 5 August 2019 (UTC)
- I wouldn't call the standard method of transmitting Morse AM; it is OOK. (Compare WWVB's ASK.) There's FCC terminology such as A1 (Morse) and A3 (voice). Morse can be sent as a tone modulating an AM or FM carrier, but such usage does not have the benefit of concentrating the transmitter power into a narrow band. Glrx (talk) 20:50, 23 August 2019 (UTC)
- on-off keying claims to be a form of Amplitude-shift_keying which claims to be a form of amplitude modulation. Using transitivity, then, OOK is a form of AM. The OOK article mentions minimizing bandwidth, which I believe means making sure that the transitions aren't too fast. If one just keys the carrier (besides the safety problems at high power), one will get sharp transients and large bandwidth. I think that makes it more like AM, with a band-limited modulation signal. Gah4 (talk) 07:20, 24 August 2019 (UTC)
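The structural point (keyed CW is the carrier multiplied by a 0/1 envelope, so it has AM-style sidebands) can be seen in a quick sketch; the 5 kHz carrier and 4 Hz keying rate are arbitrary choices:

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
fc = 5_000.0                                # carrier frequency (example value)

# On-off keying: the carrier times a 0/1 envelope, i.e. a form of amplitude modulation
key = (np.floor(8*t) % 2).astype(float)     # 4 on/off cycles per second, hard edges
ook = key * np.cos(2*np.pi*fc*t)

mag = np.abs(np.fft.rfft(ook)) / len(ook)
freqs = np.fft.rfftfreq(len(ook), d=1/fs)
print(freqs[mag > 0.04])   # [4988. 4996. 5000. 5004. 5012.] -- carrier plus keying sidebands
```

Sharper keying edges put more energy into the higher-order sidebands, which is the key-click / bandwidth point made above.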
Principle
editA recent edit changes difference to sum, in the generation of sum and difference signals. And that is correct, for the equation given. But why is the cos() identity used, and not the sin() identity? (We say sinusoid, not cosinusoid.) If the sin() identity is used, then it is the difference. And since it really doesn't matter, there is no need to keep sum/difference apart. Gah4 (talk) 12:15, 11 June 2024 (UTC)
simultaneous tuning?
editThe article says: the tuned radio frequency receiver (TRF), all of the receiver stages had to be simultaneously tuned. As far as I know, that isn't quite true. There were separate tuning knobs for each stage, maybe three or four. I suspect what happens, is that one finds the right tuning for each station, and then marks each dial. Ideally, they are simultaneous, but in actual fact, likely not. Gah4 (talk) 05:03, 10 October 2024 (UTC)