Wikipedia:Reference desk/Archives/Science/2011 November 1

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


November 1

HOW DOES THE SOLAR WIND MOVE IN INTERSTELLAR SPACE

The solar wind's direction and pattern of movement depend on several force vectors, as below: 1) the magnetic field acceleration in the Sun's corona; 2) the Sun's rotation around its axis; 3) the initial direction of cinematic movement of any particle; 4) the effect of the Sun's gravity field; 5) turbulence occurring in the planets' orbits. My calculations show that the particles in the solar wind never leave the solar system and slow down at outer distances. What do you think about this? --Akbarmohammadzade (talk) 08:24, 1 November 2011 (UTC)[reply]

You mean kinetic, not cinematic, I think. Wnt (talk) 15:48, 1 November 2011 (UTC)[reply]
The solar wind will be moving from a higher density region to a lower density region. Inertia and magnetic fields will keep it moving in circles. The high temperature of the corona is enough to overcome the sun's gravity. Graeme Bartlett (talk) 09:11, 1 November 2011 (UTC)[reply]

The velocity of the wind is supersonic, 250~800 km/s, with a density of 8 atoms per cubic centimeter. Of course the corona accelerates it beyond the escape velocity from the Sun's gravity field (V > 180 km/s). The main movement of any particle is a spiral; the general shape of the wind is the ballerina-skirt model, and it appears as a curvature which rotates counterclockwise. The main particle in its composition is the proton (95%), and it contains ionized isotopes radiating alpha and gamma rays. When it slows down it will rotate around the Sun. WHERE? MAYBE BEYOND JUPITER'S ORBIT: 5 AU. We are still calculating. Akbarmohammadzade (talk) 10:34, 1 November 2011 (UTC)[reply]
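For reference, a quick check against the standard escape-velocity formula (the figures below are textbook values for the Sun, not taken from the poster's calculation):

<math>v_\text{esc}(r) = \sqrt{\frac{2GM_\odot}{r}} \approx \begin{cases} 618\ \mathrm{km/s} & r = R_\odot \\ 42\ \mathrm{km/s} & r = 1\ \mathrm{AU} \end{cases}</math>

A wind moving at 250–800 km/s near Earth is therefore well above the local escape speed; what ultimately stops it is pressure balance with the interstellar medium at the heliopause (see the Heliosphere link in the next reply), not the Sun's gravity.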

You might want to look at our article Heliosphere, with particular attention to the section 'Heliopause.' {The poster formerly known as 87.81.230.195} 90.197.66.254 (talk) 13:47, 1 November 2011 (UTC)[reply]
kinetic energy ... cinematic movement --78.38.28.3 (talk) 16:49, 1 November 2011 (UTC)[reply]

Technological Singularity, if rewritten in sci-fi franchises.

Simple question this time: How would the Star Trek franchise be rewritten if the Technological Singularity was incorporated somewhere into the plot? --70.179.164.113 (talk) 13:25, 1 November 2011 (UTC)[reply]

I think that's the Borg part of the Star Trek canon. Beyond that, how would the account of a speculative future change if additional speculative futures were incorporated? Speculate! But there are no possible definitive answers. — Lomn 13:37, 1 November 2011 (UTC)[reply]
Q (Star Trek), surely? Or was there a Borg-Q connection, I forget.  Card Zero  (talk) 13:58, 1 November 2011 (UTC)[reply]
There was no connection between the Borg and the Q, no. And the Q was as unpredictable as it gets. --Ouro (blah blah) 15:20, 1 November 2011 (UTC)[reply]
(edit conflict) The OP may be interested in several fictional concepts, that is in concepts used in the writing of fiction, which may help answer the question:
  1. The Deus ex machina is a common plot device where some part of the plot is advanced by some impossible, magical, miraculous, or otherwise unpredictable event. This, in sci-fi, is very closely related to...
  2. Clarke's third law, one of Clarke's three laws; as devised by Arthur C. Clarke, it is essentially a statement of Deus ex machina as it applies to science fiction. The law states, basically, that any technology which is necessary to advance the plot of a sci-fi story is necessarily non-scientific, and thus is basically "magic" or a Deus ex machina device. It is, of course, technically possible to write a science fiction story which never uses anything non-scientific in it. It has just almost never happened that anyone has written a good science fiction story which does so.
  3. The MacGuffin, a plot device used to advance the plot of the story, but which is itself inconsequential to the reader. That is, it is something which matters to the characters of the story, but it does not necessarily matter to the reader what it is. The test of a MacGuffin is whether it could be swapped out for a different thing with the story ultimately remaining roughly the same. A "technological singularity" could be like this: the idea that a superintelligent technology affects the plot may be vital, while the nature of the superintelligent technology may be unknown, ambiguous, and ultimately not matter. Think of the monolith builders in 2001...
I hope those concepts help. --Jayron32 15:28, 1 November 2011 (UTC)[reply]
That's not what Clarke's Third Law says, although it is a reason to invoke Clarke's Third Law when writing speculative futures. Clarke's Third Law is much simpler than that, and it's more straightforward to quote it than 'explain' it: "Any sufficiently advanced technology is indistinguishable from magic". That I am writing this reply on a piece of glowing plastic smaller than my hand, communicating with people from around the world, demonstrates the non-fictional truth of it. Writing advanced science without adding a made-up explanation is present in a lot of good scifi, and that doesn't mean it is using something "non-scientific". I am genuinely confused by what you mean here. Why would any technology necessary to advance the plot be necessarily non-scientific? If I need my characters to communicate over distances, so they use phones to advance the plot, do those phones necessarily have to be non-scientific? Just, what? 86.163.1.168 (talk) 16:13, 1 November 2011 (UTC)[reply]
Not any technology; rather, some technology which is nonscientific is often needed in science fiction works: the standard is faster-than-light travel, or at the least faster-than-light communication. Most works of science fiction (at least, the good ones which revolve around character development and plot and things that make reading interesting) use FTL travel as a deus ex machina development. That is, characters need to interact with one another, or events need to occur in a causative manner that allows the plot to move along. Insofar as the setting is a multi-world setting, spread across an entire galaxy, some basic violations of known physics are required to move the plot along. You do this by inventing a warp drive or hyperspace or something like that. Asimov invents the "positronic brain" and the "laws of robotics" as Clarke's-third-law devices which don't have any basis in science (which is not to say one could not make such devices obey the known laws of physics, just that it isn't necessary to advance the plot of the stories, so there is no need). These sorts of things are MacGuffins in the sense that their existence is required to advance the plot of the story, but the specific way in which they work is inconsequential. One can invent a pseudoscientific explanation which sounds plausible to the general public (but which is basically bullshit), or one can just leave it unexplained; FTL travel is a needed plot element, or sentient robots are a needed plot element, or some similar bit. The actual manner in which the world allows for such things is not, in itself, important to advance the story, so the author will, at their own discretion, either explain it or not, for their own ends, to make the story "believable", but it doesn't make it scientifically sound. --Jayron32 17:47, 1 November 2011 (UTC)[reply]
This might be easier to discuss on the Entertainment desk, where ideas involving nanites that do not obey the laws of thermodynamics get a better reception. Dualus (talk) 15:46, 1 November 2011 (UTC)[reply]

Physical Geologic Driver

I would like a critique before submitting this article into your encyclopedia. Physical Geologic Driver is most conveniently accessible at user:morbas Physical Geologic Driver. The ICS and circa-1982 Geologic Time Scale are analyzed for covariant (equal) variance intervals, and 417Ma is solid within the Phanerozoic Eon. The 417Ma math is allowed under wiki rules, as the numbers represent the same thing. All wiki articles are a grouping of papers; this is the same, because its sentences are referenced to published papers. To say this is original research would include extinction event and others.
Morbas (talk) 13:48, 1 November 2011 (UTC)[reply]

The article in question is User:Morbas#Physical Geologic Driver and a related discussion is at Talk:Extinction event#Physical Geologic Driver. -- 110.49.227.102 (talk) 15:17, 1 November 2011 (UTC)[reply]
Firstly, the article is so badly written that it is difficult to see what it is about at all. But yes, in as much as I understand what you are saying, it looks like original research, and is therefore unsuitable for Wikipedia. Find published reliable sources that discuss the topic itself (intelligibly) and maybe an article is a possibility. AndyTheGrump (talk) 14:19, 1 November 2011 (UTC)[reply]


My initial impressions as a decided non-expert in this area (though a former encyclopaedia and science textbook editor) are:
The piece is written too technically for a general encyclopaedia readership; in particular a simpler lede (summarising first paragraph, akin to an abstract) is badly needed.
The style of writing is over-concise, obscuring your meaning; in part this may be because you are (I suspect) not a native English speaker, and you are not deploying entirely correct and clear grammar.
The article lacks cohesion, too many of the successive sentences seem on first reading to be unconnected, and their relevance to the overall subject obscure.
Too many of the statements appear to be unsupported assertions; they may be justifiable, but this needs to be demonstrated by more references and by explicit explanations.
The relevance of some of the wikilinks is not obvious; for example, "covariance" is linked to an article that at first glance does not even include the word, and certainly does not explain what it means in this context.
While you may be making some references to published papers, I would consider your synthesis of their content and your apparently novel analysis of data to produce an apparently new hypothesis to indeed constitute Original Research; this material should therefore not be a candidate for inclusion in Wikipedia until it has been published in peer-reviewed scientific journals.
The above is just my initial impression. Hopefully others with more knowledge of the subject area will weigh in more constructively. {The poster formerly known as 87.81.230.195} 90.197.66.254 (talk) 14:20, 1 November 2011 (UTC)[reply]
Reply:
Obvious Relevance question... covariance is mentioned as a link to Estimation of covariance matrices; it pertains to variance statistics, and here it is a special equal-covariance case.
Lacks Cohesion: Sentences 2 & 3 are mathematical statements; Period/Stage transitions are paired and subtracted. Note the 417Ma values are equal-case covariances. Sentences 4 & 5 talk about covariant pairs that include Pre-Cambrian Periods.
Constitute Original Research: Further metaphors are needed to help me understand your assertion. The Extinction Event article would be a good common discussion backdrop. In that article I see multiple references, with the totality of the article being a unique assembly of references. This, to me, makes inferences about extinction periodicity. In my view of Physical Geologic Driver, because every sentence is referenced, Extinction Event is an equivalent presentation. The reference subjects are shortened to single sentences. The last few sentences are taken from the summaries of multiple references. All statements are in context with the referenced materials. Personally (IMHO), the Extinction Event article's organization is poor. By contrast, the Physical Geologic Driver article is tightly organized. Now, you may be sensing a (new) Paradigm; I suspect this from the non-native (alien) language comment. Now don't take my criticism of extinction event personally, as I have edited more than one section.
Morbas (talk) 23:14, 1 November 2011 (UTC)[reply]
Exactly: "you may be sensing a (new) Paradigm". We are. New paradigms are original research, by definition. As for the extinction event article, this isn't an appropriate place to raise the matter if you think there is WP:OR involved. Either do that at the article talk page, or at WP:NOR/N. AndyTheGrump (talk) 23:19, 1 November 2011 (UTC)[reply]
And what paradigm did Physical Geologic Driver introduce? Only interdisciplinary organization was used. Some structure and summary remarks are from interdisciplinary books, Evolution on Planet Earth and GAIA and others. This is not original research.
Morbas (talk) 01:23, 2 November 2011 (UTC)[reply]
Unless you can find an article in a published reliable source that discusses the Physical Geologic Driver it is original research to suggest such a topic exists. You have apparently been repeatedly told this, and I'm not interested in further discussion. Find somewhere else to promote your theories. AndyTheGrump (talk) 03:30, 2 November 2011 (UTC)[reply]

I get one principle here: 1) If math and data can be conceived as representing a new paradigm, it is not allowed.
Wiki Contradiction: Definition, encyclopedic dictionary: "The question of how to structure the entries, and how much information to include, are among the core issues in organizing reference books... An encyclopedia also often includes many maps and illustrations, as well as bibliography and statistics."
Morbas (talk) 14:47, 2 November 2011 (UTC)[reply]

Health statistics

I'm having trouble finding a few health statistics. I ask for South Carolina, but I'd be happy just finding the statistics for the United States.

  1. What percent of South Carolina has chronic heart failure?
  2. What percent of South Carolina has coronary artery disease?
  3. What percent of South Carolina has hyperlipidemia or hypercholesterolemia?

I'm looking through CDC files, but I can't find clear statistics, just very general measurements like "many". -- kainaw 14:06, 1 November 2011 (UTC)[reply]

Try PMID 21982674 and/or PMID 16704179. Dualus (talk) 15:44, 1 November 2011 (UTC)[reply]
Heehee! The authors of the first paper are the ones that I'm doing this research for. Maybe the authors of the second one have the answers - and since they are just down the hall, I can go ask in person. Thanks. -- kainaw 16:15, 1 November 2011 (UTC)[reply]
Yep. They had it. 2%, 5%, and 43%. -- kainaw 17:11, 1 November 2011 (UTC)[reply]
\o/ Dualus (talk) 01:42, 2 November 2011 (UTC)[reply]

Decay

This is really a host of questions, I suspect. I'm curious about decay regarding (1) molecules, (2) atoms, and (3) particles. When these systems are unstable, what is it that makes them unstable, and what causes them to decay? I know it's a statistical/probability thing, but what processes are going on to make that happen? Is there a good way to visualize it? --Goodbye Galaxy (talk) 14:52, 1 November 2011 (UTC)[reply]

Instability of a system is the same as saying the system has a high amount of potential energy. That is, a stable system is low in potential energy, while a system with more potential energy is less stable. All systems have a spontaneous "drive" to move from higher potential energy to lower potential energy; for example a chemical reaction will (all other things being equal) tend to be spontaneous if it is exothermic, that is if it releases potential energy as heat. The same principles hold true for decay of subatomic particles, for example nuclear decay. Protons are the baryons with the lowest potential energy, so under the Standard Model (the currently accepted "best" model we have), proton decay should be impossible because there is nothing with less potential energy for the proton to decay into (assuming some of the assumptions of the Standard Model are correct; there are other models which allow proton decay, but as yet such an event has never been observed). Another baryon, the neutron, does decay, via beta decay, into a proton, an electron, and a neutrino; this is allowed and observable because the neutron has more potential energy than does a proton. --Jayron32 15:10, 1 November 2011 (UTC)[reply]
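To put a number on that last statement (using standard particle-data rest masses, not anything from this thread), free-neutron beta decay releases about 0.78 MeV:

<math>n \to p + e^- + \bar\nu_e, \qquad Q = (m_n - m_p - m_e)c^2 \approx (939.565 - 938.272 - 0.511)\ \mathrm{MeV} \approx 0.782\ \mathrm{MeV},</math>

which is exactly the sense in which the neutron sits "higher" in potential energy than the proton.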
Jayron answered your first question "what is it that makes them unstable?" correctly. Some energy that needs to be shed makes them unstable. The answer to your second question "what causes them to decay?" is that the decays are spontaneous (meaning: there isn't a cause; they happen spontaneously). Dauto (talk) 15:33, 1 November 2011 (UTC)[reply]
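The statistical signature of that spontaneity is the memoryless exponential decay law: an undecayed nucleus has the same chance of decaying in the next instant no matter how long it has already existed, which is just what one would expect if nothing inside it is "winding down" toward a trigger:

<math>P(\text{decay in }[t,\,t+dt]\mid\text{survival to }t)=\lambda\,dt, \qquad N(t)=N_0 e^{-\lambda t}, \qquad t_{1/2}=\frac{\ln 2}{\lambda}.</math>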
Another way to look at it: we define "energy" to be "some mysterious thing that we can quantitatively measure, whose property affects the stability of a dynamic system." If a system has two possible states, and it prefers to be in State A instead of State B, we can say that State A has less energy (thanks for the correction below, Jayron). We can measure the rate of change from State A to B, and use that to quantitatively measure the energy. This is the foundation of the entropy formulation of statistical mechanics, where we rigorously define the system states in order to calculate the available degrees of freedom (the entropy). In conventional classical systems, we think of energy in terms of, say, kinetic-energy and gravitational potential energy; but these concepts can be reconstructed in the context of a phase-space system description, with velocity as an available state; and the tendency to exchange velocity and position in a gravitational potential well is one particular set of state transforms; these obey the formulas that you are familiar with (kinetic energy and gravitational potential energy). We can say that a ball at the top of a hill prefers to decay into a ball at the bottom of the hill (eventually dissipating its energy as heat, through the processes of dynamic friction). Molecules, atomic nuclei, and elementary particles are no different - they decay into different system states, but they follow more complicated transformational rules that define their energy. I will again refer the interested reader to Stowe's text on statistical mechanics, (as I've said often before, it really is the only physics book you need - all other physics is just a special-case of thermodynamics). There are entire chapters devoted to the conceptual and quantitative study of system stability. Nimur (talk) 17:12, 1 November 2011 (UTC)[reply]
Just to nitpick, but if it prefers to be in State A, then State A should have less energy; energy and stability are inversely related, assuming you are referring to potential energy... --Jayron32 17:37, 1 November 2011 (UTC)[reply]
My error. I got lost in conceptualization-language. In almost every context, lower energy is defined as the "preferred" state. Comment: this is purely a sign convention, albeit a very consistent one; it has no profound physical interpretation. Nimur (talk) 18:26, 1 November 2011 (UTC)[reply]
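In the statistical-mechanics language Nimur is using, the standard way to quantify that "preference" at temperature T is the Boltzmann factor (a textbook relation, added here for concreteness):

<math>\frac{P(A)}{P(B)} = e^{-(E_A - E_B)/k_B T},</math>

so if <math>E_A < E_B</math> the exponent is positive and State A is occupied more often, matching the sign convention above.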
As for the lack of a trigger of spontaneous decay, I suspect that there really is a trigger, we just don't know what it is. Perhaps something like a quark popping in from a parallel dimension for a nanosecond. This also seems closely related to the question "is anything truly random" ? While quantum mechanics holds that there are truly random events, another possibility is that things which appear truly random are merely following patterns too complex for us to comprehend. StuRat (talk) 18:30, 1 November 2011 (UTC)[reply]
Not necessarily. To paraphrase Donald Rumsfeld, this is a known unknown, or more precisely, it is known that there is no causative agent. Well, in the sense that anything is known or knowable at all. The assumption of quantum theory is not that there is some technological limitation to our knowing the cause; it is that there truly is no cause. You are, of course, free to "believe" anything you wish, but there's nothing in the theory which requires a "trigger", as you put it. It is truly a spontaneous event without antecedent. As soon as you try to work back farther, such as the "quark popping in from a parallel dimension", you don't actually solve the problem of a trigger; instead you merely move the trigger back one event. What trigger caused the quark to "pop in"? At some point, you reach the problem known philosophically as the unmoved mover; quantum mechanics reaches its own conclusions regarding this in that it is pretty conclusive that individual decay events are "unmoved", i.e. truly spontaneous and lacking in any antecedent. Which is not to say that, were such a trigger found experimentally, it would be rejected out-of-hand because the current theory says it shouldn't exist. Anything is possible, and we should always be prepared for such eventualities, but also be able to work within the current models insofar as they work. And since they do, there's no need to invent fictions to make them work, as yet... --Jayron32 18:41, 1 November 2011 (UTC)[reply]
(EC) You just replaced a random and spontaneous decay with a random and spontaneous quark popping in from another dimension, which is still random and spontaneous, with the added disadvantage of requiring an unobserved event for no good reason. The intrinsic randomness of quantum mechanics has made many good physicists quite uncomfortable (including Einstein, who famously said that God doesn't play dice). But all the alternatives are either inconsistent with observations (hidden variable theories) or have some even more distasteful features (non-locality and the many-worlds interpretation). For the interested reader, Schroedinger's cat is a good place to start. Dauto (talk) 18:58, 1 November 2011 (UTC)[reply]
I don't find the many-worlds interpretation distasteful, at least not as distasteful as actions without causes. To me that's just as bad as saying "God must have done it" whenever we encounter something we don't understand. StuRat (talk) 19:48, 1 November 2011 (UTC)[reply]
Yes, but you have not answered the two most fundamental questions: 1) what is the trigger of the "quark popping in from another universe", and 2) what is the evidence that such an event has occurred? If you cannot answer 1), then your "solution" does not actually solve your "problem", because you haven't actually found the cause, you've merely moved the cause back one step; and 2) is of course vital, because anyone can propose anything, and evidence is a nice way to separate the good proposals from the bad. --Jayron32 19:54, 1 November 2011 (UTC)[reply]
Well, there will always be unknown "deeper causes", the question is whether we should assume there is no cause. For comparison, should we assume that nothing existed before the Big Bang just because we don't know what might have existed then ? How is the assumption of "nothingness" more valid than the assumption of "something unknown" ? And, of course, the history of science is chock full of examples of where previously unknown and unseen causes are later identified, such as with dark energy and dark matter, which, of course, must then have their own causes. StuRat (talk) 21:13, 1 November 2011 (UTC)[reply]
It is good to see that you have changed your mind and you are finally agreeing with me! The whole point is that you don't put more assumptions into a situation than you need; assuming that some cause exists where none is shown to exist isn't consistent with good scientific practice. As I already stated clearly, science is perfectly flexible enough to accept new data once it is available. The whole point, which you seem to have finally accepted, is to not invent explanations where none exist, to accept given models as adequate until such time as evidence shows them to be inadequate, and to be willing to alter or create new models when and if such evidence does exist. Well done StuRat, it takes a strong person to abandon their original incorrect assumptions! --Jayron32 22:35, 1 November 2011 (UTC)[reply]
And are you willing to accept that the assumption that there is no cause is just as unwarranted ? StuRat (talk) 22:47, 1 November 2011 (UTC)[reply]
Why do you feel the need to have a cause without evidence of one? --Jayron32 23:04, 1 November 2011 (UTC)[reply]
Because that's how every other field of science works. Why do you feel the need to not have a cause, without evidence that there is none ? StuRat (talk) 23:10, 1 November 2011 (UTC)[reply]
All the evidence points to the fact that the decay is completely spontaneous. Not only that, the spontaneous decay is consistent with (and predicted by) quantum mechanics, which happens to be one of the most well-tested theories of all time. Dauto (talk) 02:40, 2 November 2011 (UTC)[reply]
How can there be evidence that there is no cause ? After all "absence of evidence is not evidence of absence". StuRat (talk) 03:59, 2 November 2011 (UTC)[reply]
I was going to answer that question but then I saw Mr 98's excellent explanation at the bottom of the thread and realized how hard it would be for me to give a better explanation than he did. The short answer is that the lack of causes for decays is a necessary consequence of quantum mechanics; therefore any evidence that QM is correct (and there is far too much to count) is also evidence for the absence of cause. Of course there are non-standard interpretations of quantum mechanics that attempt to get rid of its randomness (such as the many-worlds interpretation), but even those interpretations don't get rid of causelessness, and they have, as I pointed out, some other distasteful features. It's important to understand that alternative interpretations of quantum mechanics differ from quantum mechanics only in interpretation but are identical to it as far as the dynamics of the theory are concerned, and that's why they will not introduce causation into the theory. The only way to introduce causation into the theory is to embrace some form of hidden variable theory (your quark popping out of nowhere is included here), but Bell has shown that local hidden variable theories must satisfy his inequality, and experiments testing the inequality have observed violations, confirming QM. (I ended up attempting a partial explanation after all). Dauto (talk) 16:52, 2 November 2011 (UTC)[reply]
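For readers who want to see the numbers behind that argument, here is a minimal sketch in plain Python (no quantum libraries; the particular hidden-variable model below is just one illustrative example, not the general case Bell's theorem covers). Any local hidden-variable model is bounded by |S| ≤ 2 in the CHSH test, while quantum mechanics predicts up to 2√2 ≈ 2.83:

<syntaxhighlight lang="python">
import math, random

# CHSH detector settings (radians): Alice uses a1 or a2, Bob uses b1 or b2.
a1, a2 = 0.0, math.pi / 4
b1, b2 = math.pi / 8, 3 * math.pi / 8

def E_qm(a, b):
    # Quantum prediction for polarization-entangled photon pairs.
    return math.cos(2 * (a - b))

def E_lhv(a, b, trials=200_000):
    # One concrete local-hidden-variable model: each pair carries a shared
    # polarization angle lam, and each detector answers +/-1 deterministically.
    total = 0
    for _ in range(trials):
        lam = random.uniform(0.0, math.pi)
        A = 1 if math.cos(2 * (a - lam)) >= 0 else -1
        B = 1 if math.cos(2 * (b - lam)) >= 0 else -1
        total += A * B
    return total / trials

def S(E):
    # CHSH combination of the four correlations.
    return E(a1, b1) + E(a2, b1) + E(a2, b2) - E(a1, b2)

print("quantum S =", S(E_qm))   # ~2.828: violates the classical bound of 2
print("LHV S     =", S(E_lhv))  # ~2.0: saturates but never exceeds the bound
</syntaxhighlight>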
@StuRat: Name one other field of science which, as a standard practice, invents "causes" out of whole cloth merely because people feel uncomfortable about not having one? Because that's all I can see of what you have done here. I know of no field of science which accepts untested hypotheses as reasonable explanations of events, and all you've done is propose an untested hypothesis because the actual, tested theory makes you uncomfortable... --Jayron32 02:46, 2 November 2011 (UTC)[reply]
You seem to have taken my "quark popping in from a parallel universe" comment as if it were a genuine theory, when it was just meant as one of an endless number of possible causes. For an example from another field, take seismology. If a quake occurs where they have no record of a fault, they don't just say "well, there must not be a cause, then", they start looking for the fault or other cause. And even if they never could determine the cause, they still wouldn't assume that there isn't one. Or, for a real example, take the Tunguska event. Nobody quite knows what the cause was, but this doesn't lead scientists to propose that there wasn't a cause. StuRat (talk) 03:57, 2 November 2011 (UTC)[reply]
That's a false analogy. Earthquakes as a concept are known to be caused by a certain process. That an individual earthquake doesn't have a specific known cause in one instance doesn't mean that no earthquakes do. In the case of particle decay, not only does the overarching theory, quantum theory, explain that such events are expected to be truly spontaneous, but experimental evidence has confirmed (insofar as confirmed means, not contradicted, like ever, so far) that understanding to be accurate. Again, it's not like some individual atom decays and we don't know what caused that atom to decay, but we know what caused the rest to decay. That's what your earthquake analogy makes it sound like here. Also, note what Mr. 98 says below; it is quite salient. You are not the first person to be made uncomfortable by this. Einstein himself didn't like the probabilistic aspects of quantum theory, and he famously said "God does not play dice with the universe." Quantum theory creates all sorts of apparent paradoxes when one attempts to understand it in the manner in which one understands macroscopic, human-scale events to occur. You simply cannot draw an analogy to anything you, as a person, may have experience with and have it be valid. That's the whole point of the Schroedinger's Cat exercise: when you take quantum-level phenomena (like nuclear decay) and attempt to understand it as one would understand an earthquake or the Tunguska event, you are basically "doing it wrong". This makes a great many people uncomfortable. However, being uncomfortable is not the sole sufficient criterion for declaring a well-organized and well-tested scientific principle to be unsound. --Jayron32 04:11, 2 November 2011 (UTC)[reply]
Actually, quakes have many causes (transit of tectonic plates, movement of magma, gravitational tides, water mass accumulating in a basin, etc.), and we probably haven't yet identified them all. And what you said near the end confirms my statement that quantum mechanics stands alone in science as the only field that doesn't believe that all events have causes. StuRat (talk) 04:22, 2 November 2011 (UTC)[reply]
You're probably right about that. QM is different from other scientific theories in that it proposes that some events don't have a cause. But also note that QM is one of the best-tested scientific theories. It is shocking for most people that the world should behave like that, but experiments - not people's uneasiness - determine whether a theory lives or dies, and experiments have confirmed QM over and over again. Dauto (talk) 18:22, 2 November 2011 (UTC)[reply]
Right, and I'm sure that its calculations of the probabilities of events are accurate, but how does that prove that there are no underlying causes of those events? StuRat (talk) 01:00, 3 November 2011 (UTC)[reply]
See my comment at the bottom of the thread.
Just to add on to what Dauto wrote — the problem of abandoning causality in the area of decays in particular has been one that has troubled physicists for decades, and fairly high-grade ones at that! If it could be waved away with a simple "well, maybe it's causal and we don't understand it," they'd have done so by now. Any answer that is better than "it's actually acausal" has got to be cleverer than anything Einstein could come up with. Which isn't to say it's impossible; it's just to say, don't expect that you can come up with a clever way around it unless you're pretty well-versed in the subject matter, and have some reason to suspect you might be cleverer than Einstein. --Mr.98 (talk) 19:09, 1 November 2011 (UTC)[reply]
Since I've got my own little space down here, I just want to elaborate a little bit. Do you find thinking about the relation of QM to other sciences difficult? Good! Do you find the idea of acausality sickening and unusual? Good! Do you find it hard to swallow seemingly ridiculous assertions like, "causality doesn't work here, and only here"? Good! You're beginning to understand why quantum mechanics was such a big thing when it came out, why the Schroedinger's cat thought experiment was invented (not as an illustration of how cool QM was, but as an argument meant to show how absurd and objectionable QM was), why Einstein said God doesn't play dice, and why both the Nazis and the Soviets looked into persecuting it (until they realized it was necessary for atomic bombs). This is the root of Bohr's famous quote that "those who are not shocked when they first come across quantum theory cannot possibly have understood it." It's weird, it's shocking, it's illogical (by macroscopic standards). And yet... it works. Tremendously well. And yes, those assumptions about causality, those assumptions about knowability, those assumptions about superposed states, there are ways to test for that sort of thing. (Bell's theorem gets quite deep on these points, and the experimental results allow one to say that Einstein was definitely wrong, and Bohr was some form of right.) Does this mean that QM obeys different rules than the rest of science? Yes, it does. That's entirely the point. QM posits that at the quantum scale, the world doesn't work the way it does at larger scales. There must be correspondence, in the end (QM behavior must result in macroscopic behavior that is consistent with our experiments), but that's actually a very low bar for a scientific theory (Einstein would have raised the bar higher: he wanted the quantum world to also make sense, not just work out experimentally). QM says that logic about causality doesn't work the same at quantum scales. It says that knowledge itself doesn't work the same at quantum scales. This should be deeply disturbing when you are learning about it. We wash away that disturbance by clothing it in a lot of rote laws and rules, and if you do anything with that level of physics on a regular basis, you quickly learn to see the QM logic as being completely sensible for that scale. That's because the experiments bear it out and it works pretty wonderfully, not because it makes a lot of intuitive sense. (As none other than Feynman put it, "I think I can safely say that nobody understands quantum mechanics." He doesn't mean that QM doesn't work or isn't a good theory; he means, at a deep level, it gets pretty hard to wrap your macroscopic brain around.)
If I can add a little historical insight, here, it's usually engineers in particular who find modern physics the most problematic. It's usually engineers who propose ways of getting rid of QM or relativity and going back to some sort of modified classical mechanics. Engineers are the ones who get up in arms about Einstein and Bohr. Why? A theory: engineers deal primarily with the world on classical (non-quantum, non-relativistic) terms, and their worldview is one of total functional understanding, not backing up and saying, "well, in this area, you just can't know anything, at all." They have enough math and enough physics to be dangerous, not enough to understand the physics deeply. (Which is no slander — I don't understand it deeply myself.) They are precisely the right demographic to be horrified by modern physics, because they aren't indoctrinated to its logics, and they aren't willing (like most non-scientists) to just pass the buck to the hardcore physicists. Again, I don't mean this as any slander, but it's an interesting trend, historically speaking. Does it make the engineers correct? Alas, no, not at all. But I understand the impulse! --Mr.98 (talk) 12:22, 2 November 2011 (UTC)[reply]
We don't know what selects between the various experimental outcomes allowed by quantum theory. We certainly don't know that it's "truly random", whatever that even means. Chaotic deterministic systems produce high-quality pseudorandom numbers from simple inputs. We're totally incapable of distinguishing the output of such a system from "true" randomness unless we know the seed/key. Chaotic systems are ubiquitous in the real world. Even a fairly simple constraint on the initial and final state of the universe would probably lead to random-looking behavior in between.
I'm not trying to argue that quantum randomness isn't "truly random". I'm just saying that it's naive to pretend that we have any idea what's going on here. -- BenRG (talk) 22:52, 2 November 2011 (UTC)[reply]
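As a toy illustration of the point about chaos (a minimal sketch only; nothing here bears on whether quantum randomness is of this kind, and this is certainly not a cryptographic-quality generator):

<syntaxhighlight lang="python">
# Deterministic chaos as a pseudorandom source: the logistic map x -> 4x(1-x).
def logistic_bits(seed, n):
    x, bits = seed, []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)            # fully deterministic update
        bits.append(1 if x > 0.5 else 0)   # crude bit extraction
    return bits

bits = logistic_bits(seed=0.123456789, n=10_000)
print("fraction of ones:", sum(bits) / len(bits))   # close to 0.5

# Two imperceptibly different seeds soon produce unrelated bit streams:
other = logistic_bits(seed=0.123456790, n=10_000)
agreement = sum(x == y for x, y in zip(bits, other)) / len(bits)
print("bitwise agreement:", agreement)              # ~0.5, i.e. uncorrelated
</syntaxhighlight>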
The problem is that chaotic deterministic systems fall within the hidden-variable category of theories that have been experimentally ruled out. You must read (and understand) Bell's theorem if you really want to understand why QM wave function collapse is considered truly random. Dauto (talk) 04:15, 3 November 2011 (UTC)[reply]
Bell's theorem rules out a large class of local hidden variable theories, some of which have intrinsic randomness and some of which don't. It leaves a lot of other things, some with intrinsic randomness and some without. It isn't a result in favor of randomness, as you and Mr.98 both seem to believe. -- BenRG (talk) 07:00, 3 November 2011 (UTC)[reply]
What I'm saying is that experimental evidence for Bell's inequality violation tells us that either the wave-function collapse is a random event or it is a non-local event and many people find the latter even more distasteful than the former. Dauto (talk) 13:49, 3 November 2011 (UTC)[reply]
Bell's theorem has as a premise that there are sources of randomness available at both detectors so that they can be reconfigured on the fly independently of the generation of the Bell pair. The theorem is used in the contrapositive: the conclusion is wrong, so some premise must be wrong. This means, roughly, that the world must be non-local or non-random. Technically, any deterministic theory evades Bell's theorem. I happen to think that deterministic chaotic systems do produce good random numbers, but nobody can prove that. In any case, your "non-local or random" above is wrong. -- BenRG (talk) 18:24, 3 November 2011 (UTC)[reply]
That's not how I interpret Bell's theorem. At any rate, I concede that I was using the word random when I really should have been using the word non-deterministic, or acausal. Dauto (talk) 03:16, 4 November 2011 (UTC)[reply]
What BenRG points to is the superdeterminism loophole. It's not taken very seriously, but recently 't Hooft has argued in favor of it; see here. Count Iblis (talk) 17:45, 4 November 2011 (UTC)[reply]
I suppose that's a possible alternative. I'm going to file it in the folder labeled "Even more distasteful alternatives". I guess the alternatives are
  1. Get rid of determinism and allow for acausal events such as radioactive decays.
  2. Superdeterminism, where everything is written in the stars and there is no room for free will.
  3. Allow for superluminal communication (non-locality), in which case the decay may be caused by some unsuspected event somewhere on the other side of the Universe.
Hard to tell which one is the least distasteful possibility, but I like the first (Copenhagen interpretation). Dauto (talk) 18:19, 4 November 2011 (UTC)[reply]

Thanks for the answers and discussion, folks. Both were very insightful and fascinating! Goodbye Galaxy (talk) 14:03, 2 November 2011 (UTC)[reply]

Minimum recovery time for a gunshot wound?

So say we have an individual who is shot in the stomach/abdomen area with a small-calibre pistol. He receives treatment for shock and blood loss five to ten minutes afterwards, and is properly transported to a medical facility shortly afterwards. Assuming no injury to the spine, what would you say is the minimum amount of time that realistically would need to pass before he's capable of getting up and walking? - And presuming he insists on doing so, what kind of precautions would you expect a medical professional to take so that he doesn't cause further injury to himself? --Brasswatchman (talk) 17:16, 1 November 2011 (UTC)[reply]

The reference desk will not answer (and will usually remove) questions that require medical diagnosis or request medical opinions, or seek guidance on legal matters. Such questions should be directed to an appropriate professional, or brought to an internet site dedicated to medical or legal questions.

Okay, disclaimer, then - this is for a fiction piece, and I'm just trying to gauge what medical professionals would expect. Does that fit under the Reference Desk's aegis? --Brasswatchman (talk) 17:19, 1 November 2011 (UTC)[reply]
In fiction, people get shot in the stomach all the time. If they are not a primary character, it means instant death. If they are a primary character, they can keep walking around instantly, but develop a slight limp. It is very important to note that the blood is always hidden behind a jacket so nobody actually knows the person has been shot until much later. -- kainaw 17:25, 1 November 2011 (UTC)[reply]
Sure, but we all know that's completely unrealistic. I'm trying to avoid that - at least in this story. --Brasswatchman (talk) 17:36, 1 November 2011 (UTC)[reply]
I would say a week at the absolute minimum. Even if the bullet does not seriously damage anything essential, this person will have peritonitis, infection of the gut cavity, which requires treatment with massive antibiotics. The main precaution would be to move very carefully and avoid anything that might cause a wound to reopen. Looie496 (talk) 17:30, 1 November 2011 (UTC)[reply]
Okay. So we're probably talking a wheelchair - plus maybe an IV drip with antibiotics? With a blood transfusion on standby, just in case? --Brasswatchman (talk) 17:34, 1 November 2011 (UTC)[reply]
Due to legal issues, it is standard practice in U.S. hospitals to have ALL in-patients in assisted mobility (a rolling bed or wheelchair) until completely out of the hospital. That is why you often see nurses wheel perfectly fine patients out of the hospital when they leave. -- kainaw 17:41, 1 November 2011 (UTC)[reply]
I've heard this story before but I have no reason to believe it's uniformly true. I've seen many instances of in-patients leaving hospitals on their own two feet. I suspect this is a movie abstraction, or perhaps something some hospitals do. Certainly not all. Shadowjams (talk) 00:12, 3 November 2011 (UTC)[reply]
Having been in hospitals several times due to sports-related injuries (from karate, basketball, etc.), I can tell you from personal experience that this depends on the seriousness of the injury more than anything else. There was one time they wheeled me out in a wheelchair (after being treated for a sprained knee), but most other times they let me walk out. Within the hospital, though, it's a different story -- once you sign in and they get your vitals, they put you on a gurney and wheel you around at least until your examination and initial treatment (if required) is completed. But then again, most of my injuries were relatively slight, so an extended hospital stay was never necessary in my case. FWIW 67.169.177.176 (talk) 04:51, 3 November 2011 (UTC)[reply]
Right. But I'm asking about in general, not just within the US. --Brasswatchman (talk) 00:22, 2 November 2011 (UTC)[reply]
He would be encouraged to 'mobilise' as soon as possible - even in the US they have been aware since the 1960s that patients recover much faster if they move about a bit. Given that a woman who has delivered by Caesarian section, and who therefore has a 10" wound on her lower abdomen, is expected to get out of bed and move about within 24hrs (and I have done this 3 times, so believe me I know), I would expect your gunshot victim to be gently mobilising, and very possibly have been sent home, depending on where in the world he is. The stitches for a C-section come out after 7 days - unless the surgeon had to really dig around for the bullet, your victim will have a much smaller wound which should be well on the way to healing. He will probably have been given antibiotics, since any perforation of the stomach or gut is likely to lead to peritonitis, but by the end of the week this should have been discontinued, or be tablets only. But gently is the word here. He won't be able to leap tall buildings at a single bound, and will probably have been advised not to drive a car for six weeks. --Elen of the Roads (talk) 01:07, 2 November 2011 (UTC)[reply]
You do realize the problem with this comparison, don't you? The primary trauma with the C-section is from the (single) surgery itself, which is (in comparison) a very controlled trauma... With a gunshot wound, the primary trauma comes from the actual penetration of tissue by the bullet, which is neither controllable nor really predictable, and neither is the resulting damage. The additional trauma from operations (and it is likely that he's going to have MULTIPLE surgeries) only adds to that. Phebus333 (talk) 23:47, 3 November 2011 (UTC)[reply]
Depends on the individual. John Wayne and the other good white-hat guys et al. were always getting shot in the shoulder (why not elsewhere?). Apart from a brief grimace at the time of receiving the slug, it didn't seem to bother them at all. They even had to wear an arm-sling to remind people to say "Oh gee, you've been shot!" Also, did you notice that the wagon-train camp fires always had a barrel next to them, so as to hide the propane gas cylinders? --Aspro (talk) 01:02, 4 November 2011 (UTC)[reply]
"John Wayne and other good white-hat guys et. al., were always getting shot in the shoulder (why not else where?)" -- Maybe because Joan of Arc was shot in that same spot?  :-/ 67.169.177.176 (talk) 06:09, 4 November 2011 (UTC)[reply]

Point of greatest gravity

Hello

I am not even an amateur in these fields, but I just had a slight thought: for, say, a football near a celestial object like Earth, gravity does not pull only toward the center of the Earth. Instead, its pull must be distributed across all the matter that composes the Earth, dependent on the distribution of mass, right? This pull is naturally greater toward the center (or the DIRECTION of the center), but how far does that center-pull last? Read on, I'll attempt to elaborate.

In the magical hollow room at the center of the earth, such a football would -- to the best of my understanding -- 'float'. Is its gravitational weight thus 0, because it has no movement in any direction? If so, does there not exist a point (outside or within the earth) where the football experiences a maximum of newtons in any particular direction? Surely someone's had this idea and written loads of papers on it, and consequently there's a Wikipedia article on it??

Thank you! 88.91.87.141 (talk) 18:57, 1 November 2011 (UTC)[reply]

 
[Figure: Earth's gravity according to the Preliminary Reference Earth Model (PREM). Maximum acceleration of 10.66 m/s² (1.09 g) at 0.5463 Earth radii, a significant deviation from the linear decrease expected for a sphere of uniform density.]

— Preceding unsigned comment added by 110.49.227.102 (talk) 01:08, 2 November 2011 (UTC)[reply]

Yes! The relevant article is Shell theorem. Dauto (talk) 19:14, 1 November 2011 (UTC)[reply]
And you might find Gravity of Earth interesting as well. Richard Avery (talk) 19:17, 1 November 2011 (UTC)[reply]
And, specifically, the picture on the right under Gravity_of_Earth#Depth states "Earth's Gravity according to the Preliminary Reference Earth Model (PREM). The acceleration has its maximum at 0.5463 Earth radii and a value of 10.66 m/s²." That's about 1.09 g. StuRat (talk) 21:06, 1 November 2011 (UTC)[reply]
You can use Gauss's law for gravity to compute the net gravitational force, given any arbitrary distribution of mass. Nimur (talk) 21:15, 1 November 2011 (UTC)[reply]
To the OP: the graph confirms that, on the simplifying assumption that the Earth's density is uniform, the point of maximum gravity is at the surface. This follows from the shell theorem mentioned above. --Heron (talk) 18:38, 2 November 2011 (UTC)[reply]
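To make the uniform-versus-layered contrast concrete, here is a rough numerical sketch. The two densities are round illustrative numbers for a crude core-plus-mantle Earth, not the actual PREM profile, so the agreement of the output with the figures quoted above is only qualitative:

<syntaxhighlight lang="python">
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
R = 6.371e6              # Earth radius, m
R_CORE = 3.48e6          # core radius, m (approximate)
RHO_CORE, RHO_MANTLE = 11_000.0, 4_500.0   # kg/m^3, rough stand-ins

def mass_within(r):
    # Mass enclosed inside radius r for the two-layer model.
    if r <= R_CORE:
        return 4 / 3 * math.pi * r**3 * RHO_CORE
    m_core = 4 / 3 * math.pi * R_CORE**3 * RHO_CORE
    return m_core + 4 / 3 * math.pi * (r**3 - R_CORE**3) * RHO_MANTLE

def g(r):
    # Shell theorem: only the mass interior to r contributes.
    return G * mass_within(r) / r**2

# Scan from the center to the surface for the maximum.
g_max, f_max = max((g(R * i / 1000), i / 1000) for i in range(1, 1001))
print("max g = %.2f m/s^2 at %.3f Earth radii" % (g_max, f_max))
print("surface g = %.2f m/s^2" % g(R))
# With these round numbers the maximum lands at the core-mantle boundary,
# qualitatively matching the PREM curve discussed above; for uniform
# density the same scan would put the maximum at the surface.
</syntaxhighlight>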

Quantum computers

How close is current technology to achieving practical quantum computers? --207.160.233.153 (talk) 20:18, 1 November 2011 (UTC)[reply]

Not very close at all. There are no hardware vendors producing anything like a quantum computing core; there are no integrated systems that package quantum computers as fully-functional appliances; and there is no software that is currently available to run on (or operate) a quantum computing device. At present, quantum computing falls solidly in the realm of research. Various agencies of the United States Federal Government (especially those responsible for long-range technology planning) use the formal method called technology readiness assessment; I think that you can find that most quantum-computing-related research falls into the TRL 1 through 3 category; in other words, ranging from "keep dreaming" to "don't count on getting much funding this year." Nimur (talk) 20:59, 1 November 2011 (UTC)[reply]
Just so you know I'm not blowing hot air on account of my own personal cynicism, other reputable sources agree: quantum computing is far off in the future (at best). Here's a report, An Assessment of NASA's Pioneering Revolutionary Technology Program, from the United States National Research Council for the National Academies which states, in no uncertain terms, that quantum computing is unlikely to have any impact on technology or technology policy for at least the next ten years. Here's a review from Johns Hopkins University's Applied Physics Lab, Semiconductor Devices..., that states "... a considerable amount of technological development is required in order to bring quantum-computing systems to the marketplace." These sentiments are typical of the responses I expect from experts in the fields of computing and physics. Nimur (talk) 21:09, 1 November 2011 (UTC)[reply]
I agree. Dualus (talk) 01:46, 2 November 2011 (UTC)[reply]
They can demonstrate the very basic concept; I think they have made a very basic "core" that can juggle two qubits. That's pretty cool, but still a long way off from making a machine that's going to do anything computationally interesting. I will also note, just as a caveat to what Nimur says, that 10 years is not very much time technologically (or in any other sense). In fact, I might also point out, that the classified world is usually about 10 years ahead of the open world (at least it was with nuclear technology, quantum electronics, and computing). Which isn't to say that NSA's got some big working quantum computer somewhere behind the scenes, but I wouldn't be surprised if they had produced significantly more advanced results than what are being reported as achieved in universities. We know the NSA is going to be all over this sort of technology and pumping a lot of money into it; it stands to reason that they've probably supported it a lot more than the "public" (as in non-classified) sector has. Whether the NAS report would have access to that sort of information when making their assessment is unknown to me, but I doubt it. This is an entirely different question than the question of when such things might end up in the marketplace, or in non-NSA hands. I would reiterate that in general the classified world is only about 10 years ahead of the unclassified world — it's not usually 20 or 30 years ahead — so that's some margin for speculation, but not infinite margin for speculation. --Mr.98 (talk) 12:43, 2 November 2011 (UTC)[reply]
Actually, the NSA has a large program for funding academic research into quantum computing, including a ~$6M / year partnership with Univ. of Maryland (or at least they did as of a few years ago). Which tells me that A) significant money does exist for basic research into this technology, and B) the classified world probably isn't very far ahead of the rest of us right now, since if they really knew the answers they would probably stop giving much money to unclassified research. Dragons flight (talk) 12:58, 2 November 2011 (UTC)[reply]
Not necessarily, actually! There's always the chance that the unclassified folks will come up with something clever, and they likely are interested in what a bunch of unclassified clunkers are going to be able to come up with on their own. The analogous case I'm thinking of is laser fusion, where the US gov't did fund a lot of civilian laser fusion work while they simultaneously had their own, larger, secret program going on in parallel, a program that was indeed about a decade ahead of the open world. But anyway, this is all speculation. I'm just saying that I wouldn't be surprised if they were a decade ahead, with the caveat that "a decade ahead" isn't that much, and it might mean, "we know it is incredibly, perhaps impractically hard" (which was also the case with regards to laser fusion — the "secret" researchers had a better understanding than the "open" researchers of the difficulty that was involved in trying to actually pull it off). (I will also note that $6M/year is not very much for a serious government research effort. That's peanuts in DARPA terms, where hundreds of millions are the norm.) --Mr.98 (talk) 14:44, 2 November 2011 (UTC)[reply]
Does "practical" mean a working prototype in a lab or a cheap box at Wal-Mart? I think the former is much more important than the latter. The killer app for quantum computers is breaking public-key cryptography. As soon as anyone in the world can (publicly) do that, everyone will have to stop using public-key cryptography and the ability to break it will become useless. What's left is fast simulation of quantum systems, and some problems in graph theory, and that's about it, I think. There may never be cheap Wal-Mart quantum computers because it's not clear what they'd be useful for.
Also, I don't think researchers are making progress toward a useful quantum computer. They are stuck and awaiting a breakthrough. How many years until fusion power is practical? How many years until we figure out quantum gravity? Could be six months, could be literally never. -- BenRG (talk) 22:31, 2 November 2011 (UTC)[reply]
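As a footnote to the code-breaking point: the quantum part of Shor's algorithm does only one thing, finding the period r of a^x mod N quickly. The classical wrap-around below (a minimal sketch; the brute-force period finder is the step that takes exponential time classically) shows how that period yields the factors of N, which is what breaks RSA-style public keys:

<syntaxhighlight lang="python">
from math import gcd

def find_period(a, N):
    # Brute-force period of a^x mod N: the step a quantum computer speeds up.
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_via_period(N, a):
    # Classical post-processing used by Shor's algorithm.
    assert gcd(a, N) == 1
    r = find_period(a, N)
    if r % 2 == 1:
        return None             # odd period: retry with another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None             # trivial square root: retry with another a
    return gcd(y - 1, N), gcd(y + 1, N)

print(factor_via_period(15, 7))   # (3, 5); the period of 7 mod 15 is 4
</syntaxhighlight>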

2002 PT Cruiser shuts off on its own, but the dealers can't find diagnostics. Help?

So on Saturday I was driving an automatic-transmission Chrysler PT Cruiser when, going down a street, the engine just turned off by itself. I pulled over to the curb and called my dealer, with whom I have a 3-year warranty. I didn't get any closer to the cause with the salesman who picked up the phone at the end of the day. I attempted to start the car several times; the key would turn, but no cranking would occur.

Then 15 minutes later, I managed to get it started again.

I did errands for a few more hours; then, at a parking lot, I started the car again and heard some hesitation/"roughness" in the engine. It then turned itself off, and once more I couldn't get it started. This time, I had it towed to the dealership.

When the dealership opened Monday, they attempted to investigate. However, the car ran just fine, and because their Ford-specific diagnostic computers were no good at finding problems in a Chrysler, they took it to Ed Schram Dodge, where they would have more compatible diagnostic devices. (I bought the Chrysler certified pre-owned at a Ford-Lincoln-Mercury dealership, so I have Ford's warranty plan.)

I called them later that day, and they said that the car runs just fine and the diagnostics are finding no codes; he also said that they're trying to reproduce the problem, so until they successfully do, there's just no way they can narrow it down.

I described to him what happened to me, and he couldn't narrow down what would have caused it.

However, could you help narrow down the causes that he couldn't? Why do I somehow get the problems that mechanics can't reproduce, and that aren't still there when they look? It's as if the car gained an unfriendly, temperamental self-awareness that wants to cause me trouble, but puts on a "showcase cover" for the mechanics so they can't find a thing wrong with it. This is just adding more trouble to my week than necessary. Thanks kindly in advance for getting me closer to a resolution, --70.179.164.113 (talk) 21:49, 1 November 2011 (UTC)[reply]

It's definitely an electrical problem, that much is obvious. Perhaps it is loose wiring in the ignition or in the interface between the computer and the ignition; an engine switches off when the keys are removed. Plasmic Physics (talk) 22:18, 1 November 2011 (UTC)[reply]
By any chance is this a repossessed vehicle ? If so, it sounds like a dealer cut-off switch has been activated. However, this doesn't match it running roughly or running again after it has been "killed". Perhaps there's some other electrical problem, particularly in the ignition system. It might be a good idea to trace the wires in the ignition system and look for any signs of damage or water incursion (anything which could cause an intermittent short circuit or open circuit). StuRat (talk) 22:20, 1 November 2011 (UTC)[reply]
Temperature fluctuations can cause metal connections to expand and contract, contributing to the fickleness of a loose connection. Plasmic Physics (talk) 22:34, 1 November 2011 (UTC)[reply]
  • It's probably not something as simple as a loose connection on the wiring loom or water in the electrics (I owned a Ford Sierra once, where the rear lamp housings used to fill up with water. That was always fun). I would guess you have a faulty sensor in your engine management system. It often seems to be the temperature sensors that go, and do things like tell the fuel injection system that the engine is warmed up when it's not (had that all through one cold winter) or tell the alert system that the exhaust manifold is blocked when it's not (had that in the car I currently drive, puts up a 'do not drive on pain of death' warning on the dash, v. helpful when in the middle of nowhere). Put up with one for a long time that intermittently told me that the anti-lock brakes were not working, usually just as I heard them come on. The problem with the sensors is that they have chips in them, so they will keep trying to think....and when they fail they do so unpredictably, randomly, and are impossible to diagnose until they burn out. Elen of the Roads (talk) 23:43, 1 November 2011 (UTC)[reply]

Why mechanics can't reproduce the problem is pretty simple: your car is a machine. A machine is a device that is capable of performing a certain task under very specific conditions. If just one of these conditions isn't fulfilled, the machine will fail. The specific conditions that caused the failure are unknown to the mechanic, so he has to use trial and error to find the cause of the problem... and this can take a lot of time, and certain conditions may simply be impractical to reproduce economically. I generally prefer technical equipment (like cars, cellphones, computers) to be as simple as possible (and to include as few electronics as possible) in order to minimize the risk of failure. Unfortunately, most people do not realize this and demand items that are more convenient to use (while they ARE working), meaning they will be more complicated and thus more prone to failure :( Phebus333 (talk) 02:22, 2 November 2011 (UTC)[reply]

Like Fly-by-wire systems in some Airbus jets? If it ain't Boeing I ain't going... 67.169.177.176 (talk) 03:59, 2 November 2011 (UTC)[reply]
That does seem questionable from a safety POV, both because any newer technology is less thoroughly tested, and because electrical systems seem to be inherently less reliable than mechanical systems. For example, an EMP isn't going to knock out a purely mechanical system. I would be far more comfortable with a fly-by-wire system that also has mechanical backups. I have similar thoughts on power windows in cars. It would be nice to be able to crank them down, even if the wires were shorted out, when driving through your favorite lake. :-) StuRat (talk) 04:48, 2 November 2011 (UTC)[reply]
There's a saying that "The science of electronics is a study of [loose] electrical connections." :-) 67.169.177.176 (talk) 05:18, 2 November 2011 (UTC)[reply]

The questioner said that (1) the engine stops when not expected to, and (2) he experienced a time when turning the key did not result in cranking. Therefore it is not a problem with sensors or water in the wiring; none of these will prevent cranking without other symptoms or trouble codes. Look for loose connections in the 12V and ignition switch wiring. "Bash test" the ignition switch. The first place I'd look is the connections on the battery terminals. You might think a trained dealer mechanic would spot such a simple fault, but in my experience that is not necessarily so. These days they all get trained in using the computer to tell them what to do, and don't have so much of the common sense and logical thought processes of the old-style "greasy hands" mechanics. These days you get a lot of "no trouble codes in the computer, therefore there is no fault / I don't know what to do". - Keith *********** — Preceding unsigned comment added by 121.215.50.144 (talk) 05:58, 2 November 2011 (UTC)

This is a long shot, but is it possible that, only in the vicinity where the failures occurred, there might be some form of radio-frequency interference that jiggered the car's electronics?
I've occasionally read in the press of cases where the emissions from something like airport radar, or from a damaged domestic device (here is a recent example from my neck of the woods), have prevented some nearby cars from unlocking, starting or running properly: it's only some because only a few models are going to be vulnerable to the specific rogue "signal", because the signal strength might be sufficient over only a very small area, and because the faulty device (say) might only be operated occasionally. To diagnose such a problem you'd have to find other cases that correlated to the same location, or repeatedly test the car or an identical model in the suspect location.
Anyway, you have my sympathy because intermittent problems can be a nightmare. On one car I used to own, one of the rear lights would randomly fail, but would sometimes come on again after a firm thump on the bodywork next to it. Despite repeated attempts by the authorised dealership where I bought the car and had it serviced, they were unable to trace and fix the fault, and I was stopped a couple of times by the police for being improperly lit (though thankfully they let me off when I was able to thump-fix the light and explain). {The poster formerly known as 87.81.230.196} 90.197.66.254 (talk) 15:38, 2 November 2011 (UTC)[reply]