Wikipedia:Reference desk/Archives/Science/2013 October 16

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


October 16


Trackside thing

 
[Image caption: What is this?]

Over the past few years, thousands of these things have appeared alongside UK railways. What are they? They are about five feet tall and in groups of maybe 30-60 spaced about 20 feet apart. This one was quickly snapped on my phone near Milton Keynes.--Shantavira|feed me 08:02, 16 October 2013 (UTC)

They are trackside lights for night inspections. Source Mike (talk) 13:53, 16 October 2013 (UTC)
Ah, that's interesting. Thanks.--Shantavira|feed me 15:53, 16 October 2013 (UTC)
No problem! I found it just as interesting! Mike (talk) 15:54, 16 October 2013 (UTC)

Children crying because they can't do something?


Is it common for children to cry because they find something difficult? I recall crying at about age 10 because I found it too hard to join up my writing (my cursive is now pretty neat, IMO), which seems a strange thing to get upset about. --129.215.47.59 (talk) 10:43, 16 October 2013 (UTC)

The Wikipedia article titled Crying states "Crying is believed to be an outlet or a result of a burst of intense emotional sensations". The inability to complete a task can bring on intense stress, which would be an "intense emotional sensation". --Jayron32 10:48, 16 October 2013 (UTC)
If people around you are doing something effortlessly, and being praised for it, but you can't do it yourself, that's pretty frustrating. It happens to adults too. I'm in my sixties and still can't do joined-up writing, not legibly anyway. Fortunately one develops a sense of perspective with time.--Shantavira|feed me 11:06, 16 October 2013 (UTC)
A child might well cry because he does not understand some homework requirement, such as finding the common denominator in order to add fractions, or writing a book report. Crying can also be a learned response, or a form of manipulation, to get a parent to do the homework for him, or to just avoid doing it at all. I did not see in our article about crying any mention of crying as a learned or intentional tactic, a form of manipulation, or a form of emotional blackmail to get something or to avoid something, although books say it sometimes is. Edison (talk) 21:48, 16 October 2013 (UTC)
I suggest that you pretend to cry, record yourself, and play it back to see how well you did. My guess is that you, like almost everybody else, would be laughably bad at it. --Bowlhover (talk) 05:10, 17 October 2013 (UTC)
I can cry on command; it's very lifelike, and I doubt anyone who didn't know me extremely well would be aware it was fake. A lot of my ex-girlfriends, and a few ex-boyfriends (women seem better at this), could do the same with varying degrees of realism- but all on the believable end of the spectrum. Then again, I know a lot of actors, and while not all of them were, this may bias things a bit. At any rate, it isn't that difficult to do consciously, and if it is a learned response, it might not even be being done consciously- in other words, it's neither implausible nor infeasible. Phoenixia1177 (talk) 06:47, 17 October 2013 (UTC)
When I was four the ice cream truck came around, and when I asked if I could have some I was told to get the money off the counter, which I then handed to my dad, since I had never bought it before myself. He told me to go to the truck and buy it. I said I thought he was going to help me, and burst out crying. It was more traumatic than having stuck my fingers in an electric socket or having sat on a fire ant nest, neither of which seemed like a betrayal. I don't believe I ever cried like that again, although a few years later I did throw up when we had a substitute teacher who was administering a spelling test without giving us enough time to write down the answers. μηδείς (talk) 02:59, 18 October 2013 (UTC)

Power used by mobile phone and radio towers


When a load is connected to the secondary of a transformer, the power drawn by the primary from the source increases because of the magnetic coupling. Will something similar happen when a mobile phone is switched on? Suppose a thousand phones get switched on (and hence get 'connected' to the tower): will the tower use more power? The same question applies to radio transmission towers. Does a radio tower consume more power when radios get tuned to its frequency (and hence get 'coupled')? - WikiCheng | Talk 10:53, 16 October 2013 (UTC)

From first principles of physics, we know that a transmitting antenna's effective impedance does change due to the presence of a receiving antenna, even if the receiver is many miles away. But that effect is tiny - you can do the math to verify. In the case of ordinary telecommunications, radio antennas operate in the far field (as opposed to near field). Definitionally, this means that the effects of the receiving antenna are too far away to matter.
A much more prominent effect is that modern digital telephones use a bidirectional protocol. Telephone transmitters are not broadcast towers: they are nodes in a many-node, asymmetric full-duplex communication. The transmitter has more work to do when multiple devices are attached. Perhaps the easiest protocol to intuitively understand is time-division multiplexing; adding more telephones would require a higher duty cycle; in other words, the transmitter is on for a longer part of each time interval, and therefore uses a higher average power.
Depending on where you are and which company runs your mobile telephone, time-division multiplexing might be supplanted by more advanced digital communication protocols; but in principle, whichever scheme is chosen will have the same general relationship between number of users and total transmitter power usage. (Thanks to a rule of thumb of circuit design, the gain-bandwidth product, we can relate the engineering tradeoffs between time- and frequency-multiplexing back to first principles of physics, such as the conservation of energy.) Whether you spend the power over a wider frequency spectrum during a short interval (typically using complex digital codings), or over a narrow spectrum for a longer interval (using time-division scheduling), the same power-bandwidth product gives the same signal-to-noise ratio. In actual designs, practical details may shift the optimal choice in one direction or the other. So this gives the engineers who design radio protocols a little room for flexibility, and lets them pick the best available scheme that is implementable in today's electronics technology.
In closing, I should mention that the concepts of base load and variable load also apply to transmitters; it is plausible that for a large transmitter, the base load is so close to the variable load that the transmitter's power supply cannot reasonably switch modes, or otherwise deliver a variable quantity of power. Such large power supplies are difficult to design efficiently, and this is an active area for new engineering research and development. Now that software can switch transmitters on and off as fast as, say, once per millisecond (!), power supplies need to be designed that can toggle between peak and idle at rates very close to those software latencies. This seems trivial to engineers with backgrounds in software and digital systems, but as the power supply designers need to build capacitors and inductors and so forth, they are constrained by device size and switching time. So, while controlling a digital signal at two gigahertz is very easy using today's computers, swinging a couple hundred kilowatts on and off at even one kilohertz is quite difficult. Compound this difficulty by the fact that your cellular tower is sometimes in a remote area that might not connect to a utility electric grid: it might have its own diesel engine or gas turbine... Nimur (talk) 13:37, 16 October 2013 (UTC)
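To put rough numbers on the duty-cycle argument above, here is a minimal Python sketch; the idle power, slot power and slots-per-frame are made-up illustrative values, not real GSM parameters.

 # Toy model: average power of a time-division-multiplexed transmitter.
 # All numbers are illustrative assumptions, not real GSM parameters.
 IDLE_POWER_W = 50.0     # baseline electronics that are always on (assumed)
 SLOT_POWER_W = 200.0    # RF power while a time slot is being driven (assumed)
 SLOTS_PER_FRAME = 8     # time slots per TDMA frame (assumed)

 def average_power_w(active_phones):
     """Idle floor plus RF power scaled by the duty cycle."""
     occupied_slots = min(active_phones, SLOTS_PER_FRAME)
     duty_cycle = occupied_slots / SLOTS_PER_FRAME
     return IDLE_POWER_W + SLOT_POWER_W * duty_cycle

 for n in (0, 1, 4, 8, 1000):
     print(n, "phones ->", average_power_w(n), "W average")

More attached phones means a higher duty cycle and hence higher average power, until every slot in the frame is occupied.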

Thank you - WikiCheng | Talk 06:03, 21 October 2013 (UTC)

A geometrical analysis of camera capture blur?


Suppose one takes a picture with a camera like the Canon EOS 5D Mark III, which has an image sensor with a size of 36 x 24 mm and 5760 x 3840 pixels, and captures an object that moves 0.178 meters sideways during the exposure time, at a distance of 180 meters from the photographer. How many mm or pixels will the light from the object traverse during the image exposure? I suspect the distance from the middle of the lens to the sensor plays a role here, but I don't find any data to calculate with. Electron9 (talk) 12:15, 16 October 2013 (UTC)

You have not supplied one vital piece of information: you need the focal length of the lens in use. The pixels traversed will obviously be greater if the lens is set for a higher zoom-in. 120.145.145.144 (talk) 12:43, 16 October 2013 (UTC)
I updated the distance to 180 m, and found this EXIF info which I hope completes the input data set:
Parameter Value
ApertureValue 7,00 EV (f/11,3)
FocalLength 120,0 mm
FocalPlaneResolutionUnit Inch
FocalPlaneXResolution 3942,5051
FocalPlaneYResolution 3950,617
XResolution 300/1
YResolution 300/1
Perhaps the EXIF "FocalLength" is another type of focal length? Electron9 (talk) 14:51, 16 October 2013 (UTC)
Assuming that focal length is correct, the object's image would have moved 120 × (0.178/180) = 0.1187 mm during the exposure time, which at 160 pixels/mm (5760 pixels across the 36 mm sensor width) amounts to about 19 pixels. The 36 x 24 mm sensor size is the same as the standard 35mm image size, so at least there's no distinction here between the real focal length and the 35 mm equivalent focal length. Red Act (talk) 17:16, 16 October 2013 (UTC)
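The same arithmetic as a small Python sketch, using the figures above (the 160 pixels/mm is just 5760 px divided by the 36 mm sensor width):

 # Image-plane motion blur via the small-angle approximation:
 #   blur ~= focal_length * (object displacement / object distance)
 focal_length_mm = 120.0   # from the EXIF data above
 motion_m = 0.178          # sideways motion during the exposure
 distance_m = 180.0        # distance from camera to object
 sensor_width_mm = 36.0    # full-frame sensor width
 sensor_width_px = 5760

 blur_mm = focal_length_mm * (motion_m / distance_m)
 blur_px = blur_mm * (sensor_width_px / sensor_width_mm)
 print(round(blur_mm, 4), "mm =", round(blur_px, 1), "pixels")  # ~0.1187 mm, ~19 px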
It's occurred to me that your phrase "another type of focal length" might possibly be due to your finding the comma in "120,0" to be confusing, because it looks like two numbers separated by a comma, or a number in the thousands with an inadequate number of zeroes after the comma. It actually just means 120.0; a comma is used instead of a decimal point in many parts of the world, including most of Europe and South America. See Decimal mark. Red Act (talk) 21:01, 16 October 2013 (UTC)
Considering the camera dimensions are 152 x 116 x 76 mm, I find the 120 mm focal length rather large. Electron9 (talk) 06:10, 17 October 2013 (UTC)
In the picture in the link you gave, it looks like the camera's largest dimension is along the optical axis. And the dimensions don't appear to be for just the body of the camera without the lens, because 76 mm would be an extraordinarily thick camera body. So the 120 mm focal length seems quite plausible to me, because it's 32 mm less than the 152 mm total size of the camera along the optical axis. Red Act (talk) 11:57, 17 October 2013 (UTC)

Can natural remedies also be synthetic?


Among the naturally occurring substances sold as pharmaceutical drugs (take, for example, melatonin or 5-HTP), could it be that you find several synthesized substances? That would be funny, since some people take them because they want to avoid artificial substances. OsmanRF34 (talk) 15:34, 16 October 2013 (UTC)

Your terminology is wrong -- in the US those things are sold as dietary supplements, not as pharmaceutical drugs. My understanding is that in the US, if a chemical is synthesized, it is treated as a pharmaceutical drug, and the manufacturer has to provide proof of safety before it can be sold. If it is derived from a plant or animal, it is treated as a dietary supplement, and the burden of proof is in the other direction. Looie496 (talk) 15:50, 16 October 2013 (UTC)
I doubt that's correct, Looie. There are synthesized "artificial" products that are not drugs, such as many "artificial flavors." There are specific definitions of "drug", and so I can't speak to them all, but at least some of those do not turn on whether a product is synthesized. I would need to see a good cite before I believed Looie's comment. Shadowjams (talk) 23:07, 16 October 2013 (UTC)
See the section about the DSHEA in [1]; it discusses the shift in burden of proof to the FDA- the initial segment discusses that drugs must undergo stringent testing. Here's the actual text of the act (DSHEA), [2]; see section 4 about the burden of proof issues. This, also from the FDA, [3], mentions that the FDA is required to take action after it is on the market. This from the FDA, [4], states that supplements do not require approval. For an FDA def. of a drug, see section 321(g)(1) of [5]- in that section, it mentions a distinction between drug and supplement relating to section 343(r), which can be found in this document, [6]. Phoenixia1177 (talk) 09:08, 17 October 2013 (UTC)
By the way, this is in relation to the burden of proof claim- natural -vs- synthetic does not appear to enter into the issue; it appears to be related to what is being claimed. I haven't done the research on that specific aspect, but nothing I've read seems to draw a distinction, and it is not directly in the definitions- indeed, the linked ones would appear to refute the claim, unless they were amended somewhere. Phoenixia1177 (talk) 10:09, 17 October 2013 (UTC)
But the line gets blurry when the same chemical can either be produced by nature or in a lab. StuRat (talk) 19:57, 16 October 2013 (UTC)
This makes me think of 1080 poison; it is used to eradicate possums, which are an introduced pest in New Zealand. If I'm not mistaken, being sodium fluoroacetate, it is a salt of a "natural substance", isolable from certain plants. So 'natural/organic/non-synthetic' are utterly useless terms when determining the toxicity of a substance. Plasmic Physics (talk) 00:29, 17 October 2013 (UTC)
If the absolute structure of a synthetic substance is exactly the same as that of the naturally derived substance, then it is considered bioidentical, and as long as it is very, very pure, it is as "good" and as "healthy" as the purified "natural" one. This is essentially an economic decision: is it cheaper to extract from the natural source (e.g. morphine) or to synthesize (e.g. ephedrine and pseudoephedrine)? Die Antwoorde (talk) 08:07, 17 October 2013 (UTC)
<rant>I find the "natural = good for you, artificial = bad for you" philosophy one of the strangest perversions of logic of modern times. Firstly, an artificial substance that is chemically identical to a natural one is indistinguishable from it - it doesn't remember how it was made; it is, literally, the same thing. Secondly, the statement is patently, demonstrably ridiculous - arsenic, mercury, uranium, snake venoms, hemlock, amatoxin, and all infectious diseases are entirely "natural", and not one of them is good for you. Paracetamol, aspirin and hundreds of other drugs are entirely synthesised and, if taken appropriately, cause little harm compared to the amount of good they do. Thirdly, it is not even applied consistently - I have often seen vaccines disparaged because they are "artificial" when they are in fact one of the most natural therapies I can think of - priming the immune system to respond to a previously encountered antigen is entirely in harmony with how the immune system naturally works. It is certainly more natural than many alternative therapies; take acupuncture - in what situation in nature are needles inserted in extremely specific locations on the body? This is not even getting into the strangeness of the philosophical position that regards humans and what they make and do as "not part of nature".</rant> Equisetum (talk | contributions) 11:24, 17 October 2013 (UTC)
For natural vs nature-identical: the difference resides in the impurities present!
For compounds that are "human designed": it is mainly a history-of-safe-use thing. And it is not like we even know all of what's naturally in everything, or whether certain "human designed" compounds naturally occur in something! Die Antwoorde (talk) 12:55, 17 October 2013 (UTC)
Fair enough - if the impurities make a difference then they make a difference, but then the issue is the impurities, not the natural vs. artificial origin (if you could replicate the impurities as well, the artificial stuff would be just like the natural). Likewise with the history of safe use - the issue is the history of safe use, not the origins; a newly discovered natural substance is more risky than an artificial compound with a long history of safe use. I accept that natural vs artificial can often broadly correlate with both these things, but I find the masking of the real issues behind the facade of natural vs artificial to be unhelpful at best and profoundly damaging to people's ability to make rational decisions at worst. Note that I'm not accusing anyone in this thread of this at all - in my opinion it's predominantly the mainstream media and the food and supplement industries that are responsible (along with the generally abysmal standard of school education in critical thinking and the assessment of evidence). Equisetum (talk | contributions) 14:40, 17 October 2013 (UTC)
What about environmental etc. differences between extraction of natural vs synthetic, not just the bottom-line $$? 122.111.240.138 (talk) 15:37, 17 October 2013 (UTC)
Yes, agreed - where it is environmentally better to naturally extract a compound, that should be taken into account (and often isn't - I didn't claim that I'm in favour of pharma and agritech's approaches to the issue either, quite the opposite!); however, where it is environmentally better to artificially synthesise a compound (such as where extraction by e.g. distillation uses a lot of energy, or where the substance originates from an endangered species), that should also be taken into account. Again, all I am saying is that natural vs artificial is a poor proxy for the real issues such as this. Equisetum (talk | contributions) 16:12, 17 October 2013 (UTC)
But wouldn't the probabilities of impurities be lower in the case of artificial compounds? It seems intuitively easier to synthesize a pure compound than to extract a pure one from some organic matter. OsmanRF34 (talk) 15:03, 17 October 2013 (UTC)
Yeah, but the impurities will likely be non-natural stuff with no history of safe use.
A fair point - now we are getting to real issues - I am indeed willing to accept that in the specific case of a substance which has long been safely used in a natural preparation, the risk of using a new artificial preparation is possibly elevated due to the new impurities (assuming that the preparation can't be demonstrated, e.g. by mass spectrometry, to be effectively impurity-free). It may also be less effective due to a missing beneficial impurity. However, this is equally true of a new artificial synthesis method which replaces an older one - it's about the change of synthesis method, not natural vs artificial. The trouble is, lots of thinking on this is clouded by the fact that most older, more well-established preparations happen to be natural due to the relative youth of the field of synthetic chemistry. Equisetum (talk | contributions) 16:01, 17 October 2013 (UTC)
Yes, this is what I mean about general, broad correlates. It's important to bear in mind though that a) artificially produced substances will have impurities at some level as well, such as reaction intermediates and degradation products; they will just be different ones to the natural products (and they can be in any amount - it entirely depends on the degree of purification; it may be easier to get a pure compound artificially, but the purity you end up with still depends on how much effort you put into purification) and b) impurities, whether natural or artificial, are probably about as likely to be bad for you as good for you, and many are going to be completely neutral (I don't have any source for this, it's just naive logic - I would be interested to hear if anyone has done studies on this point). Equisetum (talk | contributions) 16:01, 17 October 2013 (UTC)
There is also the possibility of introducing new impurities (even into the natural products) through the purification processes. How pure does the synthetic caffeine that goes into Coke have to be, I wonder? U.S.P.? — Preceding unsigned comment added by 122.111.240.138 (talk) 16:28, 17 October 2013 (UTC)
Caffeine has to be at least 98.5% pure to meet the USP standard apparently [7], so still some room for impurities there, but not that much. The added-impurities thing was what happened to Dasani, a UK brand of bottled "purified" water produced by Coca-Cola - they took municipal tap water and ran it through a purification process which turned out to oxidise the bromide in the water to potentially harmful levels of bromate. That was a fairly depressing episode - they took a perfectly good product (tap water) and turned it into an unnecessary, environmentally damaging, harmfully contaminated one before selling it at a profit. Equisetum (talk | contributions) 16:51, 17 October 2013 (UTC)
  • Looie and Phoenixia gave the correct answers. The "distinction" here is an arbitrary one dreamt up by legislators and lobbyists. There is no such thing as a "natural" chemical which can't be synthesized--cost is the only real problem. (An exception was cannabinol, a complex molecule which no one yet knew how to synthesize when I was an undergraduate.) The notion peddled to the FDA in the nineties was that if a substance could be concentrated from a living organism and wasn't marketed as a drug, it should be legal to sell without a prescription. This is a legal distinction, not a scientific one. μηδείς (talk) 20:10, 17 October 2013 (UTC)

Antibiotics in routine lab work


I was wondering how/why certain antibiotics are selected for use in lab work. Why penicillin and streptomycin and ampicillin and kanamycin? Is their use or disuse in medicine a consideration? --129.215.47.59 (talk) 16:17, 16 October 2013 (UTC)

I'm not sure what you mean by "lab work", but generally speaking the antibiotic effects of these drugs are a consequence of their chemical properties, which can make them useful in other contexts. For example, penicillin breaks down a component of the bacterial cell wall -- it also causes epileptic activity when applied to brain tissue in high concentrations. Looie496 (talk) 16:27, 16 October 2013 (UTC)
"Lab work" meant such work in a life science research facility, forensics lab or a myriad of other establishments employing such techniques as bacterial transformation and/or culture for production of plasmid DNA, among other things. --2.97.26.56 (talk) 21:39, 16 October 2013 (UTC)
Almost certainly price and availability factor in greatly, but there is also a lot of mindless rote tradition in biology. If someone does a demonstration once and it works, the next person will tend to do the same thing, and the next, and the next... Wnt (talk) 04:11, 17 October 2013 (UTC)
More mindless than in chemistry and physics? 129.215.47.59 (talk) 10:35, 17 October 2013 (UTC)
I'd agree there is a lot of rote tradition in biology - but it isn't exactly mindless most of the time: if something works well you stick with it unless you have a good reason not to. This is for at least two very good reasons: 1) it lets you more easily compare your experiments with others, both in the same lab and outside it; 2) you don't generally have to spend nearly so much time optimising protocols if they are already well used and characterised. I quite deliberately try to use "standard" techniques when I can because of this. It does often become mindless, though, when something doesn't work well for a particular system and people stick with it anyway because it is "what everyone does". With the antibiotics specifically, they are often used as a selection agent when transforming bacteria etc. (i.e. you include an antibiotic resistance gene in your construct so that you can select for those bacteria that have taken it up by plating them on antibiotic media). For this you need to use one which has a cloned, characterised and readily available resistance gene. In practice, since people don't tend to design their own vectors if they can use one "off the shelf", the choice is usually made for you when you choose the vector. I have never come across medical use as a specific contraindication for using an antibiotic (I presume you are thinking that you don't want to go around playing with resistance genes for medically essential antibiotics in case of accidental release). If you are using proper biosafety procedures it should not be too much of an issue in any case. I'm still not sure that I would be comfortable using a "last ditch" antibiotic such as vancomycin in the lab, although I have come across one protocol in the literature which used it (as a "quick and dirty" way to eliminate the gut microbiome of a mouse when you don't have access to germ-free and gnotobiotic mice). Equisetum (talk | contributions) 10:58, 17 October 2013 (UTC)
I used to have a construct that used chloramphenicol as a control. It was a horrible cut-and-paste vector that had been spliced together from a number of different commercial plasmids and passed down from lab to lab. No one even had a map of the whole vector, and it had duplicate restriction sites in completely illogical places, including several in the middle of the antibiotic resistance gene! Completely mind-boggling that anyone used it. Part of why I'm glad I don't do much of any molecular biology these days. (+)H3N-Protein\Chemist-CO2(-) 11:15, 17 October 2013 (UTC)
Yes - that'll be the mindless part alright (not on your part - you recognised it was bad). Jesus, I can imagine using that, grudgingly, if I absolutely had to, but an incompletely mapped vector is entirely too much flying blind for my taste. Another thing I particularly "like" about molecular biology is the fact that no-one at all follows the protocols in the standard reference book (Molecular Cloning: A Laboratory Manual), because a PhD would take half your life if you did, but there is no standard reference for the quicker protocols and shortcuts that everyone actually uses! Equisetum (talk | contributions) 12:10, 17 October 2013 (UTC)
(edit conflict) What Equisetum said, more or less. It's because you're using commercial vectors, so you pretty much have to use whatever antibiotic resistance is already built into the plasmid. Cloning out and replacing the antibiotic resistance gene in a bacterial expression vector is not particularly convenient, since the more useful restriction sites (places that restriction enzymes can cut the DNA) are near the multiple cloning site (where you put the gene), making it harder to actually excise the antibiotic resistance gene without accidentally ruining the construct. As a result, most people just stick with commercial pET vectors for expressing stuff in E. coli, which means sticking with the more common antibiotics. If you accidentally damage the part of the plasmid with antibiotic resistance, then even the successfully transformed bacteria will die when you try to grow them on antibiotics. The whole reason for using these genes is that it forces your bugs (bacteria) to keep the plasmid you've given them or die; that way, as long as you keep them on media with the appropriate antibiotic, they have to express your protein of interest. (+)H3N-Protein\Chemist-CO2(-) 11:09, 17 October 2013 (UTC)
It's kind of funny how it works. There's been no shortage of people making up their own vectors for this and that, even publishing them here and there. The choke point is that people see an article about a clever new technique, and then they think, yeah, but is it REALLY going to work? There's a crisis of "peer review" currently ongoing, where conniving private companies have three people glance over a paper and claim they should be rewarded by controlling the results of publicly funded research forever. We need a mechanism whereby people do a real 'peer review' where they actually try out the technique and candidly describe their trials and travails getting it going. Wnt (talk) 19:43, 21 October 2013 (UTC)

Bupropion: Elontril and Wellbutrin


Why does GlaxoSmithKline have two names for the same drug? I understand that fluoxetine can be called Prozac or by other names, but those are from different companies. — Preceding unsigned comment added by 80.58.250.84 (talk) 17:24, 16 October 2013 (UTC)

It may be region-specific. While GSK "owns" the drug, it gets marketed by different partners which may offer it under different brand names. Mike (talk) 17:50, 16 October 2013 (UTC)
Source Mike (talk) 17:50, 16 October 2013 (UTC)
Right, it's region-specific. The name Wellbutrin is used in the US; Elontril is used in Europe. It might be worth noting though that Bupropion is actually sold by GSK under two different names even in the US alone: Wellbutrin and Zyban. The main difference is that the Wellbutrin formulation is intended as an antidepressant, whereas the Zyban formulation is intended to treat nicotine cravings. Looie496 (talk) 18:19, 16 October 2013 (UTC)

Lucid dreams


I'm quite interested in these sorts of things. Are there any particularly notable studies and/or papers I could read on lucid dreaming? Thanks! --.Yellow1996.(ЬMИED¡) 18:21, 16 October 2013 (UTC)

Not exactly a reliable source, but it does have some good information: [8] Mike (talk) 18:48, 16 October 2013 (UTC)
Looks great (and anyone else is welcome to add what they find); thanks! :) --.Yellow1996.(ЬMИED¡) 19:02, 16 October 2013 (UTC)
Omni Magazine published a Survey of lucid dreaming in April 1987 (Archived issue). This link summarises the results: [9]. --Auric talk 19:17, 16 October 2013 (UTC)
Cool - that one is really interesting. I'll read it in its entirety soon. --.Yellow1996.(ЬMИED¡) 19:28, 16 October 2013 (UTC)
While not about lucid dreaming, this is a good article about sleep. Bus stop (talk) 20:21, 20 October 2013 (UTC)

Is it true you can peer into the distant past?


Is it true you can peer into the distant past by looking up at the stars? What does that mean? How far in the past? 212.96.61.236 (talk) 21:42, 16 October 2013 (UTC)

I don't even know how to answer this... no, sorry! Mike (talk) 21:48, 16 October 2013 (UTC)
The speed of light is finite, so when you look at the nearest stars you see them as they were about four years ago, and distant galaxies can be seen as they were billions of years ago.
What would be much more interesting is if they had huge mirrors and you could see what happened on Earth eight or more years ago. They don't, so we can't, but hopefully aliens have recorded the Jack Benny Show or I Love Lucy so we can enjoy them again. Even now aliens are recording and treasuring The Rush Limbaugh Show or studying it in their equivalent of media studies at university ;-) Dmcq (talk) 22:07, 16 October 2013 (UTC)
We recorded those shows ourselves, you know. StuRat (talk) 14:36, 17 October 2013 (UTC)
I hope they liked Doctor Who; there are still a few 'missing episodes'. 220 of Borg 04:33, 19 October 2013 (UTC)
  • We know the distance to various stars and galaxies in light years. Sirius, for example, is about 8.6 light-years away, so it takes its light about 8.6 years to get here. So when you look at it you are seeing it as it would have appeared 8.6 years ago to somebody in the same solar system as it. This applies to the Alpha Centauri system, which is just over 4 light years away, and to the Andromeda Galaxy, which is 2.5 million light years away. Betelgeuse, which is in the process of dying, is 642 light years away. It may actually already have gone supernova, but we just haven't seen it yet. μηδείς (talk) 22:14, 16 October 2013 (UTC)
You can see into the very recent past by looking at your hand... the light that reaches your eyes is a couple of nanoseconds old - so you're seeing your hand as it was a teeny-tiny fraction of a second ago - not as it is "now". By extension, everything we see is somewhat delayed due to the time it takes the light from that object to reach us. Our other senses are delayed by even more than that. Sound waves travel at around 760 miles per hour - so if you can hear something happening a mile away, you're hearing it from about 5 seconds into the past. When the island of Krakatoa exploded, it was heard 3,000 miles away - and those people heard an event that had already happened four hours in their past! So yes, when you look out at the sky, you can see into the past. To pick a concrete example - on a dark night, with binoculars, you can spot the Crab Nebula, the remnant of a supernova - and what you see is what was happening there 6,500 years ago, long before the pyramids were built in Egypt. Nobody knows for sure what it looks like right now, and we won't know that for another 6,500 years. SteveBaker (talk) 22:43, 16 October 2013 (UTC)
More prosaically, every time lightning flashes, you see it nearly instantaneously (some very small fraction of a second), while you hear the thunder a few seconds later - a handy gauge for estimating how far away the lightning is. And if you're some distance from a ball game, it's almost unnerving to see the batter hit the ball soundlessly, and then hear the crack of the bat as he's running toward first base. Seeing into the past, hearing from the past. ←Baseball Bugs What's up, Doc? carrots 03:02, 17 October 2013 (UTC)
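That gauge as a small Python sketch, assuming the sea-level speed of sound (about 343 m/s); the delays are just example values:

 # Distance to a lightning strike from the flash-to-thunder delay.
 # The flash arrives essentially instantly; the sound covers ~343 m per second.
 SPEED_OF_SOUND_M_S = 343.0   # dry air at about 20 degrees C (assumed)

 def strike_distance_km(delay_s):
     return SPEED_OF_SOUND_M_S * delay_s / 1000.0

 for delay_s in (1, 3, 5, 10):
     print(delay_s, "s ->", round(strike_distance_km(delay_s), 1), "km")
 # 5 s of delay is roughly 1.7 km, i.e. about a mile.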
You may want to try to spot M81 with the naked eye; see here for directions. You will then be looking 11.8 million years back in time with the naked eye. Count Iblis (talk) 23:36, 16 October 2013 (UTC)
Although you can't see it with the naked eye, the cosmic microwave background radiation was emitted around 380,000 years after the Big Bang. We study it to find out what was happening in the universe 13.8 billion years ago, before any stars or galaxies had formed. It's not possible to directly look back any farther than that, because the universe was not transparent at earlier times. --Amble (talk) 00:13, 17 October 2013 (UTC)
Can things interact faster than the speed of light? If not, does "now" have any real meaning for distant objects?  Card Zero  (talk) 02:43, 17 October 2013 (UTC)
That's kind of the "God viewpoint", i.e. somehow being in more than one place at once. But speaking in mortal terms, if light took 4 years to get from Alpha Centauri to us, then we're seeing it as it was 4 years ago... and conversely, if some cognizant being is there and can see our sun, they're seeing it as it was 4 years ago. Barring some catastrophe in the interim, their "now" should be just as meaningful to them as our "now" is to us. Would they be exactly the same "now"? If you could have magically plunked down a pair of clocks, set to the same time, in both places 4 years ago, and then magically retrieved them from both places now, would they still be in sync? Maybe, maybe not. But they should be close enough for government work. ←Baseball Bugs What's up, Doc? carrots 02:55, 17 October 2013 (UTC)
Andromeda paradox. Count Iblis (talk) 03:14, 17 October 2013 (UTC)
At least under the special theory of relativity, the concept of now (i.e., events happening at the same instant of time) makes sense in a fixed inertial frame of reference. However, observers in different frames of reference need not agree on whether two spatially separated events occur simultaneously or not; so the concept of now is not absolute. See relativity of simultaneity for further details. Abecedare (talk) 03:16, 17 October 2013 (UTC)
That's an oversimplification though. If one has the relative accelerations, one can choose a frame of reference and calculate a now relative to it. Of course you won't be simultaneously aware of things outside your light cone. But existence and awareness are two different things. μηδείς (talk) 03:48, 17 October 2013 (UTC)
Yes, the universe is expanding. The redshift of distant objects is a Doppler-like phenomenon and is partly explained by the Hubble flow of the universe. Three dimensions are hard to visualize, so a simpler model is to use the 2D surface of a perfectly spherical balloon. If you imagine the light paths between objects as great circles on the balloon, you will see that every point on the surface moves away from every other point as it inflates. The further the object, the faster it moves. As light has a constant velocity, this movement is reflected in a Doppler shift, commonly called the redshift. This is the way we measure distance to galaxies and stars, as the emission frequencies of elements at zero relative velocity are known. The more a galaxy's light shifts to longer wavelengths, the farther away it is and the faster it is moving away compared with nearer objects. The farther away it is, the older the system we see and therefore the farther back in time. It's a fallacy, though, to say we are "looking back in time", because there is no universal frame of reference (which point on the surface of the perfectly spherical balloon is the center?). We will never look back in time to our own sun; rather it is a measure of distance. It's 8 light-minutes away. For far-away galaxies, it's more like saying the time difference and the spatial difference do not vary by the same amount, through the theories of relativity. They are moving away in time from the big bang just as we are, but the speed and distance makes it look like they are closer to the big bang than we are. --DHeyward (talk) 09:55, 17 October 2013 (UTC)
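For the nearby, low-redshift regime only, Hubble's law can be sketched in a few lines of Python; the Hubble-constant value is an assumed round number, and z ≈ v/c breaks down for the very distant galaxies discussed above, where a full cosmological model is needed:

 # Hubble's law v = H0 * d, with the low-redshift approximation z ~= v/c.
 # Only the nearby linear regime; not valid at cosmological distances.
 H0 = 70.0            # Hubble constant, km/s per megaparsec (assumed round value)
 C_KM_S = 299792.458  # speed of light, km/s

 def recession_velocity_km_s(distance_mpc):
     return H0 * distance_mpc

 for d_mpc in (10.0, 100.0, 500.0):
     v = recession_velocity_km_s(d_mpc)
     print(d_mpc, "Mpc -> v =", round(v), "km/s, z ~", round(v / C_KM_S, 4))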
If something happens on the sun, we see it 8 minutes later. Logic says we're looking back in time, i.e. we're seeing something that happened 8 minutes ago. ←Baseball Bugs What's up, Doc? carrots 13:14, 17 October 2013 (UTC)
Ah, that's presuming you can be on the sun and the earth and observe both emission and observation and measure it. You cannot. Really there is only your "now" and everything else is a distance to you. No matter how far you look, you cannot see yourself yesterday. I understand we separate time and space as a perception, but for the most part, when we look at far away galaxies, we are looking at the newest parts of the universe as it expands. Our frame of reference is that we are always the oldest frame in the universe. It's like asking whether the earth is rotating around the sun or moving in a straight line in a curved space. We appear to pass the same point every 365 days, but is that true? --DHeyward (talk) 16:21, 17 October 2013 (UTC)
I can't see myself yesterday, but someone else could, if they are one light-day away and have a sufficiently good telescope. I cannot literally be in both places at once, but I can imagine it, and that's sufficient. I can't literally be at my next-door neighbor's house and my house at the same time, either - yet there's no issue about their "now" vs. my "now". ←Baseball Bugs What's up, Doc? carrots 16:47, 17 October 2013 (UTC)
If you are one light-day away, they are seeing their own "now" and you as being one light-day in distance. If you use time as the reference, both observers looking at each other would say they are seeing the other one day in the past. How can both observers claim to be one day ahead of each other? These spatial separations of observers make past/present meaningless terms. For far-away objects where we only observe photons, it is a light-like interval, with the light cone propagating from the emission; then it's partly correct, as distance and time are the same and we exist in the causal light cone that can be traced pastward to the origin. --DHeyward (talk) 20:16, 17 October 2013 (UTC)
Who says both observers claim to be a day ahead of each other? No, they're in sync, and presumably astute enough to realize that, just as we are. It's effectively no different from seeing a tape-delayed ball game broadcast, 24 hours after the actual game was played. ←Baseball Bugs What's up, Doc? carrots 22:24, 17 October 2013 (UTC)
Not quite. Space-time interval#Space-like interval --DHeyward (talk) 00:38, 18 October 2013 (UTC)
You lost me at the bakery. ←Baseball Bugs What's up, Doc? carrots 00:55, 18 October 2013 (UTC)
Yes, it seems semantical, but it's really not. I understand the thought experiment that both are simultaneous and we vary in time through distance. But at a fundamental level it's simply not the case. There is only one, out of an infinite number of inertial frames of reference, in which two events occur at the same time. Hence time isn't the metric used; distance is. A simpler example is here: Relativity of simultaneity. So as a thought experiment, for a supernova that is 4 light years away, the distance is accurate but no inertial frame of reference will agree on the time of the event. The problem is accepting that there is no absolute frame of reference. No one can tell you if that supernova happened before, after or during an event on Earth with any certainty. They are not causally dependent on each other. It's as perplexing as understanding whether the Heisenberg uncertainty principle is a description of the nature of the universe or of the nature of measuring the universe. It's not very satisfying to not know what you are measuring. Even Einstein understood the dilemma and couldn't reconcile the fact that measurement and reality were hopelessly intertwined, and it doesn't matter whether the measurement affects the result or whether the result is randomly determined. God does not play dice with the universe, but the reality is we cannot distinguish the difference. It's wholly unsatisfying as an intellectual exercise. But your bakery reference is an easier point to make: how far across the universe would I have to look to see you in the bakery yesterday? I can't. Our frames of reference are too close. Possibly a supermassive black hole could bend your light cone so it appears I am seeing you in the past. Really, though, I am only measuring a distance, not time. --DHeyward (talk) 05:04, 18 October 2013 (UTC)

DHeyward, I think you are mistakenly interpreting "time-intervals are not absolute" as something akin to "there is no such thing as time intervals". Under the special theory of relativity, at least, as long as one uses a fixed inertial frame of reference, both time intervals and distances are well-defined and (in principle) measurable quantities. For example, I don't know what your statement, "a supernova that is 4 light years away, the distance is accurate but no inertial frame of reference will agree at the time of the event", even means, since (1) unless you have already (implicitly or explicitly) fixed an inertial frame, saying that the supernova is "4 light years away" is meaningless, and (2) once one has fixed the frame of reference, one can certainly (again, in principle) uniquely determine when, and how far, the event occurred in that inertial frame's timeline. Abecedare (talk) 05:26, 18 October 2013 (UTC)
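To make the relativity-of-simultaneity point concrete, here is a small Python sketch of the Lorentz transformation, in units where c = 1; the speed and event coordinates are made-up example values:

 import math

 # Lorentz transform of an event's time coordinate into a frame moving at
 # speed v along x, in units where c = 1 (times in years, distances in ly).
 def t_prime(t, x, v):
     gamma = 1.0 / math.sqrt(1.0 - v * v)
     return gamma * (t - v * x)

 # Two events simultaneous in our frame: A at the origin, B 4 light-years away.
 v = 0.5  # observer passing by at half the speed of light (assumed)
 print("t'_A =", round(t_prime(0.0, 0.0, v), 3), "yr")  # 0.0
 print("t'_B =", round(t_prime(0.0, 4.0, v), 3), "yr")  # about -2.309: not simultaneous

The moving observer assigns different times to the two events, even though they are simultaneous in the original frame.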

Again, supposing some guy (I'll call him Dr. Ort) happens to be somewhere about 1 light-day's distance away from earth, and has an incredibly good telescope that can see minute details from many billions of miles away. Today (in my time frame) I paint a sign on my bakery roof that says, "Greetings, Star Gazer!" About 24 hours later, Dr. Ort sees my message. Ergo, he's seeing me yesterday (in my time frame), i.e. as I was yesterday. Hard telling where I'll be 24 hours after painting the sign when Dr. Ort sees it, but that has no bearing on it. At any given point in time, Dr. Ort's "now" is effectively the same as my "now"; he just can't see it until 24 hours later (and vice versa). ←Baseball Bugs What's up, Doc? carrots 05:55, 18 October 2013 (UTC)
Right now, Hubble's Ultra Deep Field IR can see as far as 480 million years after the Big Bang. source Ssscienccce (talk) 12:51, 18 October 2013 (UTC)
There is no such thing as "now". It is a concept. But it has limited applicability. Strictly speaking it has no applicability, because as SteveBaker points out above, "you're seeing your hand as it was a teeny-tiny fraction of a second ago - not as it is 'now'." Bus stop (talk) 20:42, 20 October 2013 (UTC)

Quantum Relative Entropy


Is the quantum relative entropy between two pure states always either zero or infinity? — Preceding unsigned comment added by 81.155.161.54 (talk) 21:58, 16 October 2013 (UTC)

Yes, if I'm understanding Quantum relative entropy#Non-finite relative entropy correctly. Red Act (talk) 00:00, 17 October 2013 (UTC)
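A condensed sketch of the reasoning, paraphrased from the definitions in that article (worth checking against the article itself):

 S(\rho \,\|\, \sigma) = \operatorname{Tr}(\rho \ln \rho) - \operatorname{Tr}(\rho \ln \sigma)

For a pure state $\rho = |\psi\rangle\langle\psi|$ we have $\operatorname{Tr}(\rho \ln \rho) = 0$, so $S(\rho \,\|\, \sigma) = -\langle\psi| \ln \sigma |\psi\rangle$. If $\sigma = |\phi\rangle\langle\phi|$ is also pure, then $\ln \sigma$ is $0$ on the span of $|\phi\rangle$ (eigenvalue 1) and $-\infty$ on its orthogonal complement (eigenvalue 0). Hence $S = 0$ when $|\langle\psi|\phi\rangle| = 1$ (the two states coincide), and $S = +\infty$ when $|\langle\psi|\phi\rangle| < 1$ (then $\operatorname{supp}\rho \not\subseteq \operatorname{supp}\sigma$).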