Wikipedia:Reference desk/Archives/Science/2007 March 29
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
March 29
Female Vs. Male Orgasm
What is it about the female body that allows it to experience longer orgasms (average: 20 seconds), while males only experience an average 5-second orgasm? —The preceding unsigned comment was added by 68.7.0.44 (talk) 00:41, 29 March 2007 (UTC).
- This was already answered when you asked on WP:RD/M. -- mattb
@ 2007-03-29T00:47Z
- But there was no answer! It was probably better asked on the science desk anyway. Our article on ejaculation suggests a few seconds higher: 10 to 15 contractions, averaging about 0.6 s each, with each successive contraction roughly 0.1 s longer. I don't know why either. It's not their bodies, it's their brain. [Mαc Δαvιs] (How's my driving?) ❖ 01:33, 29 March 2007 (UTC)
- It certainly is our bodies! The brain just takes us to the cusp. − Twas Now ( talk • contribs • e-mail ) 15:23, 29 March 2007 (UTC)
- The brain is the most powerful sex organ.--Ķĩřβȳ♥♥♥ŤįɱéØ 16:58, 29 March 2007 (UTC)
- Good. But the orgasm is a physiological phenomenon, not merely something that occurs in your mind. That is why you feel it mostly in your groin, and diffusely throughout your entire body (assuming it is a good orgasm). − Twas Now ( talk • contribs • e-mail ) 22:09, 29 March 2007 (UTC)
self propulsing water vortex
would it be possible to create a water vortex that sustains itself by aving a very large cylindrycal container with a conical botom with a hole at its tip and ave a tube going from that hole to the top of the container and ave the presure from the weight of the water pump enough water to the top of the container and direct it in the right manner to ave a self sustaining vortex?
also would aving a swirling patern in the conical part of the container help in guiding the water to promote a swirling motion?
i am looking in to the fact that to create a vortex , the larger the amount of water you ave proportionaly the smaler the percentage of that amount as to come in motion to create a vortex
clockwork fromage —The preceding unsigned comment was added by 216.113.96.184 (talk) 00:41, 29 March 2007 (UTC).
- What you propose sounds like a perpetual motion device, which is not possible. So, you will need a pump in the process to start and keep the water moving. A swirling pattern would help, but would need to be a very specific pattern, not just any swirl would work. If the pattern goes in the wrong direction for your hemisphere or at a pitch which doesn't match the speed of the water, it would be counter productive. Adding the water at the top of the water edge entering approximately tangent to the waterline is perhaps an easier way to encourage a whirlpool to form. However, if the depth is correct for the amount of water flowing out the hole in the bottom, a vortex should form without any additional effort on your part. Incidentally, do you by any chance 'ave a Cockney accent ? StuRat 00:59, 29 March 2007 (UTC)
i dont think this could be called a perpetual motion since all the motion is produced by gravity and nor perpetuating itself , tthe question is could enough water be pushed by gravity all the way to the top to generate a vortex , and no i dont ave a Cockney accent, i live in quebec clockwork fromage
- Well, all that would happen is that the water would rise in the tube to the same level as in the cylinder, after some dynamic oscillation where it would go a lot higher, then a lot lower, then a little higher, then a little lower, and eventually end up at the exact same height as in the cylinder. An exception is if you had an extremely narrow tube, then it would permanently end up higher, due to capillary action. If you don't have a Cockney accent, what's with all the missing leading "h"'s ? StuRat 02:06, 29 March 2007 (UTC)
are you sure this would happen in a very large tank with ,lets say more than 100 ton of water ? and woudnt it be possible to concentrate the presure of the weight of the water by having a conical botom in the tank?
- No, it doesn't work that way, the pressure is exactly the same at the bottom of the huge tank, regardless of the shape of the bottom, and that will exactly equal the pressure of the tube, once the tube contains the same depth of water. Thus, an equilibrium is maintained at that point. The pressure is solely dependent on the depth of the water, not the quantity. StuRat 03:32, 29 March 2007 (UTC)
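For what it's worth, the depth-only rule StuRat describes (pressure = ρgh) is easy to check numerically. A minimal Python sketch; the 2 m depth is an invented example figure:

    # Hydrostatic pressure depends only on depth, not on how much water there is.
    RHO = 1000.0   # density of water, kg/m^3
    G = 9.81       # gravitational acceleration, m/s^2

    def gauge_pressure(depth_m):
        """Pressure at the bottom of a water column of the given depth, in pascals."""
        return RHO * G * depth_m

    # Same 2 m depth, wildly different volumes: the bottom pressure is identical.
    print(gauge_pressure(2.0))   # huge 100-tonne tank  -> ~19620 Pa
    print(gauge_pressure(2.0))   # narrow feed tube     -> ~19620 Pa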
i supose it woudnt work eighter if you ad the tube go above the water line then come back in the water than filled it up completely and then creating a vortex, the action of the vortex woudnt be enough to ave water go trough the system
- It sounds like you are talking about using suction. While that can be used to raise water above the initial level, this is only true if it leaves the tube into a tank with a lower level than the tank where it enters, like this:
    Flow direction ->
           _____
          /     \
    |~~/~~|      \
    |  |  |   |  \  |
    |  |  |   |~~\~~|
    +-----+   +-----+
- It doesn't matter that gravity is involved - such a device would still violate at least one of the laws of thermodynamics - so it's not going to happen. Whatever your device does, there is internal friction within it that requires energy to overcome - and which dissipates as low grade heat. So energy continually 'leaks' out of the system and that has to be replenished to keep it running - hence if you don't feed some kind of energy into it to keep it swirling it'll slowly run down and eventually stop. With a nicely shaped chamber and a lot of water, it might take a long time because there isn't a lot of internal friction - but it'll stop swirling after some number of hours. A common mistake of amateur perpetual motion 'inventors' is to assume that gravity (or more often magnetism) is a source of infinite energy - gravity is a force - but forces and energy are not the same thing. SteveBaker 16:25, 29 March 2007 (UTC)
Wouldn't air pressure have an effect, as the surface of the container is likely to be a lot more than that of the tube :) HS7 18:57, 29 March 2007 (UTC)
microprocessor heat
Hello, I am wondering how the heat in microprocessors is generated; I can't seem to find it in any article. I entered a local science fair with a microprocessor-related project recently and I described transistor leakage to one of my judges. However, one of the criticisms I had was that I did not point out the other reasons for microprocessor heat or give a chart/figures as to how much heat each reason generates. Do you know a source that could point me to something like this? I am looking for how heat production rises with clock speed, and anything other than transistor leakage/power density I may have missed. Thank you. —The preceding unsigned comment was added by 24.231.205.94 (talk) 01:45, 29 March 2007 (UTC).
- How about plain old electrical resistance ? Unless you use a superconductor, that's always a factor, I believe. StuRat 02:12, 29 March 2007 (UTC)
- (edit conflict):I don't know about any raw data to give you, but I'm pretty sure it's just heat from electrical resistance isn't it? Why does it get so much hotter the more you overclock? [Mαc Δαvιs] (How's my driving?) ❖ 02:14, 29 March 2007 (UTC)
- Because the thing is consuming more electricity. --Anon, March 29, 2007, 02:37 (UTC).
- More electric current, to be precise. As a result, you substantially increase the amount of Joule heating, which is the primary thermal loss mechanism in electrical circuits, AFAIK. Titoxd(?!? - cool stuff) 02:39, 29 March 2007 (UTC)
Informally, here's the issue: any electronic gate, no matter how small -- and a microprocessor contains millions of them, albeit all very small -- requires that some number of electrons be added or removed to turn the gate on or off. If you want the gate to turn on or off faster, you've got to transfer those electrons faster. The electrical unit of current -- the ampere -- just counts a flow of electrons per second. (Strictly speaking it's coulombs per second, of course, but a coulomb is just a certain number of electrons.) So to switch a gate twice as fast means transferring the same number of electrons in half the time, which means twice the current. But any time you run a current through a wire, and unless the wire is a superconductor, there's at least some resistance. When you pass a current through a resistance, a certain amount of energy is dissipated, usually as heat. (This is the Joule heating that User:Titoxd was referring to.) The power (the rate of heat generation) dissipated is in fact I²R, where I is the current and R is the resistance. So if you double the speed of a processor, to a first approximation you quadruple the amount of heat it's going to have to dissipate. If the heat sink isn't big enough to dissipate that heat fast enough, the chip gets hotter. (There are other factors involved, too, but I think this one is the most basic.) —Steve Summit (talk) 03:48, 29 March 2007 (UTC)
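A rough numerical sketch of that scaling argument, assuming a fixed charge per gate transition and a fixed wiring resistance (both values are invented purely for illustration):

    # Charge moved per switch is fixed, so current scales with frequency,
    # and the resistive loss scales as I^2 * R.
    Q_PER_SWITCH = 1e-15   # coulombs per gate transition (made-up figure)
    R_WIRE = 10.0          # effective series resistance in ohms (made-up figure)

    def switching_power(frequency_hz):
        current = Q_PER_SWITCH * frequency_hz   # amperes
        return current ** 2 * R_WIRE            # watts, first-order only

    print(switching_power(1e9))   # 1 GHz
    print(switching_power(2e9))   # 2 GHz -> four times the 1 GHz figure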
- On a quantum level the heat is due to electrons losing kinetic energy in collisions with lattice atoms and other electrons, releasing quasiparticles called phonons which are basically heat. There really isn't much more to be said about the source of heat in simple non-optoelectronic semiconductor devices than has already been said... Current zipping through transistors (and to a lesser extent, interconnects) dissipates power in the form of heat. Incidentally, this is one of many reasons large-scale VLSI transistors keep getting smaller; smaller transistors have smaller gates which require less charge to form a channel which in turn requires less driving current thereby allowing quicker mode switching with less current (and less heat). Of course this is a very simple view, but it's one aspect of thermal scaling. -- mattb
@ 2007-03-29T03:59Z
- Overclocking usually increases the voltage and decreases the current. However, the Power (P = V * I) is overall increased, due to more wasted power at higher voltage ("the voltage goes up more than the current goes down"). There are many reasons for this; you might read up on CMOS or MOSFETs to learn about their operating principles. —The preceding unsigned comment was added by Nimur (talk • contribs) 16:54, 29 March 2007 (UTC).
- Edit: Okay, in fairness it's not EXACTLY why, but you could make that argument. Anyway, I'm not sure it's totally accurate to say that the power is wasted. Higher FET saturation current reduces the gate delay product (I think that's what they call CV/I.. been awhile since I've done VLSI). Of course you also see increased heat generation from the higher drift velocity, more hot carrier effects in gates, etc which may have been what you meant anyway (forgive me if I misinterpreted you). -- mattb
@ 2007-03-29T22:26Z
- Take a look at a datasheet for any standard processor or microprocessor. Increasing frequency requires higher voltage. Increasing voltage decreases current consumption, as a general rule of thumb. Here, for example, is a M16C Microcontroller or this Pentium datasheet. For a given frequency, higher V means lower I. Power always gets wasted. How much? Depends on the processor. It's too complicated of a device to use the transistor-level current relationships to determine it. Nimur 22:42, 29 March 2007 (UTC)
- Higher voltage is required for higher clock frequency because you potentially need to reduce gate delay product to satisfy the tighter timing requirements. Boosting rail voltage accomplishes this by increasing gate saturation current per the various MOSFET models (square law, bulk charge, etc). I don't contest this fact, and it is supported by your data sheets. Let's move on.
- Your latter claims I still disagree with and I see no support for them in either of the linked data sheets (can you cite a page rather than making me go through the trouble of looking up every usage of "current" and "power"?). What is this "general rule of thumb"? I explained why increasing supply voltage to a CMOS structure will increase the drain current, what is your explanation for your claim?
- I didn't mean to imply that power is not wasted in ICs. If I gave that impression I'm sorry for being misleading. I was only pointing out that you seemed to be looking to the wrong sources for wasted power. It's an unavoidable truth that transistors require energy to operate, and there are better parasitics to blame for "wasted power" than the primary transistor operation. You can chastise me for over-simplification, but I am loath to point out that you originally used the formula for DC power dissipated by a purely resistive load as part of your explanation of (admittedly complex) IC thermal issues. Sure, I'm only considering dynamic power dissipation using the simple CV²f type model, which doesn't include non-ideal current components like hot electron effects and Fowler-Nordheim tunneling, nor does it include power losses in interconnects and contacts. However, I didn't think I'd be criticized for using a common first-order approximation that is a rule of thumb used by most of the VLSI people I know.
- I don't accept an explanation to the tune "it's too complicated to explain, here are some data sheets". If you feel I'm wrong, please don't hold back the details. I'm happy to admit my shortcoming if I've made a mistake, but I'd like a better explanation than P=VI and a couple of random data sheets. -- mattb
@ 2007-03-30T02:38Z
- Maybe you can think about this another way, using simple formulas if you'll allow me that liberty. The charge stored on a capacitor is by definition Q=CV. Consider the capacitor that forms up the gate of a MOSFET. If one increases the rail voltage, the amount of charge stored on each gate connected to VDD must increase as well. Getting more charge onto the gate requires either more current or more time to supply. We've already agreed upon the fact that increasing the rail voltage for a CMOS device allows for higher clock speeds, so I hope you'll allow me to be so bold as to rule out the "more time" possibility without much further explanation. This leaves us with the first option, more current. Increasing voltage requires increased current to charge the FET gates, but you still end up with lowered propagation delay (τ=CV/I, recall) since the drive current of the previous transistor increases proportional to the square of gate voltage per transistor I-V models (assuming that the current logic stage is being driven by another gate, as is typical). This is my very simple first-order explanation of the matter, do you see a hole in this logic? I'm not a CMOS person by any means, so if I missed something obvious in thinking up this explanation please point it out. -- mattb
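A first-order sketch of that gate-charge argument, using the simple square-law model I_D ≈ k·(V_GS − V_T)² mentioned above. The device values below are invented purely for illustration:

    C_GATE = 2e-15   # gate capacitance, farads (made-up)
    K = 1e-4         # square-law transconductance parameter, A/V^2 (made-up)
    V_T = 0.4        # threshold voltage, volts (made-up)

    def gate_delay(vdd):
        """tau ~ C*V/I for a gate driven by an identical transistor."""
        charge = C_GATE * vdd                 # Q = C*V: more charge at higher VDD
        drive_current = K * (vdd - V_T) ** 2  # square-law drain current
        return charge / drive_current

    print(gate_delay(1.0))   # seconds
    print(gate_delay(1.2))   # smaller delay, despite the larger gate charge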
@ 2007-03-30T03:00Z
- The explanation which makes sense to me, in addition to conductor resistance, is that electrons have mass. The more frequently you switch a gate that separates electrons at different voltage potentials, causing them to start or stop flowing from a higher potential to a lower one, the more heat will be generated by the change in the electrons' momentum, which is the product of an electron's mass and velocity. Do this using photons (which have no mass) rather than electrons (which do have mass) and you may not have this problem. Hence the interest in Optical computers. Nebraska bob 14:13, 2 April 2007 (UTC)
- Photon-semiconductor interactions still produce quantized heat in the form of phonons, so you don't really dodge thermal issues just by switching your signal carrier to photons (photons still have energy). I think the interest in optical computing is more for the higher ease of transmitting signals longer distances and the more complicated quantum interactions that may be utilized for computation (though this gets into quantum computing). See also nonlinear optics. -- mattb
@ 2007-04-02T17:12Z
Domestic Mains LED
My socket has an LED which connects directly to the mains and it glows to indicate potential. But I thought that LEDs are to be run in reverse bias mode and need a DC supply to glow. Is this a new type of LED??? 210.212.194.209
- LEDs emit light most efficiently when forward biased. The LED on your socket is connected to some additional circuitry to reduce the voltage and limit current flow. There's no reason you couldn't run an LED off the AC mains waveform, but if it weren't rectified the LED would blink at the rate of the line frequency. -- mattb
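For a feel for the "additional circuitry" involved, here is a minimal current-limiting sketch. All values (230 V mains, 2 V forward drop, 2 mA target current) are assumptions for illustration, and real mains indicators often use a capacitive dropper plus a reverse-protection diode rather than a single resistor:

    V_RMS = 230.0   # mains RMS voltage (assumed)
    V_LED = 2.0     # typical LED forward drop in volts (assumed)
    I_LED = 0.002   # target LED current in amperes (assumed)

    r_series = (V_RMS - V_LED) / I_LED    # Ohm's law estimate of the series resistor
    p_resistor = (V_RMS - V_LED) * I_LED  # power burned in that resistor
    print(r_series, p_resistor)           # ~114000 ohms, ~0.46 W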
@ 2007-03-29T05:59Z
- Most efficiently? LEDs are forward biased. You can damage the LED if you reverse bias it, depending on the voltage. --Wirbelwindヴィルヴェルヴィント (talk) 06:41, 29 March 2007 (UTC)
- Yes, most efficiently. There will still be spontaneous emission from a reverse biased LED, though the injection efficiency is practically zero and therefore the internal quantum efficiency is terrible. Heck, there is spontaneous emission of photons in Si CMOS devices, but the efficiency is poor (though this is compounded by the E-k band structure in addition to carrier populations that are non-ideal for spontaneous emission). Reverse biasing an LED will not damage it so long as you don't allow the reverse bias current to exceed a safe threshold corresponding with the device power rating. You're probably thinking of the avalanche and Zener breakdown regimes in reverse bias, but despite the name these are not inherently destructive so long as (again) current is externally limited. -- mattb
@ 2007-03-29T12:19Z
- The single LED device may in fact have a pair of LEDs inside oriented oppositely. Check using an ohmmeter after disconnecting from the mains. -Arch dude 16:39, 29 March 2007 (UTC)
Over here the frequency of AC currents is around 50Hz, which I suspect would be difficult to see, if the light was flashing that fast, but then I would expect rapid flashing like that to damage the light, and most people wouldn't design a circuit that damages equipment in it :) HS7 17:14, 29 March 2007 (UTC)
- You can buy "bicolour" LEDs like that. They shine red when biassed one way and green the other. If your light is orange or yellow you can bet that's what they did because there are no natural yellow or orange LEDs. You can't see them flickering red/green/red/green at even 10Hz - let alone 50 to 60Hz. LEDs can flash on and off millions of times per second without any damage whatever - they don't heat up and cool down like lightbulbs! Many old-fashioned LED pocket calculators (for example) would turn the LED display on and off again briefly about 20 to 30 times per second to save battery life - so this is a well-known, well-understood technique. SteveBaker 17:33, 29 March 2007 (UTC)
- Not trying to be overly nitpicky, but there are yellow and orange LEDs; you can accomplish both colors with InP and GaP heterosystems. -- mattb
@ 2007-03-29T18:32Z
- There's no reason that sort of operation would damage an LED. The light emission mechanism is very different from the incandescent process used by lamps. -- mattb
@ 2007-03-29T17:28Z
- Just to drive mattb's point home: LEDs are perfectly happy running on pulsing current, even current of quite high frequencies. Most automobile LED tail-lights operate in exactly this fashion: they pulse-width modulate the current through the LEDs to produce the two intensities ("tail light" and "brake light" intensities) required. And if you sweep your eyes across the average LED tail light, you'll see the "break-up" that is caused by the operation of the PWM.
- 50 Hz flashing is definitely visible, especially if your eyes are in motion or you're observing the light with your peripheral vision. 60 Hz flashing is usually visible. LED Christmas lights are very noticeable because of their flickering. On the other hand, if the LED lamp has the two diodes in anti-parallel (so the overall light flashes at 100 or 120 Hz), nobody is likely to notice the flickering of the light except through stroboscopic effects.
- Yes indeed, though LEDs have a cutoff frequency just as all other junction devices. Especially when you consider the case of cycling between reverse bias (carrier depletion near the junction) and forward bias (heavy carrier injection at the junction), it takes some time for the charges to accumulate/deplete. This time is given by the carrier lifetime, and is the primary limit on high frequency operation. You can reduce this time constant, of course, but it has its trade-offs (and the easiest ways to reduce the lifetime are also very detrimental to the LED quantum efficiency, so I imagine very high speed LEDs are rare). -- mattb
@ 2007-03-30T18:34Z
- Sure, but the cutoff frequency for small LEDs is somewhere up in the megahertz range. I once did a design that pulsed a "bus activity" LED at about a megahertz (though integrated somewhat by the FCC-mandated RFI filtering) and it worked fine.
- Mid megahertz range sounds about right... In my mind I was thinking of certain types of diodes that are intentionally doped with deep level trap state creators to make them perform up to extremely high frequencies. I'd imagine a detailed data sheet for most LEDs would tell you what the cutoff frequency under various conditions is (obviously it's higher for small signal forward-bias-only operation).-- mattb
@ 2007-03-30T23:36Z
- Funny you mention data sheets ;-) -- I just happened to be looking at one the other day. The stated characteristics of this entirely-typical-appearing family of Hewlett-Packard/Avago Technologies LEDs varied quite a bit by color, but the ranges were as follows:
- Response time: 15 ns (Standard Red) - 3100 ns (Emerald Green)
- Capacitance: 4 pF (Orange) -- 100 pF (Standard Red)
- I wonder if that's junction or diffusion capacitance. The latter dominates under forward bias while the former dominates under reverse bias. I would guess it's diffusion capacitance. -- mattb
@ 2007-04-02T12:48Z
- Note that, in doing so, you may inadvertently have created a security hole. (Of course, in most cases an attacker with line-of-sight access to a network activity LED probably has other, more direct opportunities for intrusion, but people do occasionally do things like placing their computer in front of a window...) —Ilmari Karonen (talk) 22:55, 2 April 2007 (UTC)
- (Assuming you're speaking to me...) No, knowledge of exactly when the processor in question was doing instruction fetches would not be of any use in penetrating the system. This was in no way analogous to the possibility that you could remotely read the data from a modem (for example) by watching the data LEDs.
- Actually, if the processor was doing something like encryption, a clever cryptologist might well find the LED a most useful side channel. After all, if folks can extract useful information about encryption keys from such meager data as processor power consumption, I'm sure instruction fetch timings would be quite helpful. I do agree, though, that even if the chip in question was doing something like that, this would still be a much more difficult and limited attack than simply pointing a photodiode at a LED through a window and reading Ethernet packets off it. —Ilmari Karonen (talk) 16:30, 3 April 2007 (UTC)
- We're talking about a multiprogramming system here; it's doing quite a lot of things and not spending any substantial portion of its time doing encryption. This is quite different than the case where, for example, smart cards can be externally studied by measuring their current consumption. But if you feel like trying (to break into one of these systems by staring at the LED), please feel free ;-) !
Linear and squared heat dissipation
How come an overclocked processor's heat output increases linearly with the clock frequency and with the square of the voltage? The voltage part makes sense to me because of how power is defined in terms of current squared times resistance, but I do not understand the linear increase. —The preceding unsigned comment was added by 24.231.205.94 (talk) 05:36, 29 March 2007 (UTC).
- Assuming that transistors only consume power while changing modes (not really true, but let's assume), the more times you cause synchronous circuits to switch per second, the more power they dissipate. It's a linear correlation. -- mattb
@ 2007-03-29T06:01Z
- Current in the microprocessor is drawn in discrete impulses (for simplicity's sake, let's say one impulse per clock cycle). As the clock rate goes up, the voltage and the amount of charge per impulse (I_i) remain the same, so the current consumption is roughly I_i*F, linearly proportional to the frequency F.
- By comparison, raising the voltage also raises the current per impulse, so that has the classical V² relationship.
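The same point in the commonly quoted switched-capacitance form, P ≈ C·V²·f. A minimal sketch with an invented capacitance value:

    C_SWITCHED = 1e-9   # total switched capacitance per cycle, farads (made-up)

    def dynamic_power(vdd, freq_hz):
        return C_SWITCHED * vdd ** 2 * freq_hz   # watts

    base = dynamic_power(1.0, 2e9)
    print(dynamic_power(1.0, 4e9) / base)   # 2.0  -> linear in frequency
    print(dynamic_power(1.2, 2e9) / base)   # 1.44 -> quadratic in voltage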
piston analysis details..
The piston I have chosen is of a 4-stroke two-wheeler with about a 100cc engine using petrol as the fuel. Now can I know the magnitude and types of forces acting on the piston, and also the temperature in the piston surroundings under working conditions? Can you please help us.
waiting for the reply likhith and sagar
- See internal combustion engine, two-stroke engine. The primary force acting on it is pressure due to combustion of the gasoline (petrol). This must work against the drive-chain load attached to the engine (e.g. the weight and dynamics of the vehicle are connected through gearings, etc). A 100 cc engine has a power of ~ 10 - 15 horsepower; you can calculate force from power and velocity of the piston ( P = F V). Nimur 17:04, 29 March 2007 (UTC)
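As a rough illustration of the P = F·V estimate above, assuming invented but plausible figures for stroke and engine speed (the question gives neither):

    POWER_W = 10 * 745.7   # ~10 hp expressed in watts (assumed)
    STROKE_M = 0.05        # 50 mm stroke (assumed)
    RPM = 7500.0           # engine speed (assumed)

    mean_piston_speed = 2 * STROKE_M * RPM / 60.0   # m/s; two strokes per revolution
    mean_force = POWER_W / mean_piston_speed        # newtons, time-averaged
    print(mean_piston_speed, mean_force)            # ~12.5 m/s, ~600 N

The instantaneous force from peak combustion pressure is considerably higher than this time-averaged figure.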
- Didn't they say it was a four-stroke engine ? For the temp, I would suggest using a thermometer with a probe at the end of a wire. You can even mount the display on the handlebars, if you wish. StuRat 18:03, 29 March 2007 (UTC)
Breathylizers & Smoking
A cop friend of mine once commented that he could smell the liquor on my breath, then he went on to explain that smokers like me are easy to pick out when we've imbibed. My Q then is this... Would the fact that I do smoke cigarettes have any effect on the reading from a breathalyzer? It makes sense that we as smokers are easy to detect since we inhale more deeply than non-smokers, but is it necessarily indicative of our blood alcohol level? —The preceding unsigned comment was added by Rana sylvatica (talk • contribs) 10:26, 29 March 2007 (UTC).
- When using a breathalyzer, you are required to blow out for so long that it doesn't matter how deep you normally inhale. As for the effect of smoking on a breathalyzer, there is none to speak of. Again, the person taking the test has to inhale deeply and blow out for a long time. Your friend was likely pointing out that the smell of a smoker's breath causes non-smokers to quickly focus on the source of the odor - which would also quickly focus in on the smell of alcohol. In my opinion, it is not possible in any way for a smoker to comprehend this concept. I've tried to explain to smokers that the smell of smoke follows them everywhere they go, but they simply claim that they don't smell it. --Kainaw (talk) 13:41, 29 March 2007 (UTC)
- They're probably not lying. Smoking impairs the ability to identify odors, and can even cause anosmia. So smokers are relatively insensitive to smells, and are notoriously bad judges of how they themselves smell. - Nunh-huh 14:34, 29 March 2007 (UTC)
And of course if you are around a smell for a long time, you tend to stop noticing it :) HS7 17:09, 29 March 2007 (UTC)
water in a container for a long time
Why does water left in a container for a long time (a few weeks maybe) make the container surface slippery? What really causes it? roscoe_x 11:41, 29 March 2007 (UTC)
- Probably a microbial agent growing in the container. -- mattb
@ 2007-03-29T12:22Z
- Per Matt, see biofilm. TenOfAllTrades(talk) 17:41, 29 March 2007 (UTC)
This wouldn't happen with pure water in a sealed container. However, there will always be some impurities, like minerals and a few bacteria. That alone may be enough for them to grow. In home canning, the contents are always submerged in boiling water to sterilize them, before storage, to stop anything from growing in them. The good news is that the things which grow in a water container usually aren't dangerous, so very few, if any, people die each year from that cause. StuRat 17:59, 29 March 2007 (UTC)
- Hi StuRat. Could you share your source for that advice with us? I am concerned that (unsterilized) containers – especially those which are exposed to warmth and light – may well harbor harmful organisms. These may not kill individuals who have a healthy immune system, but may still cause disease and discomfort. TenOfAllTrades(talk) 18:10, 29 March 2007 (UTC)
- It's more in the form of never hearing of a case of drinkable water, after being placed in a sealed container, becoming toxic from bacterial action. If you've heard of any instances of this, I'd be glad to retract my statement. I have heard of numerous other forms of water contamination, such as E. coli contamination, in well water, from dairy farm run-off. StuRat 18:21, 29 March 2007 (UTC)
- I heard somone call into a radio show saying that they had gotten sick from bottled water they had been storing in the hot trunk of their car for some time. I can't remember the particular microorganism that was responsible. I also don't know if it was bottled tap water or commercially bottled. Chlorination probably goes a long way in preventing such a situation. -- Diletante
- The heat element makes me think it's more likely they suffered a reaction to chemicals in the plastic container which leached into the water. This happens slowly normally, but is greatly accelerated at high temps. StuRat 00:21, 31 March 2007 (UTC)
- If it's exposed to direct sunlight (as opposed to car-trunk heating) the UV from the sun will actually kill off the microbes. I believe there was a development program that taught people to bottle their water and put it in the sun before drinking it in areas where water quality was poor, and fewer people got sick as a result. I wish there was a Wikipedia article I could refer to on this, but there is an article on reuse of water bottles. —Pengo 01:12, 31 March 2007 (UTC)
How can I stop my macaw going broody this year?
It's getting to that time of year again where my hyacinth macaw starts going into nest building, mating and egg-laying mode. She gets a lot noisier, more aggressive and starts chewing holes in anything she can chew holes in, pulling wallpaper off, trying to squeeze into any gap she can squeeze into and refusing to come out, regurgitating food, etc. She also seems to take every time I stroke her as some sort of 'come on' (anyone that's owned a parrot knows what I mean with the tail lifting and 'that funny look' in the eyes). It's like her personality changes for two months of the year, like owning a completely different bird. She's laid infertile eggs in the past and I want to avoid it happening this time, as it's not good for her body. Is there any way to prevent her from going broody this year? Something I can add to her water, an injection, or anything really. Thanks, Chris. --84.64.224.134 14:09, 29 March 2007 (UTC)
- I wouldn't recommend messing with your bird's hormones. Why isn't it healthy for a bird to lay an egg? It's completely natural. − Twas Now ( talk • contribs • e-mail ) 15:19, 29 March 2007 (UTC)
- And miss out on a Macaw-egg omelette? Nimur 17:07, 29 March 2007 (UTC)
- Just don't let the bird see this, or you may pull back a bloody stub the next time you pet the bird with your finger. StuRat 17:52, 29 March 2007 (UTC)
- Sounds like a phone call to your local veterinarian would be a good plan at this point. Maybe they can 'fix' parrots like they do dogs and cats? I'm guessing from your description that it's not the egg-laying per se that's the problem so much as the weird behavioral traits. SteveBaker 17:40, 29 March 2007 (UTC)
- If birds work like many other animals, it's too late once they have gone into "heat". The hormonal changes have already caused permanent changes in their brain. "Fixing" the bird at this point should stop the actual egg-laying, but probably not all of the mating behavior. StuRat 17:52, 29 March 2007 (UTC)
- I don't think that you can have birds 'fixed'. I've kept birds on and off for years and I've never heard of anyone doing that. Anyway, when a female parrot goes into breeding mode, it's best to avoid petting her anywhere except the top of the head. Top of the head = social grooming of the kind that any parrot would do to any other friendly parrot, anywhere else (particularly the back, the undercarriage or under the wings) could and probably will be seen as a sexual advance by her and may precipitate mating behaviour and egg laying. --Kurt Shaped Box 21:40, 29 March 2007 (UTC)
It's mainly the behaviour that's the problem but laying infertile eggs is not good. Considering that she tends to just lay another one if you take the egg away, I'm worried about her becoming egg bound or suffering from bone problems due to low calcium. Childbirth is also natural but it has risks to the mother. --84.64.224.134 19:03, 29 March 2007 (UTC)
Descendants of a common ancestor
Is there any sort of formula for determining what percentage/number of the descendants of a Common Ancestor have X% or 1/X genetic similarity to the progenitor? I'd like to get the numbers (roughly) for 1/16 or 1/32 after several hundred years, assuming inbreeding is allowed eventually, of course. -- nae'blis 14:50, 29 March 2007 (UTC)
- Depends what you mean by "genetic similarity". If you look at the whole of an organism's DNA then all descendants will be very similar genetically to their ancestors. However, attention is usually focussed on those parts of the genotype where there can be viable variations between individuals - these variations are called alleles. The Hardy-Weinberg principle gives the stable long-term distribution of the varieties of a single allele across a population - it is usually stated in terms of a gene locus that can be occupied by either of two alleles, but it can be generalised to more than two alleles. There are certain assumptions built into this model which may or may not be reasonable in a given real-life scenario. Gandalf61 15:27, 29 March 2007 (UTC)
- In terms of pure gene count, the differences between individuals within one species are a microscopic fraction of the whole. So your X% is never going to get anywhere close to 1/32. The difference in genes between a typical human and a typical chimpanzee (of the same sex) is far less than 1%. Between humans of the same sex it's got to be WAY less than that. I'd be surprised if it was as much as 1/1000 for any two humans picked at random - and for people as closely related as you suggest it's gotta be way less than that. SteveBaker 16:12, 29 March 2007 (UTC)
- Hmmm, good point. I was thinking along the lines of royal blood or blood quantum laws, and shouldn't have used the actual word 'genetics'. I'm looking at lineage. -- nae'blis 16:14, 29 March 2007 (UTC)
- I think we need a better definition of what you are asking. If you mean what I think you mean - then (since I'm British and my wife is French) - my son is 1/2 French. If he marries a British girl, his kids could be said to be 1/4th French, and if they marry Brits their kids will be 1/8th French and the following generation 1/16th French. Hence the degree of "Frenchness" gets to your 1/16th level in 4 generations. If the average person has kids at age 25 (I have no clue whether that's a good average - but it's probably not far off and it makes the numbers come out nicely!) then it takes something like 100 years for that to happen. But this is an odd idea - "Frenchness" isn't a simple gene...even 'pure' Frenchness is probably not even measurable at a genetic level - it's just another one of those odd human customs to track such fractions of heritage. The same applies to "Royal blood" I guess. SteveBaker 17:20, 29 March 2007 (UTC)
- Sure, that makes sense. Okay, I'll be more concrete: A (fictional) family founds a kingdom without primogeniture, but with a tanistry-based system of rulership. All rulers and high-ranking officials in the kingdom must demonstrate that they are a scion of the original founder (once your lineage is good, it's merit-based). After a dozen generations or so, the family tree of course branches uncontrollably with more people than positions; they begin to institute rules for 'how close' you have to be to the founder's main bloodline. Inbreeding is the best way to do this, so I'm trying to determine if there's any way to discern that ratio, short of wide-scale genealogy. It doesn't have to be particularly feasible with current science; magic or advanced tech would work just as well, so long as the concept holds any validity at all. If I'm completely off-base or the only answer is genealogical analysis, so be it; this is for a story, after all, so it doesn't have to be exact. Thanks for your interest/replies so far. -- nae'blis 18:43, 29 March 2007 (UTC)
- I think only genealogical analysis will suffice, as multiple lines of descent will occur in the scenario you describe. You might be interested in the coefficient of inbreeding, though (or not: ) - Nunh-huh 21:43, 29 March 2007 (UTC)
- The simplest way is to look back up the generations of your family tree until you reach the generation at which the original 'king' was alive. Suppose you are third generation. You have two parents, four grandparents and eight greatgrandparents. If there has been in-breeding of some kind then some of those eight will turn out to be the same person...right? It might be that one of your mother's grandfathers was the original king - and also that he was also the grandfather of your father. Hence you have 'two copies' of that original king in your lineage - so you are more eligible than someone who only has the king in his lineage once. To be 100% fair and to cope with the problem of generational shift where the king might be both your grandfather on your mother's side and your great-grandfather on your father's side, you'd have to calculate the percentage of your parentage at each generation that is 'the king'. So having the king be your great-great-grandparent is worth 1 point for each time he appears in that generation, having him as your great-grandparent would be 2 points (per time), as grandparent, 4 points, as parent, 8 points. With such a scheme, the son or daughter of the king would always have the most possible points - unless someone was the son of the original king and their mother was the daughter of that same original king (Eeeewww!)....is that clear? SteveBaker 04:34, 30 March 2007 (UTC)
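A small sketch of the scoring scheme outlined above, where each appearance of the founder g generations back is worth 2^(MAX_GEN − g) points. The candidate ancestries below are hypothetical examples:

    MAX_GEN = 4   # deepest generation counted (great-great-grandparents)

    def founder_score(founder_generations):
        """founder_generations: depths (parent = 1) at which the founder appears."""
        return sum(2 ** (MAX_GEN - g) for g in founder_generations)

    print(founder_score([2, 3]))   # founder is a grandparent and a great-grandparent -> 6
    print(founder_score([4]))      # founder appears once, four generations back -> 1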
If it's for a story, you can get ideas from other stories, for example lots of relatives of kings &c can do a something that non relatives can't :) HS7 19:00, 29 March 2007 (UTC)
Or if magic is allowed, you could just try Magic :] HS7 20:53, 29 March 2007 (UTC)
- I understand that by relatedness, you mean the relatedness above the baseline (say 99.5 percent or whatever it may be). I would like to point out that you cannot actually determine that number of, or percentage of descendants unless you make an assumption like "each person will have X children which will survive to adulthood and have X children of their own". Regardless, a fine explanation for relatedness is provided in Richard Dawkins' The Selfish Gene (1989 Edition). This approach works forward, finding a percent relatedness from two given relatives; I think you are looking to work backward, finding a certain number of relatives from a given percent relatedness. This means you will have to manipulate the equations to solve for the unknown. Anyway, I will provide some necessary explanation as it is described in Dawkins' fantastic book, The Selfish Gene (I've added the numbers before each paragraph for convenience):
- 1. First identify all the common ancestors of A and B. For instance, the common ancestors of a pair of first cousins are their shared grandfather and grandmother […] we ignore all but the most recent common ancestors. In this sense, first cousins have only two common ancestors. If B is the lineal descendant of A, for instance his great grandson, then A himself is the 'common ancestor' we are looking for.
- 2. [Now] count the generation distance as follows. Starting at A, climb up the family tree until you hit a common ancestor, and then climb down again to B. The total number of steps […] is the generation distance. For instance, if A is B's uncle, the generation distance is 3.
- 3. [Then] multiply ½ by itself for each step of the generation distance. […] If the generation distance via a particular ancestor is equal to g steps, the portion of relatedness due to that ancestor is (½)^g.
- 4. [However, if A and B] have more than one common ancestor we have to […] multiply by the number of [common] ancestors. First cousins, for instance, have two common ancestors [their shared grandparents], and the generation distance via each one is 4. Therefore their relatedness is 2 × (½)^4 = ⅛. If A is B's great grandchild, the generation distance is 3 and the number of common 'ancestors' is 1 (B himself).
- — Richard Dawkins. The Selfish Gene 1989 Ed. (pp. 91-2).
- I said you will have to manipulate the equations. Whereas these equations tell you Rel = k × (½)^g, given k and g, you want to find g = log2(k/Rel) given Rel and k. (Let Rel represent 'percent relatedness', k represent 'number of common ancestors', and g represent 'generation distance' as it is defined by Dawkins). The article on genetic distance may help, and it offers a different approach (maybe more comprehensive, but I only skimmed). Hope this helps! − Twas Now ( talk • contribs • e-mail ) 02:44, 30 March 2007 (UTC)
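A minimal sketch of that bookkeeping, Rel = k·(½)^g and its rearrangement g = log2(k/Rel):

    import math

    def relatedness(k, g):
        """k most-recent common ancestors, generation distance g."""
        return k * 0.5 ** g

    def generation_distance(rel, k):
        return math.log2(k / rel)

    print(relatedness(2, 4))              # first cousins: 0.125, i.e. 1/8
    print(generation_distance(1 / 16, 1)) # 4.0 steps back to one common ancestor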
- Thank you all; you've given me a lot to work on. -- nae'blis 21:50, 30 March 2007 (UTC)
Viruses
Currently, Model_organism#Viruses lists viruses under the subsection "Important model organisms". But are viruses organisms? I didn't know whether to ask this question on the Wiki help desk or here, since I want to tag the section with a template explaining that the classification of viruses is controversial, but don't know what to use. But if I ask it there, I won't get the right people I want, as I want those with knowledge in science to answer. Or maybe I'm just confused? Thanks.--Ķĩřβȳ♥♥♥ŤįɱéØ 17:34, 29 March 2007 (UTC)
- There is debate on whether viruses, especially retroviruses, are alive. I would interpret the word "organism" to mean "a living or nonliving entity which is organized", in which case it qualifies. In any case, it's probably "close enough" to being alive, having evolved from things which were alive, to be included in that category. StuRat 17:44, 29 March 2007 (UTC)
- Yes, they are. StuRat 18:16, 29 March 2007 (UTC)
- From what I've gathered, viruses are mostly considered not to be alive, although there is a minority that say they are, and both sides understand pretty well that it is not clear. [Mαc Δαvιs] (How's my driving?) ❖ 18:45, 29 March 2007 (UTC)
If you have a totally naturalistic world-view, then viruses would certainly be classed under "things that can reproduce themselves and pass on information to descendants", and whether they're any more alive than you are is strictly a linguistic issue similar to drawing the line between a rock and a boulder. --TotoBaggins 20:10, 29 March 2007 (UTC)
Separately, I don't believe tagging the page as controversial would be appropriate here, since that's more for when editors can't agree, and most editors can agree that the borderline of "life" is fuzzy and controversial. Just because the subject is controversial, doesn't mean the article is. --TotoBaggins 20:13, 29 March 2007 (UTC)
- In any case the viruses listed there are often used as classic examples of a "model organism", whether or not they are considered "life". An article on model organisms which did not mention TMV would be ridiculous — TMV is one of the classic model organisms, and to call it anything else out of pedantic insistence would be counter-productive to say the least. --24.147.86.187 21:37, 29 March 2007 (UTC)
- Basically "alive" is a concept that doesn't particularly help anyone understand viruses. People who worry about whether viruses are alive or not are concerned about their understanding of what "alive" means, not what viruses are. - Nunh-huh 21:40, 29 March 2007 (UTC)
- Thank you Nunh-huh - you get the prize for "the wisest thing I've heard today"! Asking whether a virus is alive is no different than asking whether Pluto is a planet or whether a pickup truck is an automobile. "Alive" is just an arbitrary word - which like so many others has vague borders to its definition. Viruses are a bit more alive than (say) a crystal (which has organisation and if "fed" will grow) and a bit less alive than (say) a bacterium (which has a cell wall and can reproduce without the aid of some other organism). A virus can't reproduce by itself - it has no cellular machinery of its own and needs a host cell to make copies of itself. I recall cultivating the "Tobacco mosaic virus" in school - it's an infective parasitic agent that infects Tobacco plants just like a bacterial disease - but you can extract the virus and make really pretty crystals out of it that you can keep in a jar where they will last essentially forever until you get them back into a Tobacco plant. When it's an infective agent, it's hard to deny that it's alive...when it's a bunch of crystals - it's hard to think of it as an organism rather than a reasonably complex (but not outrageous) organic chemical. SteveBaker 22:54, 2 April 2007 (UTC)
- I wouldn't use "can't reproduce by itself" as reason to disqualify something from being alive. After all, that's true of many parasites, be they viruses, bacteria, tapeworms, or lawyers. :-) StuRat 01:18, 3 April 2007 (UTC)
How does microwave crisper paper work?
How does that silvery paper work in microwaves to toast bread and allow food to get crispy? --24.249.108.133 18:02, 29 March 2007 (UTC)
- I can't believe we don't have an article for Crisping sleeve! In lieu of that, I offer the following, from our article Microwave oven - "Some microwavable packages (notably pies) may contain ceramic or aluminum-flake containing materials which are designed to absorb microwaves and re-radiate them as infrared heat which aids in baking or crust preparation. Such ceramic patches affixed to cardboard are positioned next to the food, and are typically smokey blue or gray in color, usually making them easily identifiable. Microwavable cardboard packaging may also contain overhead ceramic patches which function in the same way. The technical term for such a microwave-absorbing patch is a susceptor." --LarryMac 19:03, 29 March 2007 (UTC)
- Hm, I wonder about the accuracy of the statement in that article as regards infrared. Is the material really facilitating a quantum photon absorption/emission process (plausible; it happens in some solids), or is it simply absorbing the microwave energy and heating up, transferring that heat mostly by conduction/convection? If the latter is the case, this wouldn't be the first time someone has confused infrared light with heat. -- mattb
@ 2007-03-29T19:11Z
- Good catch; the article should probably be fixed. --LarryMac 19:14, 29 March 2007 (UTC)
- I'm not sure if it's an error or not, just something to be suspicious of. -- mattb
@ 2007-03-29T22:16Z
Glycolipid & Electric Microscopes
I am very uncertain about the structure of glycolipids. Please provide an example (preferably a diagram) of a glycolipid, and explain the parts of the structure of the glycolipid. I also need to know what makes glycolipid a "lipid". If you can, please tell me if there is anything such as an "electric microscope" (apparently I am not referring to electron microscopes). Thank you very much. --Freiddie 18:25, 29 March 2007 (UTC)
A glycolipid is a lipid with a sugar attached :) But this is an encyclopedia, you can search for answers yourself on it, and in my experience very few people are likely to tell you any more than that :( HS7 18:55, 29 March 2007 (UTC)
- Follow the links from Glycolipid for structural examples. What makes a glycolipid a lipid is the fact that it is derived from one, see the article. The term 'lipid' comprises several structurally different molecules that are relatively insoluble in water, and rich in carbon and hydrogen. Sugars, on the other hand, are rich also in oxygen, and soluble in water. When you join the two together, you get a molecule with a hydrophobic end and a hydrophilic end. Glycolipids are an important part of the cell membrane, see the image in that article, which shows their location in the membrane. The hydrophilic end points towards the aqueous exterior, while the hydrophobic part points towards the central, 'fatty' part of the membrane. Googling for "electric microscope" yields images mostly of ordinary light microscopes. I suppose they are 'electric' only in the sense that they have a light bulb, which requires electricity. --NorwegianBlue talk 20:48, 29 March 2007 (UTC)
- Thank you all.--Freiddie 10:00, 31 March 2007 (UTC)
assignment help--speed of light/color question
Trying to find the answer before my student son does...any suggestions or the right answer to this question? Someone out there must want a dad to look good!!!
Speed of light is 70mph. You are approaching a light at 20mph. The police officer at the light sees the light as red. What color do you see? Wavelength of red is 700nm (nanometers)...yellow is 600nm... green is 500nm...blue is 400nm —The preceding unsigned comment was added by Harryprguy (talk • contribs) 20:00, 29 March 2007 (UTC).
??? The Speed of light is approximately 186,000 miles per second. Is this supposed to be a Redshift problem? In that case, approaching a light at 2/7ths the speed of light will cause the light to be blueshifted. I'm not sure to what extent, however.--Ķĩřβȳ♥♥♥ŤįɱéØ 20:18, 29 March 2007 (UTC)
The Speed of light is around a few hundred miles per hour in some places, so it could be 70mph in others :] HS7 20:37, 29 March 2007 (UTC)
- Kirbytime, that's the speed of light in a vacuum. The slowest speed of light I can recall is 17 m/s, which is around 38 mph. — Knowledge Seeker দ 23:36, 29 March 2007 (UTC)
the equation is v=Δλc/λ, which can be rearranged to give Δλ as vλ/c (I think) with v as 20mph and c as 70mph and λ as 7*10^-7m :) HS7 20:41, 29 March 2007 (UTC)
But this gives me a Δλ of 31, and therefore blueshifted light with λ of 669, which looks a lot too high, so I think someone with some idea of the subject should check my replies :( HS7 20:48, 29 March 2007 (UTC)
That looks prettier >_>--Ķĩřβȳ♥♥♥ŤįɱéØ 20:52, 29 March 2007 (UTC)
- Use the relativistic redshift formula from the article: λ_observed = λ_source × √((1 − v/c)/(1 + v/c)) for an approaching source. --Tardis 21:15, 29 March 2007 (UTC)
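Plugging the question's numbers into that formula (taking the problem's fictional 70 mph as c), a minimal worked sketch:

    import math

    c = 70.0                 # the problem's fictional speed of light, mph
    v = 20.0                 # approach speed, mph
    wavelength_red = 700.0   # nm, as emitted at the traffic light

    beta = v / c
    observed = wavelength_red * math.sqrt((1 - beta) / (1 + beta))
    print(observed)   # ~521.7 nm: between green (500 nm) and yellow (600 nm), nearer green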
Also, wouldn't there be an error in measurement since the length of the wavelength (in nanometers) isn't reduced as well? Is your son taking a physics course in college?--Ķĩřβȳ♥♥♥ŤįɱéØ 21:23, 29 March 2007 (UTC)
(Off topic) My science teacher has a red bumper sticker hanging in his classroom that reads "If this sticker is blue you're driving too fast" - AMP'd 15:20, 1 April 2007 (UTC)
That wouldn't work, as it would be blueshifted if the teacher were reversing, and it would still look red if the teacher were also driving too fast.
LM3914/5/6 IC
Hey, I'm not sure whether to ask this here or in the Computing section, but I thought I'd give it a shot. I'm looking to make a circuit using an LM3914, LM3915 or LM3916 IC, a dot/bar display driver. The circuit needs to deal with inputs of 0-3 V. However, I'm struggling to find a good guide on the web for how to calibrate the intervals etc., as my electronics knowledge is fairly limited. I'd greatly appreciate a good guide or help on what to do!
Many thanks! --Fadders 20:31, 29 March 2007 (UTC)
- Can you give me more detail on what you're trying to do? Or at least, the intervals you're speaking about? Glancing at the LM3914 datasheet, I see the steps are linear, meaning that if you use 3 volts as the Vref, then each step will be 0.3 V, assuming 10 steps. In order to achieve this, since the chip has a regulated 1.5 V Vref, you have to create a circuit where, if you input 3 V, a voltage divider will step it down to 1.5 V. This can be done simply with two resistors of the same value in series, taking the voltage across one of them, so that when 3 V is put across the pair, 1.5 V appears across each. Tell me if you need anything else clarified. --Wirbelwindヴィルヴェルヴィント (talk) 05:03, 30 March 2007 (UTC)
- That's making more sense to me than the datasheet did! I'm basically wanting a range of 0-3 V, with each bar representing 0.3 V. Could you give me advice as to what goes on which pin?
Thanks! --Fadders 16:42, 30 March 2007 (UTC)
- The easiest way is to connect a constant 3 V to RHI. Failing that, you can use the RefOut pin to generate a more or less constant 3 V. This isn't exactly 3 V, but connect a 1 kΩ resistor to RefOut (R1 on the datasheet), and then a 2 kΩ resistor (R2 on the datasheet) in series with it. Then connect that to ground. Connect RefAdj to where R1 and R2 meet. Now connect RHI to RefOut. This should give you approximately 3 V on RHI. Now you just need to connect VCC, the diodes, and the source. --Wirbelwindヴィルヴェルヴィント (talk) 05:40, 31 March 2007 (UTC)
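To make the step arithmetic concrete, here is a small Python sketch (an illustration of the linear 0.3 V steps described above, not anything taken from the datasheet; it assumes the LM3914's linear scale, whereas the LM3915 and LM3916 use logarithmic and VU scales instead).

```python
# Threshold arithmetic for a 10-step bar display with RLO grounded and RHI at 3 V:
# the internal divider places the comparator thresholds at equal 0.3 V steps.
V_RLO = 0.0   # volts at the low end of the internal divider
V_RHI = 3.0   # volts at the high end (full-scale input)
STEPS = 10    # the LM3914/15/16 drive ten LEDs

thresholds = [V_RLO + (V_RHI - V_RLO) * n / STEPS for n in range(1, STEPS + 1)]

def leds_lit(v_in):
    """Number of LEDs lit in bar mode for a given input voltage."""
    return sum(v_in >= t for t in thresholds)

print(thresholds)                     # ten thresholds from 0.3 V up to 3.0 V
for v in (0.0, 0.45, 1.5, 2.95, 3.0):
    print(f"{v:.2f} V -> {leds_lit(v)} LEDs")
```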
Does Einstein have a bacon number?
Einstein must have been featured in a lot of old newsreels with various other famous people. Does he have a Bacon number? I couldn't find anything besides his Erdős number of 2. —dgiestc
Gott spielt nicht dieses Kevin Bacon Spiel. --TotoBaggins 21:48, 29 March 2007 (UTC)
- Roughly translatable as "God doesn't shoot craps with Kevin Bacon", I presume? alteripse 23:56, 29 March 2007 (UTC)
- The Oracle of Bacon gives Einstein a number of infinity[1], meaning that it can't find any link. Laïka 07:22, 30 March 2007 (UTC)
Radiation sickness
How can it kill immediately? Do so many ions form that they completely remodel the cells? Bastard Soap 22:01, 29 March 2007 (UTC)
- The ionizing radiation article has a pretty good discussion of the effects on animals. -- mattb
@ 2007-03-29T22:14Z
- Short answer is that radiation sickness does not kill "immediately", although our article on radiation poisoning says that the highest levels of acute radiation exposure can cause coma in seconds or minutes and death within a few hours due to total collapse of the nervous system. Gandalf61 11:14, 30 March 2007 (UTC)
I know this is a little late, but check out the Neutron bomb article; it has something about instant death from radiation. More precise details here [2]. Maverick423 13:58, 3 April 2007 (UTC)
Allergies and Plasmapheresis
Theoretically, could allergies be treated temporarily by plasmapheresis, since it removes antibodies from the blood and replaces them with sterile fluid?
This is a medical question, but I think it is fair game.
~~thanks ip address whatever~~ —The preceding unsigned comment was added by 139.225.242.164 (talk) 22:50, 29 March 2007 (UTC).
- No, because it doesn't completely eliminate them and they can be produced again. However, having said that, there are many case reports of other types of antibody-mediated diseases in a "crisis stage" (e.g., thyroid storm in Graves' disease) being substantially and quickly improved with plasmapheresis. alteripse 23:54, 29 March 2007 (UTC)
Half-life
Stupid question, but does half-life mean that if something has a half-life of, say, 5 days, the product is entirely gone after 10 days, or is half of the remaining half gone - 75% after 10 days? Jack Daw 23:24, 29 March 2007 (UTC)
- It's not stupid at all. [Assuming it has a constant half-life,] it means that 1/4 will remain after 10 days, 1/8 after 15 days, 1/16 after 20 days, and so on. Half-life has more. — Knowledge Seeker দ 23:31, 29 March 2007 (UTC)
- Knowledge Seeker is right about the strict meaning of the term, especially when used with respect to an idealized, mathematical model. In such a model, the remaining amount never reaches zero but approaches it asymptotically. Radioactive decay is one of the original uses of this concept, and it approximates the ideal. However, in many biological contexts (e.g., drug half-life or effect half-life) the term is used whenever the initial degradation kinetics resemble an ideal half-life curve, but complete elimination (at least to practical unmeasurability) does occur in a finite time. Not so much nitpicking as elaborating. alteripse 23:50, 29 March 2007 (UTC)
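For the ideal model both answers describe, the surviving fraction after time t is (1/2)^(t/T), where T is the half-life. A minimal Python sketch of that arithmetic, using the question's 5-day figure purely as an example:

```python
# Ideal (exponential) decay: remaining fraction = (1/2) ** (t / half_life).
def remaining_fraction(t, half_life):
    return 0.5 ** (t / half_life)

half_life = 5.0  # days, as in the question
for days in (5, 10, 15, 20, 50):
    frac = remaining_fraction(days, half_life)
    print(f"after {days:3d} days: {frac:.6f} of the original remains")
# After 10 days, 0.25 remains (75% gone); after 20 days, 0.0625 remains.
# In this idealized model the fraction approaches zero but never reaches it.
```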
Great question, I always wondered this too. So technically, if something radioactive with a half-life of 1 year was left alone for 1000 years, there would still be a trace of it left, right? This was so confusing in science classes. Oh, and also to add to this: let's say a barrel was filled to the top with the same substance and left for one year; when you opened it again, would it be half full, or does that work differently? Maverick423 01:06, 30 March 2007 (UTC)
- Yeah, there would be traces left; that's why there are still traces of radioactive material on Earth even though no new radioactive material has formed for millions of years. As for the barrel, that might happen if the whole substance were subject to decay, but nothing is ever that pure; if it were, you would instantly have a nuclear explosion on your hands - this is how an atom bomb works. In reality, most radioactive materials only have a very small component that is radioactive, so even if you left it for a half-life or two or three, you wouldn't actually notice any physical difference in the material. For example, depleted uranium looks pretty similar to enriched uranium. Vespine
- What a misleading answer. Atomic bombs do not work by radioactive decay at all; they work by nuclear fission. There is a big difference there. The difference between enriched uranium and depleted uranium has nothing to do with decay (enriched does not decay into depleted, or vice versa). A barrel full of any old radioactive material, no matter how radioactive, would never behave like an atomic bomb unless it were one of the specific materials that could undergo fission (and even then it would not likely perform quite like an atomic bomb; it would probably fizzle). --24.147.86.187 13:45, 30 March 2007 (UTC)
- Just to clarify: enriched uranium is uranium enriched in the isotope with a total of 235 protons and neutrons (U-235), and only this isotope of uranium is usable for nuclear weapons. Depleted uranium is mostly the isotope with 238 protons and neutrons; it's still uranium and still radioactive, only with 3 more neutrons than U-235. --Bowlhover 03:53, 1 April 2007 (UTC)
Hey, that's interesting to know, so the old stuff glows the same as the new stuff =D Thanks much, Vespine. Maverick423 01:52, 30 March 2007 (UTC)
- haha :) Sorry, I don't know if you are kidding or not, but enriched uranium and most radioactive materials don't actually glow. There certainly are radioactive things that glow, but that's usually because something else has been done to them. Vespine 04:22, 30 March 2007 (UTC)
- Think of it like this - imagine every atom in the radioactive material was a penny. If the half-life is (say) 1 day, then every day you toss all of the coins and remove all of the ones that come up heads. At the first toss you have billions and billions of coins, and you'll be removing almost exactly half of them each day - as the number of coins gets smaller, the fact that things are in fact happening at random starts to become more and more obvious. Once you are down to (say) 8 coins (atoms) that are still showing tails (still radioactive), you'll toss them and the odds are you won't get exactly four heads - you might get more or you might get less - so the number of coins won't exactly halve each day (it never did EXACTLY halve, but there were so many that the difference was negligible). Once you get down to the last coin, it might show up heads on the next toss - or you might (by chance) have to toss it half a dozen more times until the atom eventually decays (and the coin comes up heads). So for large quantities of atoms (and any amount of substance that's enough to measure contains an ungodly number of atoms), it'll decay by almost exactly half every half-life - but when there is almost no radioactivity left, it might completely decay quickly, or it might take a very long time to completely decay... it's just random at that point. SteveBaker 04:20, 30 March 2007 (UTC)
- Furthermore, you seem to be thinking of a barrel full of 'stuff' decaying down so that after one half-life has gone by, the barrel is half empty. That's not how it works.
- (Indeed not. So instead of thinking about flipping bunches of coins and incrementally removing the ones that come up heads, imagine painting them black and leaving them in the pile, and thereafter only re-flipping the coins that haven't yet been painted black. —Steve Summit (talk) 00:47, 31 March 2007 (UTC))
- The radioactive atoms don't disappear - they decay into something else. That might be something completely non-radioactive, or it might be something that's actually MORE radioactive with a shorter half-life, or something that's less radioactive with a still longer half-life. So you might start with a barrel full of one thing - and after a half-life has elapsed have a barrel that's still full - but it's a 50/50 mixture of the original substance and (say) inert lead or some non-radioactive isotope. The barrel won't ever gradually empty. SteveBaker 04:20, 30 March 2007 (UTC)
- Unless, of course, the decay product is a gas. --Bowlhover 03:53, 1 April 2007 (UTC)
- Although there might actually be a tiny reduction of mass, due to some conversion of mass into energy which leaves the barrel as heat or some form of radiation. StuRat 00:14, 31 March 2007 (UTC)
- Oh, it'd be more significant than that, wouldn't it? All those alpha and beta particles and whatnot being emitted are carrying away actual rest mass, too, not just energy. —Steve Summit (talk) 00:47, 31 March 2007 (UTC)
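SteveBaker's coin analogy is easy to simulate. A rough Python sketch (illustrative numbers only): each surviving atom "flips a coin" once per half-life and survives with probability 1/2, so a large sample tracks the ideal halving very closely while the last few atoms decay at essentially random times.

```python
import random

def simulate(n_atoms, max_half_lives=60, seed=1):
    """Each surviving atom survives a half-life with probability 1/2 (a coin toss)."""
    random.seed(seed)
    survivors = n_atoms
    history = [survivors]
    for _ in range(max_half_lives):
        if survivors == 0:
            break
        survivors = sum(1 for _ in range(survivors) if random.random() < 0.5)
        history.append(survivors)
    return history

print(simulate(1_000_000)[:6])  # hugs 1e6, 5e5, 2.5e5, ... very closely
print(simulate(8))              # a handful of atoms wanders; the last one is pure chance
```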
Wow, didn't expect so many answers, thanks all. Related question, equally stupid: If there's a set half-life for the amount of something, is the half-life half of the original half-life if the amount is cut in half? (that's a lot of halves) Jack Daw 19:10, 31 March 2007 (UTC)
- The half-life isn't for any particular amount of something, it's for the something. We don't say, "One kilogram of Uranium-235 has a half-life of 700 million years." We just say, "Uranium-235 has a half-life of 700 million years". If you start with one kilogram of U-235, after 700 million years you'll have half a kilogram of U-235 and half a kilogram of Thorium-231 and other stuff. If you start with half a kilogram of U-235, after 700 million years you'll have a quarter kilogram of U-235 and a quarter kilogram of other stuff. —Steve Summit (talk) 20:10, 31 March 2007 (UTC)
- Whoops! Don't try this at home! U-235 is, of course, fissile. I'm not sure what its critical mass is, but if it's a kilogram or less, your kilogram sample would not last 700 million years so you could watch half of it turn into something else, after all. —Steve Summit (talk) 01:47, 2 April 2007 (UTC)
- Indeed, the half-life is for the particular isotope species, and does not depend on mass, for the most part. That principle is what allows radiometric dating to work. If you have something with a sufficiently long half-life (40K comes to mind), you can measure the amount of the radioactive isotope and its byproduct, and date rocks. If you want to date old bones, you probably want to use 14C, which has a half-life of 5700 years. Titoxd(?!? - cool stuff) 23:21, 31 March 2007 (UTC)
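As a simplified illustration of the dating arithmetic (ignoring the calibration and measurement issues that real radiometric dating has to deal with), the age follows directly from the fraction of the parent isotope that remains:

```python
import math

def age_from_fraction(remaining, half_life):
    """Solve remaining = (1/2) ** (t / half_life) for t."""
    return half_life * math.log2(1.0 / remaining)

C14_HALF_LIFE = 5700  # years, the figure quoted above
for frac in (0.5, 0.25, 0.1):
    age = age_from_fraction(frac, C14_HALF_LIFE)
    print(f"{frac:.0%} of the original 14C left -> roughly {age:,.0f} years old")
```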
Just to clarify things here... does everyone agree that if you have 10,000 pennies and you throw them over a basketball court, then pick up the ones that are tails and put them into a barrel, and then repeat this process every day, eventually all of the pennies will end up in the barrel? Nebraska bob 14:49, 2 April 2007 (UTC)
- I don't agree, since how can a coin be a tail? Or what if someone else (a passerby) takes a penny, believing it a lucky charm? Other than that, I think I agree. − Twas Now ( talk • contribs • e-mail ) 14:59, 2 April 2007 (UTC)
- Almost surely. The same argument applies to atoms, but there are so many of them that it takes a very large number of half-lives before we expect .5 atoms to remain (approximately when the last one should decay). --Tardis 21:58, 2 April 2007 (UTC)
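Tardis's point can be made concrete: if you start with N atoms (or pennies), the expected count falls to about half an atom after log2(2N) half-lives, which is a surprisingly small number even for astronomically many atoms. A quick sketch:

```python
import math

def half_lives_until_half_an_atom(n_atoms):
    """Solve n_atoms * 0.5 ** k == 0.5 for k."""
    return math.log2(2 * n_atoms)

avogadro = 6.022e23
print(f"{half_lives_until_half_an_atom(10_000):.0f} tosses for 10,000 pennies")        # ~14
print(f"{half_lives_until_half_an_atom(avogadro):.0f} half-lives for a mole of atoms")  # ~80
```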