Wikipedia:Reference desk/Archives/Science/2010 March 15
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
March 15
edit"Survival of the fittest"
Can we correct a widespread misunderstanding? If I am right (and it should be checked by an expert, which I am not), the Victorian English in which Darwin wrote this phrase has a different meaning from the one most commonly understood in modern English. "Fit" meant appropriate, not necessarily strong and energetic! What was "fittest", then, was what made the best fit, what was most appropriate. I worry that widespread misunderstanding of "survival of the fittest" has wrongly legitimised competitive behaviour, winner-takes-all, and other unhealthy characteristics of modern western society. Can Wikipedia help to shift that misunderstanding, as we move to an ever-more-urgent need for global cooperation on an unprecedented scale? 124.176.69.92 (talk) 01:06, 15 March 2010 (UTC)
- I believe you are referring to Social Darwinism ("competitive behaviour, winner-takes-all, and other unhealthy characteristics of modern western society"), rather than to biological Darwinism. Intelligentsium 01:13, 15 March 2010 (UTC)
- (edit conflict) Yes, I was going to refer you to Social Darwinism as well. You may also want to look at our article on the expression itself, Survival of the fittest, where you will find that Darwin was not the first to use the expression, having picked it up from Herbert Spencer. Deor (talk) 01:18, 15 March 2010 (UTC)
- You are right that Darwin (via Spencer) did not mean "fittest" to mean "most strong" or "most vigorous" but rather to mean "most appropriate" or "most adapted." --Mr.98 (talk) 01:19, 15 March 2010 (UTC)
- I.e., Darwin's use of "fit" is in the sense of Adapted to the requirements of the case; appropriate, suitable. ... LME. Biology. Possessing or conferring the ability to survive and reproduce in a particular environment. LME. This meaning of the word is older (LME = 1350-1469) than the meaning In a suitable state for strenuous physical activity; gen. in good health. colloq. E18 (E18 = 1700-1729). Source: SOED. Mitch Ames (talk) 01:30, 15 March 2010 (UTC)
- Fit certainly doesn't mean the most energetic. Consider the sloth. StuRat (talk) 02:31, 15 March 2010 (UTC)
- There are a number of reasons that explain why people assume "fittest" means strongest, but I seriously doubt that it has anything to do with changes in the word's semantic content between Darwin's time and now. FWIW, Dawkins has advocated (though I'm not sure how strongly) changing the phrase to "survival of the fit enough", since resources are not necessarily scarce enough that only a handful of individuals survive; even when it is the case that just a few of the least "fit" do not pass their genes on, the same mechanism of selective pressures effecting diachronic genetic changes is still in effect. — Ƶ§œš¹ [aɪm ˈfɹ̠ˤʷɛ̃ɾ̃ˡi] 06:14, 15 March 2010 (UTC)
- It's also worth saying that modern evolutionary thought does not hang on the things that Darwin said. If it were discovered that Darwin's writings have been completely misinterpreted, all that would mean would be that Darwin was wrong - not that modern evolutionary theory is wrong. Hence, if this is true, then it's at best an historical curiosity. This is something that those creationists and intelligent designists would do well to bear in mind as they crawl through Darwin's writings in an effort to discredit him. SteveBaker (talk) 12:36, 15 March 2010 (UTC)
- It's true, but I think the poster's original point was about popularization of the theory, in which case appeals to the Great Genius of the Past hold a lot of sway, even if they don't in modern scientific circles. (Though even there, scientists do love their heroes.) --Mr.98 (talk) 15:12, 15 March 2010 (UTC)
- Certainly "Survival of the organism that fits best into it's surroundings" is a better line than "Survival of the organism that can run fastest and pump the most iron". There are many cases where a less energetic organism can survive with less food or other resources than a stronger one and thereby out-evolve it. Animals who live in dark caves, for example, evolve to lose their eyes - this is clearly "less fit" in terms of "healthyness" (for want of a better word) than an organism with fully functioning eyes - but a much better "fit" into the dark environment of the cave. Darwin wins the prize for being one of the first people to realize that inheritance plus mutation produces speciation. But that doesn't mean that we have to hang onto his every word as "truth" - he was a clever guy - but he didn't know a lot of things that we now understand (how inheritance works for one!). There are many cases in science where we award "naming rights" for a theory to someone who didn't get it 100% right. That's OK. What we're memorializing is the initial insight - not the precise details. Whether Darwin intended a particular meaning of "fitness" rather than another is something that's really only of interest to historians. If it turned out that it was a typo and he had really intended to say "Survival of the tallest" it would make precisely zero difference to modern science. SteveBaker (talk) 16:24, 15 March 2010 (UTC)
- I think another thing to keep in mind is that common examples used to illustrate the concept are with animals evolving to run faster, reach higher, and get smarter. The idea that a slow gait, diminutive size, or a smaller brain might result from environmental changes is commonly called "de-evolution." — Ƶ§œš¹ [aɪm ˈfɹ̠ˤʷɛ̃ɾ̃ˡi] 09:36, 16 March 2010 (UTC)
- The idea that there's devolution (a biological fallacy)/backward evolution is of course common among creationists and those with an extremely poor understanding of evolution, and it's a common fallacy that evolution always means something is getting more powerful, faster, etc. But I'm not sure whether even those with such an understanding, barring perhaps creationists, are going to think of all such changes as de-evolution; many would recognise that certain traits have evolved to be "less powerful" in organisms that they consider "more advanced", e.g. surely many wouldn't be surprised that some of our ancestors were likely stronger than us (particularly in upper body strength). I don't know if they'll say that this is "devolution". The misunderstanding is more at the species level: thinking that humans are "more advanced" than other apes, and that if humans evolved to be more "ape-like" that's somehow reverse evolution. Nil Einne (talk) 13:01, 16 March 2010 (UTC)
That science doesn't participate in hero-worship doesn't change the facts that 1) the explanation and notion behind the phrase "survival of the fittest" is still considered correct in modern biology and 2)
- Oops, I guess I shouldn't post too late at night. — Ƶ§œš¹ [aɪm ˈfɹ̠ˤʷɛ̃ɾ̃ˡi] 19:20, 16 March 2010 (UTC)
- Hmm - I guess our previous poster "de-evolved" before managing to finish that thought! Anyway - the phrase "survival of the fittest" is only ever used as a shorthand for a full explanation of the theory which properly takes several pages of closely typed text. People working in fields that depend on the theory of evolution are very clear on what the theory really means and I can't imagine any of them claiming that the science rests on that one, somewhat fuzzy, phrase.
- The phrase itself isn't even all that useful - it boils down to the tautology: "Survival of those most able to survive" and I don't actually believe that many actual practitioners in the field really DO use it extensively. That shorthand phrase does not in any way diminish what is actually understood in terms of genetics, speciation, etc.
- Sadly, the phrase does give the various ID and creationist nut-jobs something to at least try to use as ammunition against what is actually a pretty unassailable theory. Sadly, there are a lot of ignorant and gullible people in the world - and if the ID proponents can "prove that Darwin was wrong" - that would be taken by a majority of the ill-informed to suggest that evolution is wrong - when nothing could be further from the truth.
- Evolution is a very strong theory:
- Basic genetics can easily explain WHY it works, and we have direct evidence proving that it DOES work. (It is unusual for a scientific theory to be able to explain the "WHY" part - we don't know "WHY" relativity or quantum theories work, for example - only that they do work).
- There is direct, long-term evidence for it in the form of fossils.
- It explains a lot of otherwise inexplicable things (my current favorite being the Recurrent laryngeal nerve - especially in the Giraffe).
- It makes testable predictions - such as the relatedness of DNA in similar species. We've long known that humans and chimpanzees share a common ancestor - and lo and behold, when we test that, it turns out that our DNA is 99% identical - more identical than with any other species that we've tested - just as evolutionary theory predicts.
- It's been observed actually happening in modern nature - for example: antibiotic-resistant bacteria in hospitals, adaptation in the Peppered moth, warfarin-resistant rats, myxomatosis-resistant rabbits.
- It explains things about ourselves that would otherwise be tough to explain any other way (e.g. lactose tolerance/intolerance - and why sickle-cell anaemia still persists in the population).
- It's even been reproduced experimentally - a great example being the E. coli long-term evolution experiment.
- By any rational measure, it's a truly great theory. Changing interpretations of the phrase "Survival of the fittest" are so utterly irrelevant to that enormous body of proof as to be laughable. SteveBaker (talk) 13:35, 16 March 2010 (UTC)
- So going back to the original question, what would be the best way to correct the widespread notion that "fittest" means "strongest"? — Ƶ§œš¹ [aɪm ˈfɹ̠ˤʷɛ̃ɾ̃ˡi] 19:20, 16 March 2010 (UTC)
- Just explain to people how evolution works. From that point on, it's obvious. SteveBaker (talk) 20:42, 16 March 2010 (UTC)
- All right. Let's get on it, folks. — Ƶ§œš¹ [aɪm ˈfɹ̠ˤʷɛ̃ɾ̃ˡi] 21:45, 16 March 2010 (UTC)
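To make the recurring point above concrete - that "fittest" means best matched to the environment, not strongest - here is a toy selection simulation (an illustrative sketch, not something from the discussion above; the single trait, the environmental optimum of 0.3, and all other numbers are arbitrary assumptions).

```python
import random

# Toy model: each individual is one trait value in [0, 1]; survival favors
# closeness to an environmental optimum (0.3), not the largest value.
random.seed(1)
OPTIMUM = 0.3

pop = [random.uniform(0, 1) for _ in range(200)]

for generation in range(100):
    # Selection: the half of the population that best "fits" the environment survives.
    pop.sort(key=lambda t: abs(t - OPTIMUM))
    survivors = pop[:100]
    # Reproduction: each survivor leaves two offspring with small mutations.
    pop = [min(1.0, max(0.0, t + random.gauss(0, 0.02)))
           for t in survivors for _ in range(2)]

mean = sum(pop) / len(pop)
print(f"mean trait after selection: {mean:.2f}")  # settles near 0.3, not 1.0
```

The population converges on the environmental optimum rather than on the maximum possible trait value, which is exactly the "fit = suited" reading the original poster asked about.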
Applications of Quantum Electrodynamics
Are there any? 76.67.72.109 (talk) 01:43, 15 March 2010 (UTC)
- Quantum Electrodynamics is the best theory we have to explain how electrons interact with electromagnetic radiation and therefore is at the heart of our understanding of all things electric or electronic. The applications are too numerous to count. Dauto (talk) 03:11, 15 March 2010 (UTC)
- It may be the most fundamental but do we need so much precision? Couldn't modern electronics work with only the Dirac equation? Most products would work with only Maxwell's equations. 76.67.74.102 (talk) 03:59, 15 March 2010 (UTC)
- QED is the quantum version of the theory of the electron and photon, and as such it includes Dirac's equation. Transistors wouldn't work were it not for the quantum nature of the world. Dauto (talk) 04:24, 15 March 2010 (UTC)
- The way I interpret the OP's line of thought is as an inquiry into whether the advanced physical description, which is indeed more accurate and correct, is necessary to make applications like electronics work. This is a difficult question to answer. Briefly stated, most engineers who actually build and work with electronics, including semiconductors, never need quantum theory - let alone quantum electrodynamics. However, much of their work is made possible by engineering approximations to the more exact physics - and somebody had to invent those approximations in the first place. That person would probably have benefited from a thorough knowledge of the full physics. Now, it's worth wondering whether a trial-and-error engineering approach, without the theoretical guidance of advanced QED and other conceptual ideas, could have ever led to modern semiconductor technology - but that's idle speculation, because we did have advanced physics to describe things like doping, quantum tunneling, band gap energy, and the photoelectric effect. The same can probably be said for MEMS, nano-scale physical chemistry, protein folding, and all the other places where a quantum electrodynamic effect is at play. The applications have been expedited by a great theoretical understanding of the processes; but, for many of these applied areas, the subject and techniques have been sufficiently refined and approximated so that technical work can be done without resorting to a full QED treatment. Nimur (talk) 06:34, 15 March 2010 (UTC)
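One small worked example of the sort of engineering-level shortcut described above (a sketch for illustration, not from the thread): the approximate emission wavelength of an LED follows from its band gap via E = hc/λ, with no QED machinery required. The band gap values below are standard textbook figures.

```python
# Engineering approximation: peak emission wavelength of a direct-gap
# semiconductor from its band gap energy, lambda = h*c / E.

H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def emission_wavelength_nm(band_gap_ev: float) -> float:
    """Approximate peak emission wavelength, in nanometres."""
    return H * C / (band_gap_ev * EV) * 1e9

print(round(emission_wavelength_nm(1.42)))  # GaAs, ~873 nm (infrared)
print(round(emission_wavelength_nm(3.4)))   # GaN, ~365 nm (near ultraviolet)
```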
interstitial defects in ice
Since the hydrogen bonding structure in ice makes such large open spaces, isn't there a lot of opportunity for interstitial particles to get trapped in an ice lattice? Maybe ions like Na+ or Cl-? Why are they expelled from the lattice when water freezes? John Riemann Soong (talk) 03:56, 15 March 2010 (UTC)
- Are you sure they are? Saltwater can be frozen, right? StuRat (talk) 04:02, 15 March 2010 (UTC)
- John Riemann Soong is right. When salty water freezes, most of the salt gets left behind. The reason is that salt ions are electrically charged and there would have to be charge separation in order to place those ions in the gaps. That's energetically disfavored. Dauto (talk) 04:13, 15 March 2010 (UTC)
- Yes but couldn't they be somewhat stabilised by the lattice? I mean the counterion would be in the next gap (or even in the same gap). Or is it more energetically favourable to just kick out the salt? John Riemann Soong (talk) 05:17, 15 March 2010 (UTC)
- I also observe that freezing point depression behaviour (which tends to be independent of chemical identity) doesn't dramatically change based on whether the solute is charged or not (normalising for realised concentration)... so it would seem that if you dissolved ammonia, for instance, it could nicely fit somewhere. John Riemann Soong (talk) 05:24, 15 March 2010 (UTC)
- Freezing point depression is caused by the enthalpy differences due to water-solute bonding vs. solute-solute and water-water (ice-type) bonding. As noted, when the water does freeze, it is pretty much pure water (see Fractional freezing), leaving the solute behind. This is true regardless of whether the solute is ionic or molecular in nature. That's because of the way that intermolecular forces work in molecular substances like water. Electrostatic effects from any solute will disturb the crystal lattice of the ice, preventing crystallization. The effects are particularly dramatic (and annoying) during recrystallization processes for organic molecules. I remember my time working in an organic synthesis lab, and being frustrated by the fact that it was nearly impossible to get my desired product to crystallize if there were any impurities in it; basically whenever there were two substances present you would always end up with a pale yellow oil rather than nice white crystals, even if the substance was 99% pure. The other 1% would prevent effective crystallization. Bonding in ionic and network covalent substances is very different in this regard; silicate minerals, for example, can show surprising variation due to the presence of trace amounts of interstitial ions in the matrix. But while such interstitial ions work in ionic solids, and in network covalent solids, they do not in molecular solids, due to the way that dipole-dipole and London dispersion forces work in holding such crystal structures together. --Jayron32 18:07, 15 March 2010 (UTC)
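As a quick worked example of the colligative behaviour Jayron32 and John Riemann Soong are discussing (an illustrative sketch, not from the thread): the dilute-solution approximation ΔTf = i·Kf·m is indeed blind to the solute's chemical identity, apart from how many particles it dissociates into (the van 't Hoff factor i).

```python
# Freezing point depression in the dilute-solution approximation:
# dT = i * Kf * m, where i counts dissolved particles per formula unit.

KF_WATER = 1.86  # cryoscopic constant of water, K*kg/mol

def freezing_point_c(molality: float, vant_hoff_i: float) -> float:
    """Approximate freezing point (deg C) of a dilute aqueous solution."""
    return 0.0 - vant_hoff_i * KF_WATER * molality

print(freezing_point_c(1.0, 2))  # 1 mol/kg NaCl (i ~ 2): about -3.7 deg C
print(freezing_point_c(1.0, 1))  # 1 mol/kg sucrose (i = 1): about -1.9 deg C
```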
- Have you read the Clathrate hydrate article? Graeme Bartlett (talk) 09:03, 15 March 2010 (UTC)
Ammonium chloride as a Korean condiment?
I just got some "fried kelp" from a Korean market in Oakland, which I thought I'd enjoy because I like kelp, but it's covered with these colorless crystals that taste really weird. I believe it's ammonium chloride because the only thing I can think of that tastes remotely similar is salty liquorice. Is ammonium chloride used as a condiment in Korean cuisine for things like kelp? If so, what is the Korean name for it? —Keenan Pepper 05:22, 15 March 2010 (UTC)
- It could well be ammonium chloride. Is there a list of ingredients on the package that can confirm? If it's a Korean product it may have the corresponding list in Korean. --Kvasir (talk) 06:13, 15 March 2010 (UTC)
- No, it's like a deli item made in the market, and there's no ingredients list in English or Korean. All it says is "Fried Kelp". —Keenan Pepper 06:17, 15 March 2010 (UTC)
- Hmmm. All the recipes for "Korean fried kelp" that I can find call for sprinkling it with sugar, which of course comes in colorless crystals, so I'm a bit skeptical here. Looie496 (talk) 18:25, 15 March 2010 (UTC)
- I think the only way that the OP will be able to satisfy their curiosity (and now ours) is to return to the emporium and ask. Should they decline for trade secret reasons, just ask if their Green Card and other documents are in order. Of course, on the other hand, if he bought it in the Oaksterdam area of Oakland it might be something completely different. You said it tasted "really weird", Keenan Pepper - did you start feeling really weird as well?
- Come to think of it, didn't Leonard Bernstein and Sondheim immortalize this sea vegetable dish in West Side Story ...
- Korea, Korea, I just ate some kelp from Korea! And suddenly I found, some Sal Ammoniac around - my fryee ....
--Aspro (talk) 19:09, 15 March 2010 (UTC)
- I was thinking of suggesting that for a while, but since you already have, and this is the science desk, the other alternative is to analyse the crystals in some way. You can go the boring traditional chemistry route of seeing how they interact with various compounds. Perhaps JRS can help here. Or you can use some sort of mass spectrometer. Of course, if you don't have access to any of these it's going to cost bucket loads, and even if you do, the lab supervisor may not appreciate you using them to work out what the crystals in your Korean kelp are (particularly if you tell them you haven't even asked the seller). Thinking of a recent question, if you leave these crystals at a crime scene perhaps someone will analyse them for you, but unfortunately you may not be able to tell us the answer for several years in that case. Nil Einne (talk) 21:09, 15 March 2010 (UTC)
- Are they perhaps crystals of monosodium glutamate? They taste a bit like salty licorice and are popular in Asian cuisine. --Sean 20:59, 15 March 2010 (UTC)
- I certainly hope MSG doesn't taste like salty licorice (salmiakki) or as Salmiyuck as described here. --Kvasir (talk) 05:01, 16 March 2010 (UTC)
Oddball coordinates/geolocation
So I'm looking at a few biology papers. These have geolocation information, but I can't make sense of it. My working hypothesis is that it's some sort of map-related quadrangle info, but the relevant map is not cited by name (and I just don't know how to interpret it anyway):
- (1) Hillside above State Highway 79 just south of the bridge over Buffalo Creek NW1/4 sec. 28, T54N, R1W, Pike County, Missouri.
- (2) [...] on the east side of an unnamed tributary of Sycamore Creek on the Daube Ranch, NW1/4, NW1/4, SW1/4 sec. 2, T4S, R4E, Johnson County, Southern Oklahoma, Ravia 7 1/2' quadrangle.
Can anybody help me? Circéus (talk) 11:49, 15 March 2010 (UTC)
- These use coordinates from the "township and range" system, used mostly in the American Midwest. See Public Land Survey System#Mechanics for a description of how it works. The entire system is based on a unit of land called a Survey township, which is a square mile of land. The T and R numbers refer to a coordinate system that numbers the townships around a central point defined between two lines, the "principle meridian" (N-S) and the "base line" (E-W). T is the "township number" and R is the "range number". Therefore, NW1/4 sec. 28, T54N, R1W is the Survey Township (square mile) located in the northwest quarter of Sec. 28, 54 squares north of the BL and 1 square W of the PM. It's not as accurate as latitude and longitude would be, but it's not really a system for identifying points; it's more for dividing land for establishing property boundaries. Each survey township would also be subdivided into plats of individual lots. Public Land Survey System#Mechanics contains pictures that show all the bits I describe here. --Jayron32 12:46, 15 March 2010 (UTC)
- Thank you a lot! That really helped! Circéus (talk) 13:20, 15 March 2010 (UTC)
- Three things. First, it's the section that's a square mile; the standard township is a square 6 miles on a side, containing 36 sections. Second, a section was often divided into four square lots (1/2 mile on a side), quarter-sections. But these may be further subdivided into smaller squares: what I would expect the "NW1/4, NW1/4, SW1/4" part to mean is the northwest quarter of the northwest quarter of the southwest quarter of the section, thus identifying a specific square that's 1/8 of a mile on a side. And third, to avoid any confusion, that word is "principal". --Anonymous, 16:53 UTC, March 15, 2010.
- Which of course we have an article on: Principal meridian. Pfly (talk) 09:41, 16 March 2010 (UTC)
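Following Anonymous's arithmetic, here is a tiny sketch (an illustration, not from the thread) of how each nested quarter call halves the side of the square being located. The parsing is deliberately naive, and the function is hypothetical rather than part of any real PLSS library.

```python
def plss_square_side_miles(description: str) -> float:
    """Side length, in miles, of the square named by a PLSS aliquot string.

    A section is 1 mile on a side, and each "1/4" halves the side of the
    square being described, reading from smallest to largest: e.g.
    "NW1/4, NW1/4, SW1/4 sec. 2" is the NW quarter of the NW quarter
    of the SW quarter of section 2.
    """
    quarter_calls = description.count("1/4")
    return 1.0 / (2 ** quarter_calls)

print(plss_square_side_miles("NW1/4 sec. 28"))              # 0.5 (quarter section)
print(plss_square_side_miles("NW1/4, NW1/4, SW1/4 sec. 2")) # 0.125, i.e. 1/8 mile
```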
gsp
what primetime episode was it where GSP gave a tour of his house and said he hated his family? —Preceding unsigned comment added by 67.246.254.35 (talk) 12:12, 15 March 2010 (UTC)
- This is the Science reference desk. Perhaps your question should be on the Entertainment desk instead? SteveBaker (talk) 12:26, 15 March 2010 (UTC)
- And you might also want to explain who "gsp" is. StuRat (talk) 16:15, 15 March 2010 (UTC)
ufc fighter —Preceding unsigned comment added by 67.246.254.35 (talk) 16:19, 15 March 2010 (UTC)
- That would appear to be Georges St-Pierre. -- Coneslayer (talk) 17:12, 15 March 2010 (UTC)
Software for synchronising separately recorded sound to HD video
I recorded three great new bands in Brighton on my HD camcorder, but Radio Reverb had loads of professional microphones and direct lines to the instruments leading to a mixing/CD recording desk, which would presumably exceed even the Dolby 5.1 internal microphones on my camera (which seem to give studio-quality sound). I agreed with them to share the media in the hope of synchronising the audio (presumably arriving ready-mixed on a CD) to my video, and then on to Blu-ray disc. What software can I purchase, preferably at a reasonable price, to do this? My computer is a modern dual Pentium dual-core 2.4 GHz machine with a reasonable hard drive capacity, which I might upgrade to 2 terabytes. I am running Windows XP and would prefer not to upgrade to Vista in case it breaks certain applications I have written in Visual Basic. My existing Sony software plays the HD video quite well, though the motion is smoother on a proper HD set and Blu-ray player, or (when down-converted for DVD) a standard DVD player and wide-screen cathode ray set, which I do not want to get rid of due to better colour contrast than flat-screen HD televisions. Would it be cheaper to get this done by audio-visual professionals instead? Filming (great) bands (with their permission) is my hobby and they use their copies as a free promotional tool; I do it to get the footage. —Preceding unsigned comment added by 80.1.80.16 (talk) 13:29, 15 March 2010 (UTC)
- Our computing reference desk might be a better place to ask this question. SteveBaker (talk) 16:08, 15 March 2010 (UTC)
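For what it's worth, the core trick most sync tools use is simple to sketch: cross-correlate the camcorder's own audio with the desk recording to find their relative offset, then shift one track by that amount in an editor. Below is a minimal sketch assuming Python with NumPy/SciPy and two mono WAV files at the same sample rate; the filenames are hypothetical placeholders, and a real workflow would feed the offset into editing software rather than just print it.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Read the camera's scratch audio and the mixing-desk recording
# (hypothetical filenames; both assumed mono, same sample rate).
rate_cam, cam = wavfile.read("camera.wav")
rate_desk, desk = wavfile.read("desk.wav")
assert rate_cam == rate_desk

cam = cam.astype(float)
desk = desk.astype(float)

# Cross-correlation via FFT convolution with one signal time-reversed.
corr = fftconvolve(cam, desk[::-1])
lag_samples = int(np.argmax(corr)) - (len(desk) - 1)

print(f"desk recording is offset {lag_samples / rate_cam:+.3f} s "
      f"relative to the camera track")
```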
Feeding stick insects
How to feed newborn Extatosoma specimens?
They need Rubus or Quercus of course, but how can I persuade them?--87.10.128.186 (talk) 15:40, 15 March 2010 (UTC)
Where does one procure newborn Extatosomas? DRosenbach (Talk | Contribs) 16:31, 15 March 2010 (UTC)
@ Bus stop: Thanks, but those links do not answer my question.
@ DRosenbach: I obtained eggs from an adult female bought one year ago.
But how to convince newborn specimens to eat leaves? --87.16.125.218 (talk) 17:56, 15 March 2010 (UTC)
- What the Extatosoma tiaratum article does not say is that they are nocturnal feeders. Providing they have the right food, hunger should do the rest (who do you think gets them to start feeding in the wild?). Examine the leaves in the morning. --Aspro (talk) 19:21, 15 March 2010 (UTC)
@ Aspro: Thanks; the problem is that they haven't eaten anything. The "oldest" is two days old, but unfortunately the leaves are still intact. I also tried giving them a hint by letting them sniff torn leaves left at the bottom of the tank.--Mparu (talk) 21:08, 15 March 2010 (UTC)
- Just a thought. There is a guy on eBay selling Goliath stick insect eggs that suggests “Start feeding Eucalyptus when nymphs then wean onto Bramble as older nymphs then feed a mixture of both.” [1] Maybe your source has a tip on what he first feeds them on. Some people (or rather their bugs) have no problems with feeding at all. Perhaps Eucalyptus contain a chemical trigger that has not yet been recognized and will work on your bugs.--Aspro (talk) 11:10, 16 March 2010 (UTC)
@ Aspro: It could be. I will try to give them Eucalyptus tomorrow, and then we will see. However I observe that they are all still alive. Thanks again. --Mparu (talk) 00:43, 17 March 2010 (UTC)
the 80s
Irrespective of Jayron's comment immediately below, please continue to contribute to this section. Whatever the OP's motivation or intent, there's certainly more to be said here. --Polysylabic Pseudonym (talk) 10:27, 16 March 2010 (UTC)
This has degenerated into an excuse for the OP to editorialize about how boring the lives of people older than himself must have been. Other than an exercise in making his generation feel superior to his elders, it serves no further purpose. The article section titled 1980s#Popular_Culture should adequately lead to answers to the original question. The rest of this discussion is not within the domain of the reference desks. --Jayron32 19:32, 15 March 2010 (UTC)
what did people do in their free time back then? they didn't have the internet; wasn't it boring? —Preceding unsigned comment added by 67.246.254.35 (talk) 16:20, 15 March 2010 (UTC)
- Believe it or not, people can interact without the internet, and in fact did so. There were sporting events before espn.com, facebook did not invent the concept of friends, and music existed before itunes. Googlemeister (talk) 16:24, 15 March 2010 (UTC)
this is not related to friends, it's about fun. in the 90s i was bored out of my mind. —Preceding unsigned comment added by 67.246.254.35 (talk) 16:27, 15 March 2010 (UTC)
- Per WP:WHAAOE, we have a section and several related sub-articles about popular culture and lifestyles in the 1980s. Film, television, music, sports, toys, art, education, and so on, all existed in the 1980s. Notably, in light of your comparison to "zoning out" on the internet during periods of great boredom, the 1980s saw the mainstream rise of cable television, including 24 hour programming (a new and exciting cultural transition!) Nimur (talk) 16:33, 15 March 2010 (UTC)
- Nobody has mentioned video games! At Wikipedia! For shame. Comet Tuttle (talk) 16:57, 15 March 2010 (UTC)
yea i watched tv and played the lame video games in the 90s but i was still bored outta my mind —Preceding unsigned comment added by Thekiller35789 (talk • contribs) 17:07, 15 March 2010 (UTC)
- Maybe you should have played the good ones. BBSes were fun, too. -- Coneslayer (talk) 17:11, 15 March 2010 (UTC)
- Skateboarding as a popular sport developed in the 80's. Before that (in the 70s), boards were plastic with steel wheels and you couldn't do much more than slalom through some cones or something else rather lame. Some kids got into other passing fads like breakdancing or trick BMX stuff. Of those, skateboarding is the only one that I've noticed still continues with the teenagers today - at least I always see a few kids in my class bring in their boards and they are always amazed when I reference some board trick while explaining some concept in class. They can't imagine that a 40-year-old fart actually knows something about their sport. -- kainaw™ 17:33, 15 March 2010 (UTC)
- Just to clarify... I am answering what people outside of the California coast did. By the time skateboarding became nationally popular in the 80's, they had been doing tricks in places like Santa Monica for nearly 10 years. Don't want to get into a debate about local interests vs. national interests. Might as well argue about when that pathetic "Valley Girl" talk spread across the country (and I like still like totally hear students like using that talk and stuff). -- kainaw™ 17:38, 15 March 2010 (UTC)
- There are people that spend a lot of time being bored in the present day, too. Nothing has really changed in that respect. --Tango (talk) 17:41, 15 March 2010 (UTC)
- When I was a kid in the late 70's/early 80's I played video games at arcades, such as Space Invaders, Centipede, Pac Man, Galaga, and Tempest (my fav). You may say they are "lame", but that's only in comparison with the games we have now. Those games were exciting compared with what came before (pinball). The games you currently play will also seem lame to future generations ("Aw man, this isn't even 3D, how lame !"). StuRat (talk) 17:43, 15 March 2010 (UTC)
Before the very, very late 90's, life was unbearably boring. So much so that a recent Economist article on the state of television mentioned that you could pretty much put anything on TV and people would watch it, out of sheer boredom. Therefore, if none of the networks were running anything interesting, then they could all still get a lot of advertising money, since millions were still tuning in. The only problem with running totally boring content was if another network was running something marginally less boring. But the bar wasn't very high: people were bored out of their MINDS. Now, fast-forward to 2010. These days, the bar for television is very high. People will NOT put up with something totally boring; they'll just turn off the TV and go on the Internet and find something a ton more interesting to them. The bar for television is REALLY high, since people aren't forced to watch whatever's on out of sheer boredom. Actually, the sun setting on newspapers is the same story: people used to read the daily paper at some point during the day out of SHEER BOREDOM. They would have literally nothing better to do than read a whole article on something they're not even remotely interested in, just because they'd already read all the articles they find interesting in the least. The problem was that when people had more time, they would run out of EVERY article in the paper. People, get this, I am NOT making this up: people used to read the obituaries for their city. ALL OF THEM (the ones with small articles). Every day. That's how bored out of their minds they were. The problem was most exacerbated, of course, on Saturday and Sunday, where if there were just a daily paper, they would be through with it by 1 PM. Then what do they do? So newspapers made Sunday editions that were vastly larger, included comics for the bored kids, and so on. Basically, to sum it up for you: when people weren't at work, at school, or at some special event, they were bored out of their minds. They would do just about anything, up to and including sitting down with a game of Monopoly or Scrabble, just to get through the evening. That's a fact. 82.113.121.99 (talk) 17:55, 15 March 2010 (UTC)
- The Pet Shop Boys wrote a song with the lines "we were never bored/because we were never being boring", which is supposed to be a quote from a 1920s socialite (sorry, searched Wikiquote but couldn't find the original). I can't remember being bored after I left home when I was 18: the world was too full of exciting things for me to be bored! --TammyMoet (talk) 19:22, 15 March 2010 (UTC)
- Nobody has mentioned books, libraries or hobbies yet. Can you provide a link for that Economist article? AlmostReadytoFly (talk) 09:22, 16 March 2010 (UTC)
why the hell was this archived? this was interesting, now it's ruined. someone un-archive it please —Preceding unsigned comment added by Thekiller35789 (talk • contribs) 20:08, 15 March 2010 (UTC)
to jayron: i'm not trying to point out how "boring the lives of people older than he was must have been", nor am i trying to "make my generation feel superior to his elders". i am a student of history and i like the 80s and wish i could have lived then. i am interested in how people passed the time back then. please do not mistake my motives. i like to hear about the past. the article you linked doesn't help, unfortunately. to whoever has been archiving it: please stop. i like hearing other people's answers.
- I was born in the '70s so my childhood was in the '80s. '83 was my favourite year. Oh, and I'm Australian - my comments relate to the 80s in Canberra, Australia. Note that no shops (but a few petrol stations) were open Sundays, and on most weekdays places closed at 5pm.
- As a young child dinner and getting ready for bed took from about 5:30. So only 2 and a half hours to fill. That was a walk to a friend's house, then riding our bikes or skateboards or roller skates to a local park, or playing on a computer or console. Or spending our money in a computer game arcade. On weekends: bike riding, visiting friends, playing kids games.
- As an older child (say 10 years or so), there was still loads to keep us from boredom. Rollerskating, rollerdisco, waterslides, street cricket, playgrounds where you were allowed to get hurt! (back then we had a 5 metre high tarzan swing, with a 3m high tiered platform from which to jump). The computers were better. I and a few friends could catch a bus to a swimming pool, ice skating rink, or roller rink (actually that was a short walk/skate away). Music on tape on a walkman, making mixtapes to play on the walkman, nintendo game and watch (or were they later?). For quiet or rainy afternoons or "can't be bothered going outside" there were board games, cards, books.
- Young teenager: much the same as above, but more time spent shopping - and more money to spend. A good book could waste an entire Sunday, with the Saturday spent on homework, and any of the above activities - and my parents had a swimming pool put in, so swimming more.
- Then I ran out of 1980s. In all of the above, the only worthwhile TV was a few docos and educational TV programmes on ABC (the Australian Broadcasting Commission), a couple of cartoons, and the friday night movie.
- There was far too much to do to be bored if you didn't want to be. Sure you could bore yourself to death with TV (until station close - the test pattern was less dull than the normal programming). You could sit around saying "nothing to do", but in warm weather there were a thousand things to do: bike, skates, ice skates, swimming, shopping, reading, board games, visiting friends and playing with their stuff. In winter, same, but the days were shorter, there was more hot chocolate, there was at least one visit to the snow for a weekend, less swimming, less iceskating, less bike riding. More rollerskating, more reading, more board games, more card games. --Polysylabic Pseudonym (talk) 10:26, 16 March 2010 (UTC)
suicide
why is it thought that young people have a high rate of suicide? isn't the average age like 68? —Preceding unsigned comment added by Thekiller35789 (talk • contribs) 16:25, 15 March 2010 (UTC)
- Perhaps successful suicide follows multiple attempts. DRosenbach (Talk | Contribs) 16:29, 15 March 2010 (UTC)
i don't think so, if you fail you will probably be crippled —Preceding unsigned comment added by Thekiller35789 (talk • contribs) 16:37, 15 March 2010 (UTC)
- That would depend on the method attempted. Googlemeister (talk) 16:41, 15 March 2010 (UTC)
- Instead of speculating, why don't you check some statistics? Fast Stats from the Center for Disease Control is a good brief overview, and it links to a detailed statistical report, National Vital Statistics Report, as well as a Suicide Trends among Youths report, a Trends by Age Group report; and if you want more, here is a Google search query for more. Nimur (talk) 16:50, 15 March 2010 (UTC)
- (ec) The original poster is asking why it's thought that young people have a high rate of suicide, despite the facts. He appears to be correct about the facts; this page from the WHO has links to PDF files showing the rates by gender, and by gender and age, for lots and lots of countries. Here are the rates (both genders) per 100,000 in the US:
Age group | Suicide rate per 100,000
---|---
5-14 | 0.7
15-24 | 10.0
25-34 | 12.4
35-44 | 14.9
45-54 | 16.4
55-64 | 13.8
65-74 | 12.5
75+ | 16.8
- So, young people have a lower suicide rate than older people, at least in the US. Our article teenage suicide in the United States does not mention this (although it will shortly) and does not talk about the gap between the facts and perception. This is WP:OR but I will guess that youth suicide is thought of as more sad or tragic than adult suicide, and heavy media coverage might lead people to believe there is some sort of epidemic underway. Just a guess. Comet Tuttle (talk) 16:56, 15 March 2010 (UTC)
- An important missing bit of data is evidence that it really is popularly thought that young people have a higher rate of suicide. DMacks (talk) 17:00, 15 March 2010 (UTC)
- Googling teen suicide epidemic yields 62,000 hits, which is anecdotal, and some of the hits are on local "epidemics", but it supports the premise. Comet Tuttle (talk) 17:11, 15 March 2010 (UTC)
- Exactly. So "why is it thought" (emphasis mine) might be directly answered from them. Gotta be careful to avoid a self-fulfilling mediafest though (OTOH, that may really be the reason?). Just because teens do it and it's reported as such doesn't settle it: the stories I usually see focus on teen suicide as a symptom of, or involved with, other teen issues. Not "only teens do it" but "teens do it for teen reasons" - the bias is in the choice of separate topics, not necessarily ignoring an included subset of the topic. None of which addresses why non-teen groups are not discussed as much, right back to the initial question :( DMacks (talk) 17:22, 15 March 2010 (UTC)
Come on people, the answer is obvious. The rates of young and old are similar, but suicide is a top cause of death among young people and way down in the list for 40-70 yr olds, even though the per capita rates are similar. alteripse (talk) 17:10, 15 March 2010 (UTC)
- That's probably a factor, yes. Comet Tuttle (talk) 17:11, 15 March 2010 (UTC)
- In addition, those statistics are, I'm guessing, successful suicides and don't include attempted suicides. Suicides may of course be a cry for help, particularly among teens, and may not be truly intended to succeed, although in some cases, e.g. if Panadol or weedkiller is used, the person may still die even if they later regret it. While these may not seem as serious a problem, they are still a concern. In addition, even if they are more serious about it (and ultimately it's a continuum anyway), teens are generally less experienced, have access to fewer resources, and are more likely to have some dependence on and close connection to parents or guardians, so teens attempting suicide may be more likely to be rescued in time compared to adults. In other words, even if fewer teens successfully commit suicide, there may still be more attempted suicides by teenagers. Nil Einne (talk) 18:28, 15 March 2010 (UTC)
The rates of young and old are NOT similar. old is much more —Preceding unsigned comment added by Thekiller35789 (talk • contribs) 17:23, 15 March 2010 (UTC)
- Are you sure? From the table above we have for the US (Age 25-34: 12.4) and (Age 65-74: 12.5). I call that similar. Dauto (talk) 17:55, 15 March 2010 (UTC)
- The OP is I believe 19 so 45-54 may be old to them. Nil Einne (talk) 18:28, 15 March 2010 (UTC)
- Something that's always bothered me is that the term "suicide" is used to include two entirely different things. One, as in most teen suicides, is choosing to end a life that would otherwise continue normally. The other, as in most elderly suicides, is choosing to end a life that will soon end anyway, often in extreme pain, from a terminal disease. StuRat (talk) 17:30, 15 March 2010 (UTC)
- It's still suicide though. You can argue mental pain vs. physical pain, but in the end taking your life is still technically suicide. — The Hand That Feeds You:Bite 19:19, 15 March 2010 (UTC)
- It is, in English, but that's just semantics. I wonder if other languages have different terms for those two concepts. For example, ritualized suicides may have different names, such as "seppuku" (hara-kiri) in Japanese. StuRat (talk) 19:49, 15 March 2010 (UTC)
- Regardless, clearly the idea that these are two "entirely" different things is somewhat simplistic. At the extreme edges, perhaps, but it's much more of a continuum, and in fact even that's too simplistic since it isn't a one-dimensional thing. For example, someone who is diagnosed with cancer, with an expected average lifespan of 5 years but the possibility of living significantly longer if treatment is successful, and the expectation of a reasonably pain-free life for perhaps the next 3 years, can't be said to be ending a life that will end "soon anyway". If you're reasonably young and have beaten cancer once but suffer a relapse, there's a reasonable chance you may beat it again, although you're always likely to be living with cancer; the coming weeks and months are probably going to be painful whatever happens, and at the end you may die anyway. For someone who has just committed a serious crime in a country without the death penalty, life will go on, but it's hardly going to be normal, and they could be in prison for a long stretch of time. Someone on death row who commits suicide is probably going to die soon (although there's still the possibility of clemency). Someone who owes a lot of money to a loan shark which they can't pay may not necessarily expect to die, but may expect to find life very unpleasant, or they may simply not really know what to expect. For an extremely depressed teenager or whoever (e.g. someone who's just lost a partner), it may seem to an outsider that life will go on for them and should even eventually start to get better, but the problem is that, while perhaps they kind of realise that in the back of their minds, it's not something they can really 'understand', and most likely in some ways it seems to them that life will always be this depressing, unbearable existence (some may realise their life is not going to end soon, which may give them an impetus to commit suicide, but equally I expect some just can't/don't think about that). Of course there are also those who commit suicide for other reasons (e.g. to make a point, some sort of socially expected ritual, because they think they'll transcend to become aliens) but we aren't really discussing those, I guess. Nil Einne (talk) 20:55, 15 March 2010 (UTC)
Young people in industrialized countries die of traffic accidents and of suicide, and very few die from illness; the chance of dying young at all is low, because most of us get old. The percentage of suicide as a cause of death for young people is therefore high. The low number of people dying at a young age multiplied by that high percentage gives the moderate rate quoted above. --Stone (talk) 19:10, 15 March 2010 (UTC)
- ... I really can't parse what you're saying here. — The Hand That Feeds You:Bite 19:20, 15 March 2010 (UTC)
- I think he/she is basically repeating what Alteripse said Nil Einne (talk) 20:56, 15 March 2010 (UTC)
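Alteripse's and Stone's point is plain arithmetic, and a toy calculation (with made-up illustrative numbers, not real statistics) makes it concrete: similar suicide rates yield very different shares of total deaths when overall mortality differs.

```python
# Hypothetical all-cause death rates per 100,000 per year.
deaths_young, deaths_old = 100.0, 2000.0
# Suicide rates per 100,000, similar in both groups (cf. the table above).
suicide_young, suicide_old = 10.0, 13.0

print(f"young: suicide is {100 * suicide_young / deaths_young:.1f}% of all deaths")
print(f"old:   suicide is {100 * suicide_old / deaths_old:.1f}% of all deaths")
# -> roughly 10% of deaths among the young, under 1% among the old,
#    even though the per capita suicide rates are nearly the same.
```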
A.I. Apollo Program
Imagine the USA decided to spend billions on creating a self-aware artificial intelligence. Given a huge commitment of labor and resources, how long would it take from today to reach that result?
Before you say we don't have the technology, or quote Raymond Kurzweil or Moore's law, keep in mind that Moore's law is not a hard rule but rather a benchmark manufacturers aim for. Given subsidies we could surely accelerate the growth rate of processing power and perhaps even leapfrog far ahead given some advanced research. TheFutureAwaits (talk) 19:28, 15 March 2010 (UTC)
- I don't think any amount of time or money would do, because we lack even the theory as to how to make a machine self-aware (versus making it pretend to be, which we can do now). StuRat (talk) 19:40, 15 March 2010 (UTC)
- (edit conflict) It's likely an unanswerable question until you define your parameters for what you mean by Artificial intelligence. The term is extremely broad; however, truly sentient machines would require a complete fundamental change in the way we currently construct hardware and software, especially for a machine that would pass the Turing test. It isn't a question of simply making faster and faster computers that can do more complex calculations. All modern computers are fundamentally still Turing machines; that is, with an arbitrarily large memory and enough time, every single computer, from the ENIAC to your cell phone, could perform the exact same tasks. Any computer can model sentient responses to stimuli, but they do it in a non-sentient way. To design a sentient machine would require starting from scratch. Some interesting developments are happening in the way of Artificial neural networks, either virtual or actual hardware-based ones, since they actually behave the way real brains do, and thus stand the best chance of replicating actual sentience. --Jayron32 19:42, 15 March 2010 (UTC)
- As far as I've read, there is no known reason to believe that a hardware change is required for sentient machines besides the ongoing increase in computing capacity.
- There is no evidence that our own sentience or the primitive self awareness of some other animals isn't based on deterministic, physical phenomena. APL (talk) 21:45, 15 March 2010 (UTC)
- (edit conflict) (I knew this would happen)
- I think that right now, the biggest hurdle in creating a self-aware AI is that we would need to understand how self awareness works. Right now, we have only the vaguest idea how the human brain works, and if we want to replicate those abilities, we would first need to know how our brain works in detail. This is something we have been working on for a long time and it is very hard to gauge our progress. We simply do not have a good answer for this question. Googlemeister (talk) 19:44, 15 March 2010 (UTC)
- (ec) The real problem, as far as I understand it, is that we probably don't know the scope of the problem. This isn't a "simple" engineering problem where you scale up some existing things, with a large margin of error for unknown unknowns. We don't even know what the end-situation should look like - how we could tell whether something was "self-aware" or not. We don't really know how that would go. Scaling up existing A.I. work does not (as far as I know) look like it would create something "self-aware" in any real sense (whatever that even means!). If we defined our outcome by some kind of more obvious metric (calculations per second; ability to play chess better than a human; ability to read and contribute to Wikipedia as well as some of our better contributors), we could probably come up with a reasonable estimate. But "self-awareness" is a vague concept at best when applied to computational thinking.
- To take another historical project as a point of comparison, the Manhattan Project did not really begin until the basic engineering constraints of the problem were pretty well understood. It was still a gamble and required discovering a lot of new things in a very small amount of time, but the theoretical basis for knowing what might be possible was pretty well understood. I'm not sure A.I. work is quite at that stage yet—or, put another way, I'm not sure we understand the cognitive functioning of biological brains well enough to make "artificial" ones yet that function similarly. Perhaps someone more informed about the current state of things would have more to add on that specifically. --Mr.98 (talk) 19:45, 15 March 2010 (UTC)
All of the posters above are quite wrong. Granted, no amount of money would bring about self-aware AI by tomorrow evening at 8 PM - not even if you were to spend, say, $60 trillion on it, which is one year of the world's GDP. In theory the entire world could somehow borrow a year's worth of GDP, i.e. $60 trillion. But even if the world did do that, and put the entire sum into self-aware AI, it would not happen by 8 PM tomorrow. You just can't move money into a project at that scale that fast.
On the other hand, if the world were to borrow 1 year of its GDP and put it all into creating self-aware AI as fast as possible, then I think it would be done within a matter of months with that money. Probably the way to do it would be to split the $60 trillion into bets, different avenues, and even if you make 1000 such bets, each bet gets $60 billion in funding. Now, just off the top of my head I can list about a dozen of these bets, each of which I wouldn't be surprised at succeeding. (if you must know they include:
- starting from the now-sequenced human genome, i.e. analyze it and extract the design from it. In this scenario you spend your $60 billion (or more, if there are fewer bets) on human brain power, the best in the world, to try to reverse engineer the roughly 1 CD-ROM of data. I think $60 billion might be woefully inadequate to properly reverse engineer it, but who knows; it just might be. I consider this scenario unlikely to succeed.
- starting with more precise brain imaging, i.e. spend $30 billion on scanning and imaging a brain and $30 billion on hardware that will run it at 1/1,000,000th realtime speed. Bam! A consciousness, although it will be pretty trippy for it, considering a year goes by every 30 seconds it's conscious.
- use highly parallel molecular/DNA computing. I have no idea how you get intelligence out of it, but the idea is that if intelligence evolved in the real world, then if you spend $60 billion on goo that does highly parallel _________, maybe you can induce and brute-force the evolution of an intelligence. I don't know if this one would even be considered "artificial" though - why isn't it just real intelligence?
- For $60 billion, you can probably get around the proscription on certain human experiments, and somehow reverse engineer an actual human not with brain imaging, but by peeling away neurons layer by layer while compensating for them electronically at each step. Actually $60 billion is woefully inadequate for this proposal; probably more like $1 trillion would let you do it.
) That's just off the top of my head. Basically, it's just a question of money. Even the bets I just listed aren't very sure with $60 billion, and if you start with 1 year of the world's GDP, you can only make a thousand such bets. If, however, you had a hundred years of the current world's GDP at your disposal, then you can make a thousand such bets funded at $6 trillion each (or some of them funded more). Now we're really talking. If you had that much money for this project, you could probably be 100% confident of achieving the goal within 9 months.
But why would the world want to put itself into debt at 100 times its annual GDP to produce, in addition to the six point eight billion people who can currently convince you that they are conscious and awake, cognizant of their surroundings, etc., one non-person who can convince you that it is conscious and awake, cognizant of its surroundings, etc.???
I mean it's an interesting result and all, but it's already doing something that we KNOW is possible given the physical laws of the universe and, oh, about 3 pounds (the weight of the human brain) and less than a CD-ROM worth of source code with some mild compression. (The human genome).
I mean, you're not even going to get something that is as small or as useful as the human brain. You spend 100 times the world's GDP, and get a building-sized supercomputer capable of basically the same function we have 6.8 billion biological specimens of. Meanwhile, the world is probably not going to live down the effects of the intellectual orgy you've gotten it into for maybe 200 years (optimistic), or maybe it will simply never reach the level it would have if you hadn't entered it into such a crushing debt burden. Basically, the reason we're not spending even as much as the moon missions on reproducing consciousness, a known possibility, is: why would we? 82.113.121.99 (talk) 20:21, 15 March 2010 (UTC)
- [citation needed]. Comet Tuttle (talk) 20:26, 15 March 2010 (UTC)
- You don't need a citation to point out that if something exists then it is possible. More specifically, you would need a citation if what I just said (I'm the same poster, maybe my IP has changed slightly) either could or could not be the case. But it is not a possibility that it is not the case: you could not read, a la the Goedel incompleteness theorem, a published proof that it is impossible for a body of finite mass to be self-conscious. Just imagine, for example, the idea that we would abandon the idea of ever making AI because there is now a proof floating about that, for any such AI that can exist, it must be infinite in mass. A mathematical proof like that. Just imagine it. You can't imagine it, because it is preposterous and absurd, given that we know that three pounds of stuff can do it, and we know it six billion times over. So the idea that you would need a citation, whereas the alternative state of affairs is prima facie preposterous, is absurd. 82.113.106.100 (talk) 20:49, 15 March 2010 (UTC)
- It is not absurd to ask for references on a reference desk, particularly when you invent a bunch of numbers to support your points. Comet Tuttle (talk) 22:23, 15 March 2010 (UTC)
- Fine: which numbers do you need referenced that aren't obvious to you or common knowledge? --82.113.106.92 (talk) 23:08, 16 March 2010 (UTC)
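- For what it's worth, the arithmetic in the post does hang together internally. Here is a minimal Python sketch checking it; the roughly $60 trillion world-GDP figure and the genome size are my own ballpark assumptions, not sourced:

```python
# Back-of-envelope checks for the figures above.
# Assumptions (ballpark, not sourced): world GDP ~ $60 trillion/year
# (c. 2010); human genome ~ 3.2 billion base pairs at 2 bits per base.

world_gdp = 60e12              # dollars per year (assumed)
bet_size = 60e9                # dollars per "bet"
print(world_gdp / bet_size)    # 1000.0 -- "a thousand such bets"

# A hundred years of GDP split across a thousand bets:
print(100 * world_gdp / 1000)  # 6e12 -- $6 trillion per bet, as claimed

# Raw genome size versus a 700 MB CD-ROM:
base_pairs = 3.2e9
raw_megabytes = base_pairs * 2 / 8 / 1e6   # 2 bits per base pair
print(raw_megabytes)           # ~800 MB, so "less than a CD-ROM with
                               # some mild compression" is plausible
```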
- Personally, I think the final answer will be a proof that we are not "conscious" in any special way after all...that all sufficiently complex systems have a rudimentary self-awareness. In any case, I believe that raw complexity is the solution here. I suspect (without proof) that if you built a computer of comparable complexity to the human brain - had it run a neural network simulation at comparable speed to the brain - with cameras and microphones hooked up to it appropriately - and took it through the same kinds of developmental and learning processes that a baby goes through in the womb and for the first half-dozen years of life - then there is a good chance that it would exhibit all of the properties of a conscious human. Sadly, we're perhaps 50 years of solid Moore's-law expansion away from being able to do that. However, the odds are extremely high that if we did that - and it worked - we'd learn nothing whatever of value, since we don't have a way to prove that a being truly is "conscious" - or even a practical definition of what that means - and the likely complexity of a computer that exhibited conscious-like behavior would probably be comparable to the complexity of the brain of a "higher animal" - and therefore as far beyond our ability to analyse as a real human brain. SteveBaker (talk) 21:30, 15 March 2010 (UTC)
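- Steve's "50 years" can be roughed out with Moore's-law arithmetic. A sketch follows; every figure in it is a ballpark assumption, and which estimate of the brain's computation is right is exactly the theory gap under discussion:

```python
import math

doubling_years = 1.5     # classic Moore's-law doubling period (assumed)
machine_2010 = 1e11      # ops/sec of a ~2010 desktop, ~100 GFLOPS (assumed)

# How much computing a brain "does" depends heavily on the model chosen:
brain_estimates = {
    "synapse updates (1e14 synapses x 100 Hz)": 1e16,
    "detailed biophysical neuron models": 1e19,
}

for label, brain_ops in brain_estimates.items():
    doublings = math.log2(brain_ops / machine_2010)
    print(f"{label}: ~{doublings * doubling_years:.0f} years")
# Prints roughly 25 and 40 years -- the answer is dominated by which
# estimate you believe, which is why "perhaps 50 years" is a fair hedge.
```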
- I concur with the others who say that there's an immense theory gap. No amount of effort or money will help us if we don't know precisely what intelligence is in the first place. If NASA hadn't had a handle on where the moon was, they couldn't have made it there even with a rocket that was 100 times as efficient. We can make computers succeed at some tasks that human intelligence solves, but we don't know how people solve those tasks in the first place, so we don't even know whether we've made progress. Paul Stansifer 22:31, 15 March 2010 (UTC)
- I just want to clarify that a large theory gap does not mean it is impossible to throw a lot of money at the problem and get results. It just means that estimating the amount of money necessary is probably not possible ahead of time. It may be, as SteveBaker posits, that the theory gap is illusory. But we don't have great ways to know that at the moment. There is a difference between saying "we can't do this" and "we don't know how much it would cost to do this." I think the latter is true and the former is probably false.
- Just as a point of comparison, we've thrown a lot of money at cancer research in the last century. It turns out to be a very non-trivial problem—a different sort of medical problem than, say, finding a vaccine for polio, which took only a couple of decades to develop after real money was put behind it. This is a comment on the apparent nature of the problem, not the nature of science itself. Cancer is hard. Is A.I. hard? Opinions differ on this. If the answer is "yes" then it means that it's possible that huge amounts of money won't do much other than tell you exactly why it is hard. If the answer is "no" then huge amounts of money can get rapid results. A lot of problems are obviously in between these two extremes. The thing is, I don't think we know where self-aware A.I. falls in this spectrum. --Mr.98 (talk) 22:41, 15 March 2010 (UTC)
- If NASA had sent up enough rockets, they would have found the moon eventually! Seriously, there are parts of the problem that could be worked on in the hope that other parts would fall into place later. Specifically, large computers. (Would throwing more money at computer engineering significantly increase the rate of progress? It's already a well-funded industry.) Also, with an unlimited budget, some vast parallel supercomputers could begin work on the type of experiment that Steve describes above. (And other proposed types of emergent intelligence.) If nothing else, dead ends could be eliminated from future consideration.
- Of course, even if they didn't know where it was, NASA would have recognized the moon when they landed on it. Would we recognize a sentient AI if we saw one? (Is Commander Data sentient? He'd never in a million years pass a rigorous Turing test. Neither would HAL9000.) APL (talk) 22:52, 15 March 2010 (UTC)
- The question is not, "would throwing money at it get some kind of results." It surely would. The question is, "would a crash program work like Apollo or would it work like the War on Cancer?" Or, more specifically, could we possibly know ahead of time? (And a secondary question is, "is this the best thing to be spending resources on?", which is not a question that science alone can answer.) --Mr.98 (talk) 22:55, 15 March 2010 (UTC)
- What made cancer difficult was that it turned out not to be just one disease with one cause - but hundreds of separate diseases with hundreds of separate causes. The push to cure it did an amazing amount of good. There are now dozens of cancers that we can cure - there are dozens more that we can detect early and at least have a good shot at curing - and dozens and dozens of causes that we have eliminated from our environment. The moon shot was just about the opposite of that. It was a single clear goal with a small set of distinct problems to resolve to get there. The quest for AI is yet a different problem - we don't even really know what the question is yet - and we wouldn't recognize the answer if we solved it tomorrow. How would we know if the Internet was sentient? How do we know that it isn't? That puts it a long way from getting a man on the moon - and probably further out of reach than curing all possible cancers. But we honestly don't know. It's perfectly possible that we already have machines of sufficient complexity that they are already "conscious". It's also perfectly possible that there is really no such phenomenon. The answer to this question is "We don't know - and we don't even know why we don't know." SteveBaker (talk) 23:16, 15 March 2010 (UTC)
- If successful AI means anything, a minimum is an ability to carry on a coherent natural-language conversation. And the simple fact is that nobody currently knows how to build a machine that can do that, for any amount of money. Looie496 (talk) 01:56, 16 March 2010 (UTC)
- The problem is in part that we don't really know how people do that, and people seem quite exceptional in their ability to do that. I think one of the main A.I. problems in general is that human language and abstract reasoning capabilities are pretty off-the-map. It's what we do; it took millions of years of evolution. Until we figure out how it works (and from what I've read, it's not just "add more neurons and it'll spontaneously emerge"—things are a lot more specialized and complicated than that), we're going to have a hard time making a machine do it well. Doesn't mean it's impossible... just that we're not really sure how complicated a problem it is, even though we've had a lot of people working on it for quite some time now. We'll probably find an answer—it's not magic—but it's not clear that just throwing money at it alone is going to turn up a shortcut. --Mr.98 (talk) 02:22, 16 March 2010 (UTC)
Sneezing and hair length
editA general wonderment. If I measured the distance my head moves when I sneeze, and the time it takes for my hair to land back down on my head, could I calculate the length of my hair (assuming my hair grows pretty straight)? If so, any ideas what I'd need? Would I have to use things like gravity formulas and calculate air resistance, or is there a nice simple way with a bit of maths and angles? Edit: Actually, thinking about it, your head doesn't go straight down when you sneeze, so there might need to be some force-direction things too, like the ones I did in maths a long time ago. —Preceding unsigned comment added by Jimothyjim (talk • contribs) 21:20, 15 March 2010 (UTC)
- Well, if you had good numbers for the mass of your head, the elasticity of your neck, the velocity and mass of material ejected during the sneeze, the shape of your scalp, and the springiness, density and cross-sectional area of the hairs - then a mathematical model (probably a differential equation of some kind, due to the distributed nature of mass along the length of the hairs and the curvature of the scalp) could be used to calculate the hair length from the bounce time. But ask yourself this: what are the error bars like? I could believe that you could measure the bounce time to within 10% - but there would be at least a 10% error bar on each of the other numbers - some of them possibly more like 50%. When you multiply out all of those sources of error and take an honest look at the size of the total error in the resulting length estimate, the answer would be something like "between 5cm and 50cm"...which, to be honest, is something you already knew! So it's certainly possible - but without good data, the result isn't much use. SteveBaker (talk) 23:07, 15 March 2010 (UTC)
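- Steve's error-bar point can be made concrete with the standard propagation-of-uncertainty rule. A sketch follows; the individual percentages are illustrative assumptions, and a strongly nonlinear model could amplify them further:

```python
import math

# Illustrative relative errors on each input to the hair-bounce model:
# bounce time, head mass, neck elasticity, hair springiness, hair density.
relative_errors = [0.10, 0.10, 0.20, 0.30, 0.50]

# For independent errors entering multiplicatively, relative errors
# add in quadrature:
total = math.sqrt(sum(e ** 2 for e in relative_errors))
print(f"total relative error ~ {total:.0%}")   # ~63%

# A 20 cm central estimate with a ~63% error bar spans roughly
# 7 cm to 33 cm -- close to Steve's "between 5cm and 50cm".
```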
- You might start by assuming a spherical head and one strand of hair. 24.12.190.7 (talk) 03:47, 16 March 2010 (UTC)
- Well, what if you made a few assumptions, like the momentum from things expelled from the nose being negligible and things like that, and concentrated on the more important things like the elasticity/snap-back of the neck after the sneeze and the hair mass and suchlike, and recorded the sneeze with a slow-mo video camera so you could accurately get the times and distances via a computer? Jimothyjim (talk) 22:12, 16 March 2010 (UTC)
- As Steve implied above, there are just too many variables that cannot be measured accurately. Hair behaves very differently in different atmospheric conditions, and even more variation arises from the length of time since you last washed it. Dbfirs 09:36, 16 March 2010 (UTC)
- If you have all of that fancy computer measuring stuff - why not just use it to measure the length of the hair? SteveBaker (talk) 12:51, 16 March 2010 (UTC)
- Surely things like the atmosphere can't make _that_ much difference, and you could use controlled conditions as much as possible for everything. Also, I could use a ruler to measure my hair without all the fancy computer stuff, but that's not the point :P Jimothyjim (talk) 22:11, 16 March 2010 (UTC)
- If I were to take some strands of hair in one hand and an anvil in the other and drop them simultaneously - don't you think the anvil might crush my foot to a mushy pulp quite a bit before the hairs reached the ground? Yeah! It matters! But suppose we try to reduce the problem to the barest minimum: if you can measure the period of swing of the hair once it's been perturbed from equilibrium, then you'd be reducing the problem to (essentially) figuring out the period of a pendulum whose mass and air resistance are evenly distributed along its length, and then maybe you could use that to figure out the length...but the whole sneezing thing is really just a gigantic complicating and error-introducing factor which you'd be eliminating by taking the 'pendulum' approach. Doubtless someone here could tell us the formula for the period of a pendulum whose mass is evenly distributed along its length - and my best guess is that it wouldn't depend on the mass - which is good, because it's an error-prone unknown. But you're definitely going to need to know the air resistance - and that's very non-trivial for a mass of hairs, with turbulence induced in the airflow between them and who-knows-what other weird effects going on. SteveBaker (talk) 01:27, 17 March 2010 (UTC)
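- Steve's guess is right: for a thin uniform rod swinging about one end (about the crudest possible model of a straight hair), the small-angle period is T = 2π√(2L/3g), which doesn't depend on the mass. Here is a minimal sketch, ignoring air resistance, stiffness and scalp curvature entirely; as Steve says, for real hair the air resistance is the whole problem:

```python
import math

def hair_length_from_period(T, g=9.81):
    """Length of a thin uniform rod pendulum pivoting at one end.
    Small-angle period: T = 2*pi*sqrt(2*L / (3*g)), so
    L = 3*g*T**2 / (8*pi**2). Ignores air resistance and stiffness."""
    return 3 * g * T ** 2 / (8 * math.pi ** 2)

print(hair_length_from_period(0.73))   # ~0.20 m for a 0.73 s swing
```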
- What's wrong with using a ruler? You'll be using a ruler to take some of those measurements and angle for your fancy calculation anyway. --Kvasir (talk) 03:10, 17 March 2010 (UTC)