Talk:AI winter/Archives/2014



ToDo

This is a start. From where I sit, there will be pointers going in many directions as the description expands. The AI Winter is a classic case and ought to be cherished (it's a concrete example of something we need to get a handle on).

Until this page is more fully expanded, please see ICAD and Knowledge-Based_Engineering for recent examples of this phenomenon.

Understanding the dynamics behind the AI Winter will be an important part of the future of computing. So, there is no shame to be associated with the related events. Rather, what we have here is just a blip in the long trail of evolution. --jmswtlk 14:38, 28 December 2005 (UTC)


Some issues

Hi, thanks for the contribution, but the article doesn't currently seem to be the type of encyclopedic writing that Wikipedia is looking for. For one, what is the source for the material? Unless the term is a proper noun, Wikipedia:Naming conventions call for it to be lower case. Another issue is that the article never really tells what the term is or means or how it is used. I don't mean to criticize, but familiarizing yourself with the editing conventions can help your contributions be more useful. Let me know if there is anything I can help with. You don't need to learn everything all at once, but some time spent getting used to the major policies will save time in the end. --Taxman Talk 15:46, 29 December 2005 (UTC)

Thanks. I see what you mean. The stub was there empty; I entered some thoughts to get a response. I'll re-edit soon in a more definitional mode and push the current discussion into a 'folklore' or similar section. --jmswtlk 17:13, 29 December 2005 (UTC)
Taxman- the term "AI Winter" (whether it should be capitalized, I do not know- I've seen it both ways, but as it refers to a specific downturn in a specific part of the tech industry, I'd say leave it capitalized- the direct links to here instead of the lowercase argue for leaving it be) was coined, as far as I can tell, by the press covering the AI industry back in the late '80s, and is used widely- I heard of it long before I began researching Lisp machines or even before I ever heard of Wikipedia. It's definitely not a neologism worth deleting. As far as JMS' editing goes, I can't disagree there, but I'm confident he'll pick it up quickly. His contributions have already improved markedly. --Maru (talk) Contribs 18:16, 29 December 2005 (UTC)
Taxman and Maru - It's good to have the reviews so fast. I'll work to get 'source' and other details in line. I was there (on the commercial side as a user - several types of applications) and am happy to see that the events are interesting enough to write about.
Comment: IMHO, wiki is a kind of axiomatic space with loose interpretative rules that tend to cause a type of convergence. Perhaps there have been studies about the phenomenon it represents. If not, we'll have to see that they happen.
That's one way of putting it, I guess. Seems needlessly complex- could just say "Wikipedia is a place where articles bounce around the space of possible articles. Because there aren't too many possible articles which are obviously the Right Thing, articles tend to become ever closer to an ideal article".
Result of comment: I was trying to use wiki as grounds for a paper that I'm working on, where I can reference the material (which ought to be encyclopedic, as you mention) so as to further the discussion that is needed at this point to understand the present as well as to better map the future. In a sense, I'm exploring a newer type of environment for problem solving. --jmswtlk 21:25, 29 December 2005 (UTC)
Well, I would caution you here- remember that this is supposed to be an encyclopedia article, and papers tend to involve a lot of original research, personal points of view, and conjecture, which are all generally verboten here; not to mention the very different style of writing. --Maru (talk) Contribs 18:58, 30 December 2005 (UTC)
Maru. I got it. These pages will be limited to the format and tone as required. I need to offload to another site like this example or something similar. Also, I like the way this AI winter page is going. It's good to see the 'wiki' way up close. Now, how extensive can this encyclopedia be? For the Lisp Machine, I listed a lot of software. Ought these be covered (albeit briefly)? jmswtlk 01:05, 31 December 2005 (UTC)
I'd say little more than linking- any coverage probably belongs on the software's page; lord knows most of the articles will need everything they can get. --Maru (talk) Contribs 02:55, 31 December 2005 (UTC)
If you like cliki, you might also be interested in the AI wiki and other closely related wiki. --68.0.120.35 22:43, 10 February 2007 (UTC)

Microcomputers?

I don't get all the text about microcomputers and AI winter. Surely having more general access to computing should have improved any field of comp sci research? Could someone a) scale back the references and b) explain the ones left a little better? Thanks! --Jaibe 08:59, 1 February 2006 (UTC)

Hi Jaibe. I think that this page has its particular flavor since the original development came from the Lisp machine discussion. About your reference to comp sci research, I must add that many corporate research organizations had AI activities. I know personally that there was a lot of money involved with AI outside the academic environment. Not all of this was funded through government projects. Much of this work was tied to the Lisp machine. Depending upon one's view of Lisp, these workstations were very much productivity-enhancing devices. Then, the work to add Lisp cards to the PC and the Mac was a very interesting blip in the long evolution of the computer. So, the parts dealing with hardware are important, I think, but may play a lesser role in the whole page eventually.
There are probably ranges of meaning for "AI Winter" that ought to be made more clear. For instance, some might point mainly to the 'strong AI' aspect dealing with how far the computational can go in meeting or beating human intelligence. From the operational side, there are the issues that might come under software engineering practices; however, it involves much more than that. This was my motivation for putting in the End-user computing link. jmswtlk 15:53, 1 February 2006 (UTC)

OK -- I have made a stab at making the intro more general and giving the lisp machine part its own section, but I'm not sure that a) I put the section in the right place or b) that it has as much detail as it deserves. Now that it's not in the intro, perhaps your explanation could be moved into the article? Or maybe we should have a brief explanation in this article, and make better use of the existing lisp machine article?

Also, maybe there should be another subsection about AI Spring, rather than just a line in the lisp machine section? I did a Google search on "ai winter" and the top N articles were about the new AI Spring (it's in Wired so it must be true...) --Jaibe 10:41, 2 February 2006 (UTC)

References

There are many. A few will be added, for now. This page could lead to a lot of in-depth analysis. (Why is wiki so much fun?) --jmswtlk 22:38, 29 December 2005 (UTC)

Wiki is so fun because it's fairly easy, it's virtuous, it's filled with generally smart people, and it almost by definition deals with interesting stuff (well, usually).
I've borrowed a few references from the lisp machine and related articles, but I don't think I got'em all. --Maru (talk) Contribs 18:55, 30 December 2005 (UTC)


Recent edits by an anonymous editor

I'm commenting out some recent edits- the chill on British research and on neural nets in the '70s are not what this topic addresses. They each might be considered an AI Winter, but not the AI Winter, so to speak. --Maru (talk) Contribs 20:53, 31 December 2005 (UTC)

I have to say I think that's quite a mistake; that was the most solid content in this article so far. Rather than comment them out, if you think it's just *an* AI winter, then edit the sections to say so. But I've heard that the '70s to mid-'80s were *the* canonical AI winter in AI courses at both MIT and Edinburgh. But whatever, pitch it the way you want. But why lose good content?
By the way, is there some button thing for generating the dateline on talk? --- jaibe
Well, I commented them out (not the same thing as removing them- as far as I can tell, they are perfectly factual and such; I'm just questioning whether this is the right article for the text to be in); I'm not trying to lose good content here. It's just that I thought this article concerned itself with the American AI bubble and crash in the '80s, with Symbolics, LMI, Goldhill, etc. not those other AI Winters, since I hadn't heard the term AI Winter applied to the specific time periods adduced. I could be wrong here, so you other editors who were around, please weigh in.
And you get the dateline thing automatically if, at the end of your post, you type in --~~~~ --maru (talk) Contribs 23:28, 31 December 2005 (UTC)


Maru, I would vote to leave in (uncomment where you commented) all contributions. I know that your goal is probably the quickest and smallest entry. But, some things do take more than a few iterations to clear out.
At some point, the thing could be split into the respective (agreed upon) categories. This is where 'wiki' gets interesting in that it can support almost real-time 'convergence' of entries from disparate views. Did you put in the original 'red node' (or whatever the empty article placeholder is called)? I saw it first in Lisp machine. When I made the first post, I did so in order to not have it empty. It seemed to me that AI Winter was something that needs discussion. And, we need to do this from more than the normal perspectives.
Now, about the particular viewpoint of my post (and the relative weight thereof :-), it was from a practitioner's perspective that is seldom heard. Why? Those within the commercial environment are usually under heavy stricture. Even if they are allowed to publish, it’s under seriously restraining edits. Why? IP, competitive advantage. So, I’m writing from the perspective of someone who actually had his hands on and used 9 different varieties of the Lisp_machine. So, that makes me an interested party. I also have been involved continually from those days in the various methods that have been allowed to carry forward. This despite the so-called large influence of the “AI Winter.” From where I sit, I’ve seen many mini “AI winters” which, upon careful study, lead me to suspect that more attention on the management of expectations, on truth in computing, on better End-user computing, and on several other factors might be of interest to the larger community.
At this point, I would think that we want to hear about all aspects and from all the sides. This page would then be a real gem. Thanks. --jmswtlk 01:21, 1 January 2006 (UTC)
I was indeed the person who began making the redlinks to AI Winter.
I'm sensing that you two want to steer this article towards a more general coverage of AI winters in general, so I uncommented the sections. As you said, JMS, the important thing is to flesh the article out, and worry about editing it down later.
As far as the more philosophical portion goes, I chalk this up as showing that even academia and the tech industry are vulnerable to bouts of irrationality. At least modern computers are starting to incorporate Lisp machine ideas. (ex. Emacs, StumpWM, that sort of thing). --maru (talk) Contribs 02:15, 1 January 2006 (UTC)

There are indeed two different "AI Winters" being discussed here. I was a software developer at Aion (an Expert System company of the late 80's and early 90's) during the "summer" between. I was educated when everyone remembered and often discussed the "AI Winter" of the 70s. This article describes it fairly well, I think. The next AI winter probably began with the American recession in around '91 or '92, when commercial AI took a very serious hit and a lot of companies (including ours) had to downsize or close their doors. I could research this and try to see what I can come up with. By the way, I think the "History of AI" article is useless -- it doesn't even mention any of this -- and it should. User:CharlesGillingham (Date uncertain)

Originally, this page was a 'red-link' on the Lisp machine page, so it was started in that context, which is an example of the latter AI Winter, as the Lisp machines were a commercialized effort to spread the research. Content related to the earlier event was added at a later point. Both of these are covered in the Onset section which could no doubt be improved. Perhaps we could have 3 sections: AI Winter (research), AI Winter (commercial), AI Winter (why it's important). Then, the references would relate to each of these.
The popular press cites would be mostly about the AI Winter (commercial), as there were many periodicals related to the subject of applied AI. ICAD, which began its service life on the Symbolics, was used as an example for this page because of the Lisp machine connection and because it was actively supported until 2003 and somewhat beyond. Aion's system would be another example. Just a few years ago, it was essentially embedded within a framework that resulted from its passing through several sales. As well, the system evolved from a proprietary flavor through Pascal to C++. Is this documented somewhere? There is a section about the impact on Lisp (and its machines); so, we really need something on other approaches.
AI techniques are now in use and supported almost everywhere. Any future Winter would apply mainly to some subset, perhaps. Yet, one would think that successive Winters that have occurred ought to be included in the History page. jmswtlk 02:29, 11 June 2007 (UTC)

Google

I removed a section on Google hiring Lisp and Python programmers and Google making AI, and Marudubshinki put it back (admittedly, better written now). The reason I took it out originally was that I didn't see any relevance to "Lisp and Lisp machines," the section header; lots of companies hire Lisp programmers, so singling out Google didn't seem to be of much use. It might seem more appropriate for a new "AI Spring" header. (I made an AI spring redirect to this article.)

By the way, this was mentioned in a recent NYTimes article. Don't know if it's an appropriate External link, since these links go dark after a week or so and require an account. SnowFire 14:44, 20 July 2006 (UTC)

We should be mentioning Google because they are one of the vanishingly few companies whom we can source (as in, to the specific piece of software; ex. MapReduce) as using AI and related things. As for the NY Times article, sure (we link to Encyclopedia Britannica, and they don't just go dark, but are always restricted), but I generally like to add page numbers and dates to make hardcopies easy to look up. --maru (talk) contribs 03:29, 21 July 2006 (UTC)

I actually think the edit is a mistake & it should go back the way it was. First, I've heard the rumour confirmed by Peter Norvig -- Google are trying to do human-level AI & have put it on their project timeline. Second, I think the whole lisp-machine thread is kind of lame & detached from the main topic, but the fact that a significant software company a) uses lisp & b) is trying to solve strong AI actually brings the section back into being interesting to the main article reader. --Jaibe 20:21, 22 July 2006 (UTC)

PS you can cite a NYT article without linking to it. Put it under "References".

Whose edit? SnowFire's removal, or my quasi-reversion? --maru (talk) contribs 23:51, 22 July 2006 (UTC)
The whole combined edit (I think that should be obvious from what I said!) The reversal is better than the deletion, but I liked the original best of all. --Jaibe 10:30, 23 July 2006 (UTC)

Erm- what? Look at what it said before, as I deleted it as much for poor style as for the unclear relevance to the section:

In particular, the online search company Google is rumoured to hire any lisp programmers they can find, as well as Python programmers. The company has also been rumoured to be interested in solving AI.
  • "lisp" isn't capitalized
  • as well as Python programmers? What? Is Python associated with AI? This is out of nowhere.
  • "solving" AI? What does that mean? And "rumored" is pretty wimpy, especially if marudubshinki is correct that they have outright said that they're interested (which I belive is correct).

I find marudubshinki's edit a vast improvement over what was said before, making the intent of the passage obvious and writing in clear prose. SnowFire 18:57, 24 July 2006 (UTC)

About Python and AI, please reference Peter Norvig's work (AIMA Python Code). Some claim that Python can do about everything that Lisp does (one noted exception is providing macros), thereby offering a possible replacement for those who need to get away from Lisp (for various reasons).
In the context of history (applied view dealing with implementations), the Lisp Machine page, I think, is a definite plus. I see a whole lot of extensions that can be built from that page. For example, there were many development packages (IDE-type) that made their mark for a time. ICAD was one that carried over to Unix. Also, I believe that many user interface improvements were prototyped within that environment. jmswtlk 19:52, 25 July 2006 (UTC)
Well, I was speaking as an ignorant reader. Python was not mentioned in the article with regards to AI, and still isn't. I do happen to be a Python programmer where I work... but not everyone reading the article is. SnowFire 21:28, 26 July 2006 (UTC)
Please see the addition of Python's (and its cohorts - the multi-paradigm languages) contribution to the AI Spring.
I'm sorry, I still think it lacks punch. I'm going to try to edit back in the human-level AI stuff. I don't care if we explicitly reference google or not. If you guys revert me again though I won't thrash.

COMPLETE REWRITE

As promised above, I have researched the details. This is the whole story on AI winter as far as I can see from my sources.

I left a few citation-needed spots, especially in the paragraphs I brought over from the old version. If you are the author of these paragraphs, please help me <ref> these, especially if your reference is right there already.

There is considerable overlap between this page and History of AI, and I hope to figure out what to do about that at some point. It's hard to (a) make the History of AI tell a coherent story and (b) skip all the AI winter stuff because it's already here. Anyway, I think it's okay that there's duplication between articles for now. They will diverge in the future.

Cheers.CharlesGillingham 09:53, 31 July 2007 (UTC)

Charles, great work!! It demonstrates the spirit of 'wiki' in action. This progressive step is a long way from the initial page of a few words. I'll look to see what 'practice' information might be needed and help identify sources for citation requirements. jmswtlk 14:57, 31 July 2007 (UTC)
This is a fantastic revision! Succinct, but still complete and insightful :-) 69.17.50.130 22:17, 19 August 2007 (UTC)

Added AI Now

I've removed the functional programming discussion at the end - what Python has to do with the future of AI escapes me completely. If it was Lambda Prolog, or even Mercury, that we were talking about I would be more sympathetic, but Python, and even Lisp or Prolog, have very little to do with the current development of models of intelligence. Really, people use them as convenient tools and just as often use C++ or Java as their implementation vehicles.

I've replaced it with an "AI Now" section, which is probably mistitled for an encyclopedia. I think it should be renamed "AI in 2007".

I've highlighted what I think are 3 successful technologies to come out of the AI program. I think that these sections could be bolstered and cross-linked; I also think that Constraint Satisfaction is another technology that has been notably successful.

I think that DARPA and FP7 are reasonable as starting points for looking at activity levels in research; however, it would be good to get evidence about what the national science program in the US and other national funding bodies, like the EPSRC in the UK, are doing. A large number of papers in AI have recently been produced by Chinese researchers as well, but I don't have any knowledge of what is going on in China - perhaps someone else can add some information there.

Publication rates would also clarify the position and provide a quantitative picture of how the "AI winter" impacted activity in Computer Science.

Simon Thompson - 4/9/2007 —Preceding unsigned comment added by 193.113.37.9 (talk) 07:32, 4 September 2007 (UTC)

I would also comment that while the article starts well, I think that it still reads a bit like an opinion piece. It would be better if it disambiguated between the AI winter event at AAAI and the general idea of an AI winter. I think that the latter would be best treated quantitatively in the way I suggest above.

I never liked the Python paragraph much anyway.
I like the idea of actually studying publication rates -- my guess is that it would show that there is almost no relationship between research and so-called "AI winters". This would bolster the point made at the end of the intro: that "AI Winters" are mostly a matter of perception. (Which is currently unverified). Is there a source that has done this? We can't do the research ourselves, of course, because Wikipedia doesn't allow original research.
Maybe the opening should be made a little clearer (to someone who has never heard the term "AI winter"). I'll make an edit later today. Let me know if you like it better.
Your research on AI Now is really good and very detailed, but, to tell you the truth, I don't understand what mentioning AI's most recent successes has to do with AI Winters. To say that research has continued despite the criticism? It would be better to find a source that says exactly this. To say that people were "wrong" to criticize AI? This isn't the subject of the article. Similarly, although the state of the art in Chinese AI and so on is interesting, I'm not sure it has anything to do with AI Winter. Would you like to add something to the top of AI Now to put these new advances in context? (i.e. to say explicitly what their relationship is to AI Winter.)
I have another idea for AI Now: I think this might read better as a bullet list.
  • like
  • this
Right now, it's a little dense for casual readers and has a lot of jargon. (Remember, this is a history article. I wouldn't even assume that they know what "Fuzzy logic" is.) A bullet list would allow users to scan through it and accept it as evidence for the section's point about AI Winter.
One more note: you can sign your posts on the talk page by typing ~~~~.
Thanks for your help. ---- CharlesGillingham 20:26, 4 September 2007 (UTC)

I agree with your comments; working on this, I have realised how hard developing this kind of authoritative content really is!

I think that the article is best as a cold description of the term ("an AI winter is..."), then a history ("there was a debate at AAAI etc."), and then a controversy or current-context section?

- Definition - History - Discussion/Controversy/Debate? - External refs?

My *opinion* is that it's important to realise that what is written here is taken and quoted as fact, so if Wikipedia says "there is an AI Winter", people will believe that. Instead I think that we should be saying that the term means such and such, that its origin was such and such, and that other points of view and perceptions also exist. 193.113.37.9 11:31, 5 September 2007 (UTC)

Hi, I've remembered to log in now.

Also of note is this http://www.inf.ed.ac.uk/about/AIhistory.html which contains an account of the impact of the Lighthill report and also discusses the impact of Alvey. —Preceding unsigned comment added by Sgt101 (talkcontribs) 12:00, 5 September 2007 (UTC)

And this http://pages.cpsc.ucalgary.ca/~gaines/reports/MFIT/OSIT84/OSIT84.pdf Sgt101 12:39, 5 September 2007 (UTC)

The intro is now more or less along the lines you suggest. Four paragraphs: (1) the original definition of the term (2) the history of the term (3) the specific details of what the term describes and how it is used today (4) the fact that some people don't think it's a particularly useful term. I have reliable sources for all four paragraphs except the last one. (My most reliable sources (Crevier, Russell & Norvig, NRC) don't make this point.) I'm looking for a reliable source that says "AI winter is a lot of nonsense" but I haven't found one.
Just a thought on the "authority" of Wikipedia: I believe that every line in Wikipedia should be attributable to the most reliable source on the subject. If not, it should be marked as {{citation needed}}. I hope that I've done that in this article. The authority of Wikipedia comes from its sources.
I'd like to add a paragraph on Alvey and the revival of academic AI research in England. I think that's part of the story of Lighthill. I'm also going to add a paragraph about how DARPA's reallocation of funds (away from pure AI research) eventually led to the creation of successful battle management systems. That's part of the story as well.
---- CharlesGillingham 21:31, 5 September 2007 (UTC)

AI Spring reference

Hi, the editorial at AAAI'99 talks about an AI spring: http://www.aaai.org/Conferences/AAAI/aaai99.php Of course, this is a case of the community deciding that winter is over, but since the debate at AAAI'84 is an initiating event for the winter, an editorial in 1999 would indicate that some people thought it was over.

Not so strong - more research required! Sgt101 15:31, 6 September 2007 (UTC)

Used this reference in Hope of another spring ---- CharlesGillingham 07:49, 16 September 2007 (UTC)


The abandonment of perceptrons in 1969

I think this section is good, but when I added the line about a solution to the XOR problem relatively early in the 10-year neural network winter, the section raises another question: why didn't these improvements null out the effects of Minsky and Papert's book? I definitely don't want to add my own speculations, but I'm looking for citable texts that touch on this, preferably by someone who worked in that era. This would also add to the greater story about the social dynamics that create AI winters. Has anyone got any leads? EverGreg 11:45, 24 September 2007 (UTC)

This section is a work of unscience-fiction. Anyone who has read the book will see with his own eyes that it talks about solving the XOR function for N inputs. The problem is not the solution; it is the size of the network needed. The limitations identified there were never overcome, and AFAIK never will be!!... I am preparing myself to try to modify this and other similar claims throughout wikipedia; help and discussions are appreciated. -- NIC1138 (talk) 07:21, 22 March 2008 (UTC)
I must confess I've never read Minsky and Papert's book, but if I understand you correctly, they criticize how the number of nodes needed increases as a function of the inputs for the XOR problem? And is that "the" XOR problem as in e.g. 1110 xor 1001 = 0111, or some more complicated function with several xors? EverGreg (talk) 16:50, 22 March 2008 (UTC)
It is the combined XOR of all inputs, also called the "parity function". AFAIK the book does present how to implement it for any number of variables, but I must fetch it in the library to find the best pages to point to... I believe it even has a nice graphic showing how this solution works. And they didn't criticize anything, they just present this truth; they show what the easy and difficult problems are, what the true intrinsic limitations are. "Limitations" in the nice sense, that of the characteristics an engineer must keep an eye on when doing his project. Does an electronic engineer criticize transistors because of the difficulty of implementing the XOR function with AND and OR gates, or of building a carry look-ahead adder? -- NIC1138 (talk) 21:52, 22 March 2008 (UTC)
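For readers skimming this exchange, here is a tiny illustration of the distinction being drawn (plain Python, added purely as an example; it is not from the book or from the thread): the bitwise XOR of two inputs versus the parity function, i.e. XOR folded over all the bits of a single input.

    from functools import reduce
    from operator import xor

    # Bitwise XOR of two inputs -- the "1110 xor 1001 = 0111" reading above.
    print(format(0b1110 ^ 0b1001, '04b'))   # prints 0111

    # The parity function discussed in Perceptrons: XOR of all bits of one input,
    # true iff an odd number of them are set.
    def parity(bits):
        return reduce(xor, bits, 0)

    print(parity([1, 1, 1, 0]))              # three ones -> 1 (odd)
    print(parity([1, 0, 0, 1]))              # two ones   -> 0 (even)
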
Is the only issue here with the sentences about the XOR function? (Surely you don't disagree that "there were severe limitations to what perceptrons could do" and that "virtually no research was done in connectionism for 10 years." If there is controversy about these points, then major sources (such as Crevier 1993, pp. 102−105, McCorduck 2004, pp. 104−107 and Russell & Norvig 2003, p. 22) choose to ignore it, and I'm not sure that Wikipedia is the place to dispute them.) ---- CharlesGillingham (talk) 03:32, 23 March 2008 (UTC)
For now I only intend to question the XOR affair. Not only does the book Perceptrons talk about the possibility of implementing any Boolean function with a perceptron (including XOR), but older books already said that, like Rosenblatt's own. Also, I don't know when the “severe limitations” of feed-forward nets were overcome. Are there any articles claiming to have done so, and that do not present mistaken readings of the book? Claims to have surpassed the findings presented in Perceptrons go hand-in-hand with incorrect, not to say plainly false, statements about what the book is all about.
AFAIK the book presents a nice picture of the characteristics (benefits as well as limitations) of perceptrons, and not malevolent unfounded claims made with ulterior motives. It's a nice scientific book, with good insights, proofs and explanations. The book is also very clear that networks with loops should present new, interesting capabilities.
As for Norvig's book (the only one of those I have available right now), I do believe it is incorrect, and it's a nice example of the popular misconceptions regarding Perceptrons. First of all, because he suggests that the inability of a single neuron to represent XOR (or the equivalence function) has any importance in the conclusions. This only comes early in the book, when what are called “predicates of order 1” are considered. Minsky's book goes beyond that, considering more layers. Otherwise, it could not even bear the name “Perceptrons”, since this is the name of a structure with at least an input layer, an intermediary (“association”, currently known as “hidden”) layer, and an output layer, and not just a single neuron (or a single “linear threshold function”).
Norvig also does what many other authors do, and throws out sentences like “funding (...) soon dwindled to almost nothing” without presenting a good list of how many projects there were in the area, and how many were proposed after 1969 and surprisingly rejected. I don't think a claim such as this would be accepted at wikipedia without at least a [citation needed] by its side. Plus, all these references, I believe, are books about the tools, and should not be immediately recognized as good references about the history of the subject. Norvig graduated in 1978, and Russell was born in 1962. -- NIC1138 (talk) 06:06, 25 March 2008 (UTC)

On the possible motives for the diversion away from Perceptrons in the 1970s

I question the idea that the publishing of a single book could possibly cause such drastic consequences in the world's scientific research community. Perceptrons is no Principia Mathematica. But even if it turns out to symbolize a temporary change of direction in the collective feelings of researchers, aren't the readers to blame for their acts? Were, by any chance, all AI researchers of the 1970s skeptical of free will, and willing to thoughtlessly obey the commands of Minsky, the Merciless? So people don't form an opinion of their own, and it's his fault? And didn't the deaths of McCulloch and Pitts in 1969 and Rosenblatt in 1971 have any influence?

What about the development of minicomputers, the Prolog and C languages, and graphical user interfaces - couldn't it be possible that all these new and exciting technologies drew attention away from something that looked too much like old analog electronics, which had already been studied for many years? The PDP waiting there with all those different instructions, computer programming finally becoming accessible, and you would care for difficult floating-point physical simulations?... Also, people talk about money. It even looks like 1969 was the only year in history that people had problems obtaining money. I also believe it is an offense to everyone involved in taking the decisions about where to apply money to say that they might have based their decisions on an incorrect reading of a book and on the hunch of a single man. If there was something wrong, it was the naiveté of young scientists uncertain about what to do. The same naiveté that allows such rumors and myths to spread today the way they do.

Let's not also forget that the early 80s saw the arrival of micro-computers. My theory is that this and other developments made the perfect scenario for research on structures such as neural networks. One interesting story that may show the connection between 80's technology and “connectionist” models is that Feynman took Hopfield to give a lecture at Thinking Machines in 1983. At about the same time, Stephen Wolfram switched from his work with the Symbolic Manipulation Program (something quite symbolic) to study cellular automata, something that had been known since the 50s but was only starting to be researched.

(Sorry for such large postings... I'm rehearsing for the article. Comments are appreciated.) -- NIC1138 (talk) 06:06, 25 March 2008 (UTC)

Fascinating. You bring up a lot of interesting points, and there is a lot of truth in what you say. My own speculation, which I can't resist sharing, is that the story took its modern form in the middle 80s, when connectionism was on the rise. People liked it because it explained (1) why connectionism hadn't done anything worthwhile before the 80s (perceptrons had flaws/we didn't get a chance to try other ideas) (2) why it will succeed now (we've moved beyond perceptrons). And along the way, it blames "symbolic AI" (connectionism's arch-enemy) as the real culprit. It's a story that makes you feel good about doing research in connectionism--it manages to make older approaches to connectionism look bad, and make symbolic AI look bad at the same time, implicitly making modern connectionism look good. History is written by the victors. (I have absolutely no sources for this speculation.)
I invite you to look at Frank Rosenblatt, which is an article which could desperately use more research, and also perceptron, which could use more detail in its history section. I think these are the right places to cover these topics in more depth. There's room in those articles to do some research and cite some fringe sources that tell a different story than the canonical one that this article retells.
However, I don't think that "AI winter" is the right place to cover this subject in much more depth. It merits only a paragraph here. It's only mentioned in this article because a few sources (Russell & Norvig is one) mention it in relation to the first AI winter (under "Dose of Reality", p. 21-22). The only facts we really need to report here are the ones that relate to AI winter: we need to report the fact that it was hard to get funding for connectionist projects in the 70s, why it was hard (Rosenblatt's exaggeration, Minsky's book), and why it became easy later (Hopfield's paper, Rumelhart's book (with Werbos' algorithm)). These are the essential facts, and I don't believe they are in dispute. ---- CharlesGillingham (talk) 09:28, 25 March 2008 (UTC)



I'm excited to see an in-depth challenging of the facts concerning the "XOR problem debate", even though a thorough discussion in the article may be more appropriate in Feedforward_neural_network, Perceptron or even Marvin_Minsky. If a consensus in these articles develops, the AI winter article should reflect it and possibly gain new insight from it. As CharlesGillingham touches on, the storytelling in the AI community is intertwined with the rise and fall of AI schools of thought and their funding.
This being 1969, it is not unlikely that ANNs were discussed with a stronger emphasis on electronic circuits than today. I'm no expert on this, but [1] and its subsection gave me some new insight. The odd parity function takes n bits as input and outputs true iff the sum of the bits is an odd number. It is obvious that a function that loops through the bits can easily compute this for any number of bits N, but a feedforward neural network will find it more difficult, as the parity function is not a threshold function and the feedforward network has no loops. Here we should remember that the training of neural networks with loops is a harder problem than that of feedforward networks, with much of the progress being quite recent (e.g. Echo state networks). I also note that [2] claims that circuit construction from truth tables is inefficient for the parity function, and perhaps a neural network could be said to be built from the truth table - the training data, that is.
The paper 'N-bit parity neural networks: new solutions based on linear programming' [ http://liu.ece.uic.edu/~dliu/PS/liu-hohil-smith.pdf] supports your view that the parity function, and not the XOR problem, was Minsky's focus. According to this paper, Rumelhart, Hinton and Williams in 1986 found that you need N hidden neurons to solve the N-bit parity problem (a small sketch of that construction follows this comment), but it gives the impression that this is still an open research field, which must mean that the early-70's papers did not 'solve' the parity function.
In short, it becomes easier to imagine that the academics of the day may have become disillusioned when faced with a problem which was simple in terms of logic circuits but turned out to be a major challenge in terms of neural networks. EverGreg (talk) 13:24, 25 March 2008 (UTC)
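Purely as an illustration of the construction being discussed, here is a minimal sketch (added for this talk page, not taken from the paper or the book) of a two-layer threshold network with N hidden units computing N-bit parity. The weights are set by hand rather than learned, following the standard N-hidden-unit construction attributed above to Rumelhart, Hinton and Williams; NumPy is assumed.

    import itertools
    import numpy as np

    def step(x):
        # Heaviside threshold unit: fires when its net input is positive.
        return (np.asarray(x) > 0).astype(int)

    def parity_net(bits, n):
        # Two-layer threshold network with n hidden units computing n-bit parity.
        s = int(np.sum(bits))
        # Hidden unit k (k = 1..n) fires iff at least k of the input bits are on.
        hidden = step(s - np.arange(1, n + 1) + 0.5)
        # Output unit adds the hidden units with alternating weights +1, -1, +1, ...
        signs = np.array([(-1) ** k for k in range(n)])
        return int(step(hidden @ signs - 0.5))

    n = 4
    assert all(parity_net(b, n) == sum(b) % 2
               for b in itertools.product([0, 1], repeat=n))
    print(f"{n} hidden threshold units reproduce {n}-bit parity on all {2 ** n} inputs")

The point of the sketch is only that the representation exists with N hidden units; whether gradient-based training actually finds such weights is the separate, harder question the thread above is about.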


Thanks for the reactions. I agree that the right place to do this discussion is in these other articles, and I am working on them too... I just have time for a quick reply to Greg: I didn't quite get what you mean, because the XOR and the parity problem are just the same. And regarding Hinton's article: the fact that you can implement any possible Boolean function in a simple perceptron just like you design digital circuits, in the disjunctive or conjunctive normal form, was widely known already in the 1950s.
I found this very interesting article the other day that might interest you all: A Sociological Study of the Official History of the Perceptrons Controversy by Mikel Olazaran. He does not detail exactly what the book really says (the “actual” history instead of the “official” one), but he shows how the book served to consolidate the field of “symbolic” AI, and was then later used by the 1980s researchers to consolidate their own field. He shows the rhetorical mechanisms used by the two groups. The new connectionists got what they wanted: they made a name for themselves, but left us with this distorted vision of history... -- NIC1138 (talk) 17:46, 25 March 2008 (UTC)
Thanks, I'll read Olazaran's article. References just like that are what we need in a Wikipedia article about this. :-) EverGreg (talk) 10:27, 26 March 2008 (UTC)

The Lighthill Report and its effects

I came across an obituary of Donald Michie in the Guardian [3]. It highlights that the Lighthill report came at the time of an economic crisis in the UK. Universities looked everywhere for budget cuts, and the report helped single out AI for cutting. Thus, the large scale of the funding cuts was in proportion to the economy, not in proportion to the "force" of the report itself. The obituary also explains the subsequent "AI spring" in economic terms:

By the early 1980s, automated assembly robots in Japan were outstripping traditional methods of manufacturing in other countries including the UK. Additionally, computer systems which imitated the decision-making of human experts were becoming increasingly successful. As a consequence, governments in the UK, Europe and US resumed large-scale funding of artificial-intelligence projects in response to the Japanese Fifth Generation project.

More generally perhaps, the funding of a field may be hit harder than other fields during a recession if it is not conceived as one of the "core" competences or tasks of an institute or faculty. This, I believe, is often the case for AI in e.g. a computer science department, and is also a typical problem for much interdisciplinary research. EverGreg (talk) 11:54, 21 November 2007 (UTC)

Yes, this is interesting, and makes perfect sense: AI funding is an exaggerated curve of the economy in general. The criticism I often heard around 1992 or so was "AI is just a toy." A very expensive toy: corporations were paying around $60,000.00 a pop for software and consulting on an expert system shell in 1990. When the economy tightens up, corporations can't afford toys, so AI has to close up shop. There may be a place to work this into the article: it certainly could connect to the English winter of 1973, but it applies to a few other downturns as well. ---- CharlesGillingham (talk) 08:23, 22 November 2007 (UTC)
There is some criticism of contemporary research funding that says today's research is too focused on applications and the creation of products, and that the needs of basic research are not being satisfied. There is an interview with Minsky where he says that explicitly... This idea has to do with all of this: when the economy goes well, basic research gets more funding, because it is less difficult to convince people to invest the money. (Notice that a bad economy implies less basic funding, but you can have a good economy and bad funding as well.) -- NIC1138 (talk) 22:07, 22 March 2008 (UTC)

Todo

  1. Under Lighthill: a paragraph on Alvey and the revival of AI research.   Done
  2. Under DARPA: an opening paragraph about J. C. R. Licklider's policy of "funding people not projects" and the freewheeling research atmosphere that created.  Done
  3. Under DARPA: a closing paragraph about how DARPA's reallocation of funds eventually led to the creation of successful logistics systems that saved billions in the Gulf Wars. I.e., as far as they are concerned, they were right.  Done
  4. Weave into a few sections the connection between economic downturns and AI funding. (source above).   Done
  5. Although no one ever calls it an AI winter, the abandonment of AI by IBM in 1961 has a lot in common with other AI episodes (see Nathaniel Rochester (computer scientist)) ---- CharlesGillingham (talk) 08:38, 22 November 2007 (UTC)
I've tried to turn point 4 into a section of its own to "connect the dots" for the reader after the historical overview. The material supporting the claims is the wikipedia articles on interdisciplinarity and hype cycle as well as the above obituary. More independent sources would be good though. EverGreg (talk) 13:05, 25 November 2007 (UTC)

Hendler's insider comment

James Hendler's editorial in IEEE Intelligent Systems (March/April 2008 (Vol. 23, No. 2) pp. 2-4), available at [4], refers to this article and counters some claims we've made. His insider's view of the past AI winters is well worth a read and may improve our article, though we should be careful not to include too much speculation about future winters in an encyclopedic article. EverGreg (talk) 07:14, 23 July 2008 (UTC)

He's not disagreeing with us. He writes: "people say that AI hype caused the frost". Wikipedia says the same thing -- we don't claim that hype caused the frost, we only claim that people say hype caused the frost. I don't think we have a problem on this issue: we have some very solid quotes that promote the "hype" or "over-promising" theory of AI winter -- from a researcher no less notable than Hans Moravec (under Darpa). (I would like to see a source for the bit about Cog being over-hyped in the early 90s (under Fear of Another Winter)).
As for Hendler's three-point theory of AI winter, we cover one point, touch on another, and don't cover the third -- we cover his "Moore's Law" theory of the Lisp Machine market. We touch on the "change in management" theory (where we talk about Mansfield under Darpa), and maybe we could expand this and make it a little clearer. We could use some info about the changes in the Strategic Computing Initiative in the late 80s when they phased out AI. Another "todo". I'm not sure what to think of his idea that AI winters are partially caused by infighting. Is there a second source on this? Something more central? ---- CharlesGillingham (talk) 11:25, 23 July 2008 (UTC)
What particularly stood out for me was his claim that the funding cuts in the 70's were masked by industrial investment in applications and only kicked in a decade later. Hendler here envisions a traditional pipeline model where knowledge from basic research is transferred to applied research and then on to real-world applications. What we, under the heading "The fall of expert systems", describe as a collapse due to the inherent brittleness of expert systems, Hendler explains by a lack of basic research to "feed the fire" of application development.
By the pipeline model, forcing AI researchers to "integrate" with other fields or work on concrete applications would have the same effect as cutting funding for basic research, which is what Hendler says at the end of the editorial. But in that case, embracing expert system applications could have done as much harm in the 80's as the claimed disowning of it, so I find it hard to evaluate his infighting theory. EverGreg (talk) 13:15, 23 July 2008 (UTC)
"Brittleness" comes from several sources: Lenat & Guha, Crevier, etc. so I think it's an interesting, respectable explanation. For Lenat, the underlying problem is commonsense knowledge—lack of commonsense knowledge makes them brittle. So, with some effort, you can weave this into Hendler's explanation: if DARPA had funded basic research into commonsense knowledge in the late 70s and early 80s, then, by the time expert systems were being used commercially in 1990, commonsense knowledge bases would have been available for them to access, and they wouldn't have been brittle. It's a stretch, but it sort of fits. (It's WP:SYNTHESIS, of course, so we can't use it.)
I think Hendler's pipeline argument is a specific aspect of a more general problem: the failure to fund basic research, or too much focus on specific problems. He is (with some effort) trying to blame this as the underlying cause of the late-80s AI winter, but of course the dates don't line up right, so the pipeline explains the delay. Like I say, I think it's a stretch.
DARPA, on the other hand, has had great success by focussing on specific problems rather than basic research. That's where the DART battle management system came from. It's an investment that really paid off. So they're happy to ignore Marvin Minsky, John McCarthy and anyone else interested in basic research. It's working for them. ---- CharlesGillingham (talk) 20:14, 6 August 2008 (UTC)

Reorganized the later sections a bit.

I've moved a few things around, but I didn't change any language. A couple of sections seemed a bit orphaned to me. I've retitled some sections so that it's clear that they are essential to the topic of the article as a whole. ---- CharlesGillingham (talk) 18:50, 6 August 2008 (UTC)

More TODO

I think we now have a place to add some of Hendler's ideas, if we want, in the retitled section Underlying causes. There are several other things I'd like to see covered, eventually:

  • Technical issues
    • Unexpected limits and brick walls: See History of AI#The problems.
    • "Wrong Paradigm" accusations: e.g., fans of sub-symbolic AI blame the limits of symbolic AI, fans of knowledge (like Minsky) blame the limits of sub-symbolic AI.
  • More specifics on hype: internal hype, like Newell & Simon's bad prediction (see History of AI#The optimism), and external hype, like the Life Magazine article that hyped Shakey and embarrassed everyone. (Source would be What Computers Can't Do.)
  • Institutional factors
    • Hendler's change-in-management ("new pharoah") theory.
    • The "failure to fund basic research" in the early eighties & Hendler's "pipeline" theory.   Done

-- CharlesGillingham (talk) 20:18, 6 August 2008 (UTC)

Translation

It is probably a good idea to mention how a system such as Google Translate has become so widespread that their browser banks on it as a marketing advantage. So the comeback should be mentioned somewhere. And by the way, if you take a look at the reverts done by the neural net of ClueBot on Wikipedia, you will see how neural nets are getting used on pages such as this one. So these need to be mentioned along with the funeral-of-AI issues. I do not want to bother to comment on Hendler, but his is just one perspective and he got it wrong in my view, e.g. the failure of Lisp was partly because it was hard to learn for some people (still is) and the marketing people over-hyped the whole field, etc. But that is another story. However, overall a nice article. Just needs some post-2000 updates. Cheers. History2007 (talk) 08:38, 6 February 2011 (UTC)

I've also heard that Rodney Brooks' "insect robot" approach has borne a lot of fruit; the article quotes him on the general topic without e.g. citing his particular successes, which I've been told include saving the MIT AI Lab when e.g. the also venerable Stanford SAIL was shut down.
It is another story, but the claim that Lisp was blamed for the failure of various over-hyped expert systems ventures is a part of Lisp's history ... and I'd add that its winter was very possibly harsher; it's only been in the last few years that it's experienced something of a spring, or at least a few green shoots. Hga (talk) 22:10, 7 February 2011 (UTC)
Yes, but in the end why languages fail or succeed is mostly speculation. When they were pumping Ada with big-time designers, who would have thought it would fizzle so quickly, and who would have thought Perl, with a part-time designer, would fly...? And the same may go for Linux, one boy in an attic... So who knows in the end. But AI is coming back. History2007 (talk) 22:29, 7 February 2011 (UTC)

Google as source of funding?

The article has a section on sources for funding on AI research, mostly traditional governmental funding agencies. It would be interesting to see whether Google possibly fits in that list, since the creation of AI is sometimes portrayed as being the primary long-term goal for that company. 78.73.90.148 (talk) 12:51, 27 December 2013 (UTC)