Talk:Computational creativity
This is the talk page for discussing improvements to the Computational creativity article. This is not a forum for general discussion of the article's subject.
This article is rated C-class on Wikipedia's content assessment scale. It is of interest to several WikiProjects.
Merge of artificial creativity
I have merged the stub article on "artificial creativity" into this much more comprehensive and well-researched article. ---- CharlesGillingham (talk) 04:06, 19 March 2008 (UTC)
No applications that aren't external?
Why are Applications and Examples listed only under External links? Can't there be applications already covered in Wikipedia?
Rmkeller (talk) 03:35, 14 September 2010 (UTC)
External links: Stephen L. Thaler, Ph.D.
In researching computer-generated music, I stumbled upon this guy and his websites (google 'Imagination Engines' if you want to waste some time). In a nutshell, he claims to have built 'a free-thinking neural architecture called a "Creativity Machine"' which - allegedly - is capable of doing almost anything, be it composing music, developing security systems or designing weapons. To the best of my knowledge, such a technology is - as of now, anyway ;) - mere science fiction, which leads me to the conclusion that Mr. Thaler is a fraud. Frankly, I am quite surprised that I have yet to find a source that debunks the whole thing. Be that as it may, given that there is also no credible source which would back up his extraordinary claims, I'm removing the link (which doesn't seem to work anyway). — Preceding unsigned comment added by 91.52.62.244 (talk) 00:19, 15 September 2012 (UTC)
- The comments below (refers to the comment above now that this has been moved to appropriate location --Ronz (talk) 21:40, 29 January 2013 (UTC)) are definitely inflammatory and violate Wikipedia guidelines. They also demonstrate a total lack of comprehension of the patent suite and accompanying academic publications. Quite frankly, the references appearing in this comment are irrelevant to this discussion. Keep in mind that the whole panorama of academic literature and prior patent art were considered in declaring these patents both novel and useful. Furthermore, some of the personalities listed blessed the technology as "watershed." That's probably why you haven't found anyone to "debunk the whole thing." Instead, Thaler took his theory (actually developed in the 70s) and has applied it across all disciplines for large commercial entities as well as the U.S. government. The patents have survived intense scrutiny across the world, including Europe. --Periksson28 (talk) 20:31, 26 January 2013 (UTC)
- I think we're on a slippery slope when the name calling begins and a life's work, substantiated by numerous publications and issued patents, is called into question: — Preceding unsigned comment added by Periksson28 (talk • contribs) 20:26, 26 January 2013 (UTC)
- On 21 January 2013, User:Periksson28 inserted lots of praise of Thaler. So I looked at Thaler's patent of 1996. I am not a patent lawyer, but I think it is invalid. Certainly not a "landmark patent" as claimed by User:Periksson28. The problem is that identical or extremely similar stuff got already published much earlier by others.
- What does the patent say? Thaler simply trains an artificial neural network that maps input patterns to target output patterns. This is and was a standard procedure. He also trains a second network whose inputs are the outputs of the first network. Its target outputs are reward signals given by human observers who judge outputs of the first network according to some problem-specific criterion. The so-called "creative" part is to randomly modify the weights of the first network a bit, to obtain modified outputs that cause high predicted reward through the second network. - Tivity (talk) 18:48, 24 January 2013 (UTC)
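(Editorial aside: for readers trying to follow the dispute, the two-network scheme described in the comment above can be sketched in a few lines of Python. This is a toy illustration of the general idea only; the network sizes, the stand-in critic, and the target output are invented here, and the code is not taken from any patent or paper.)

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(w, x):
    """One-hidden-layer tanh network; w is a dict of weight matrices."""
    h = np.tanh(x @ w["W1"])
    return np.tanh(h @ w["W2"])

def init_weights(n_in, n_hid, n_out):
    return {"W1": rng.normal(0, 0.5, (n_in, n_hid)),
            "W2": rng.normal(0, 0.5, (n_hid, n_out))}

# Network 1 ("generator"): maps input patterns to output patterns.
gen = init_weights(4, 8, 3)

# Network 2 ("critic") would be trained on human reward judgments; here a
# fixed scoring function stands in for that pretrained network.
target = np.array([0.5, -0.2, 0.3])       # invented "preferred" output
def critic(out):
    return -np.sum((out - target) ** 2)   # higher = higher predicted reward

# The disputed "creative" step: randomly perturb the generator's weights
# and keep a perturbation whenever the critic's predicted reward improves.
x = rng.normal(size=4)
init_score = critic(mlp_forward(gen, x))
best = init_score
for _ in range(2000):
    trial = {k: v + rng.normal(0, 0.05, v.shape) for k, v in gen.items()}
    r = critic(mlp_forward(trial, x))
    if r > best:
        gen, best = trial, r

print(best)  # climbs from init_score toward 0, the critic's maximum
```

The loop accepts a random weight perturbation only when the second network's predicted reward improves, which is the "random search instead of gradient descent" point being argued in this thread.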
- You really haven't done a thorough job of reading then. Periksson28 (talk) 01:21, 29 January 2013 (UTC)
- However, many other researchers have applied and published this simple idea before 1996 in various contexts. To name a few:
- A dual backpropagation scheme for scalar-reward learning. P Munro - Ninth Annual Conference of the Cognitive Science Society, 1987
- Neural networks for control and system identification. PJ Werbos - Decision and Control, 1989.
- Forward models: Supervised learning with a distal teacher. MI Jordan, DE Rumelhart - Cognitive Science, 1992.
- The truck backer-upper: An example of self-learning in neural networks. D Nguyen, B Widrow - IJCNN'89, 1989.
- A possibility for implementing curiosity and boredom in model-building neural controllers. J Schmidhuber - SAB'91, 1991.
- Neural networks for control. WT Miller III, RS Sutton, PJ Werbos - 1995.
- None of the earlier studies is even mentioned by Thaler. How the patent was able to pass the checks I don't know. (This further undermines my trust in the patent system.) Tivity (talk) 18:48, 24 January 2013 (UTC)
- In fairness to those who went before, all valid prior art has been referenced through the first figure, which links to the USPTO. I personally trust that patent office's opinion, especially on the big-picture patents.Periksson28 (talk) —Preceding undated comment added 00:59, 29 January 2013 (UTC)
- I travelled to meet Dr Thaler in 1997 after the 1994 patent filing was upheld. Amongst Dr Thaler's prototypes were novel neural networks which certainly were and are landmarks in the AI field. Dr Thaler is the leader in machine vision worldwide with his ANNs guarding airfields and assisting the military in robotic vision. The 1994 patent (issued in 1997) was and is unique in that the prototypes were the first ANNs to conduct creative trial and error, solving problems that had never before been solved by ANNs and which the list of documents cited above totally fails to take into consideration. Dr Thaler is the most highly respected scientist in his field and attempts on Wikipedia by ill informed non-experts to call his patents or credibility into question are uncalled for. — Preceding unsigned comment added by Deliberater (talk • contribs) 02:25, 27 January 2013 (UTC)
- Apart from the lack of novelty, it seems far-fetched to call these random weight changes "creativity". The section on Thaler calls for deletion or at least for a major rewrite. Tivity (talk) 18:48, 24 January 2013 (UTC)
- Many of us out here don't think that "Formal Theory of Creativity" is at all novel. The decision as to the novelty of Thaler has already been made.Midnightplunge (talk) 04:34, 27 January 2013 (UTC)
- You still don't get it! Droves have! Midnightplunge (talk) 04:11, 27 January 2013 (UTC)
- But patents are definitely peer-reviewed and exhaustively researched by patent examiners. The novelty was unanimously substantiated by patent offices around the world. The pure academics weren't sure enough to brave such criticism Midnightplunge (talk) 03:38, 27 January 2013 (UTC)
- Since the text on Thaler's patent by User_talk:Periksson28 read like an advertisement, I removed unsourced claims etc. I removed some references without peer review (very hesitantly - more needs to be done here). I inserted references to earlier work by others mentioned above. Probably this part needs additional work, but it's a start. Tivity (talk) 19:10, 25 January 2013 (UTC)
- Decimation of periksson28's contribution is an unfortunate loss to Wikipedia readers. Many of the sources you have "hesitantly" eradicated are from Ph.D.s who are acting independently of academic guilds. It seems that you have an agenda.Midnightplunge (talk) 04:11, 27 January 2013 (UTC)
- It seems pretty obvious that Midnightplunge (talk • contribs) and Deliberater (talk • contribs) are sockpuppets of Periksson28 (talk • contribs). You chose to reinstate the advertisement of Thaler's patent, deleting all references to those who published similar two-network systems earlier. I'll put up the template saying that the contents are disputed, hoping others with some expertise in this area will help to clear this up. Tivity (talk) 10:26, 27 January 2013 (UTC)
- Tivity, and the same applies for the "Formal Theory" section. It does not pay tribute to all those who have built network cascades involving multiple algorithms or networks, making it appear that JS has somehow done something unique. There is nothing novel about this theory and it seems to be a cheap imitation of "Comprehensive Theory." On the other hand, Thaler's work has been repeatedly (and I emphasize, repeatedly) judged unique and hence patentable. Also, "Wow-effect" is not standard scientific vocabulary among AI practitioners and it appears to be a marketing term. Further, Periksson28 has presented no inappropriate references. They represent peer-reviewed papers, conference articles, and reputable press (e.g., Scientific American and the Pulitzer's flagship newspaper). With regard to Rumelhart et al, I see no connection with Thaler's very essential synaptic perturbation scheme, nor does DER present all the inventive elements necessary for an invention, nor does he discuss the core features of a general theory of creativity. I would suggest a take-down of the "formal theory." It's just silly! Midnightplunge (talk) 12:05, 27 January 2013 (UTC)
- Also, I have never before seen an article containing retroactive dates in it (1990-2010). Jürgen (2010), Formal Theory of Creativity, Fun, and Intrinsic Motivation (1990-2010). IEEE Transactions on Autonomous Mental Development, 2(3):230-247! --What a bogus way of creating revisionist history! The paper was published in 2010 for God's sake and that's its priority date.Midnightplunge (talk) 13:08, 27 January 2013 (UTC)
- Well, did you read the formal theory paper at all? It is a survey of publications since 1990. The basic principle is: The intrinsic reward of a reinforcement learning module is the learning progress (the wow-effect) of a separate data encoder or predictor module. The first implementation of this was published in 1991: J. Schmidhuber. Curious model-building control systems. In Proc. International Joint Conference on Neural Networks, Singapore, volume 2, pages 1458-1463. IEEE, 1991. There both modules were neural networks.
- Thaler also used two networks. But he did this much later, in 1997, and he did not even implement Schmidhuber's principle of curiosity/creativity, but something even older: He essentially re-implemented the distal teacher approach of the 1980s (the name was coined in 1992, but the principle is older). Here the goal is to find weights for one neural network such that its outputs cause high predicted reward through a second network, which was first trained to model human preferences. (No intrinsic reward like in Schmidhuber's systems since 1991.) To find good weights of the first network, Thaler uses random weight changes. The authors below describe identical setups (just like in the figure that you posted), but they also use search algorithms more sophisticated than random search, notably gradient descent. Their papers predate Thaler's by almost a decade:
- A dual backpropagation scheme for scalar-reward learning. P Munro - Ninth Annual Conference of the Cognitive Science Society, 1987
- Neural networks for control and system identification. PJ Werbos - Decision and Control, 1989.
- Forward models: Supervised learning with a distal teacher. MI Jordan, DE Rumelhart - Cognitive Science, 1992.
- The truck backer-upper: An example of self-learning in neural networks. D Nguyen, B Widrow - IJCNN'89, 1989.
- All these authors could claim for their systems the same attributes that your rewrites of the article claim for Thaler's much later approach. A quick search on Google Scholar shows that they also got many citations for their work. Numerous other researchers have described and applied variants thereof. On the other hand, Thaler's later patent got very few citations. This seems to indicate that your rewrites and numerous citations of Thaler do not really reflect the general opinions of the field. I am not suggesting that you delete all of what you wrote, but obviously this must be placed in context, and you'll have to be very careful with expressions such as "landmark patent." Tivity (talk) 16:09, 27 January 2013 (UTC)
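(Editorial aside: the "learning progress as intrinsic reward" principle referenced above, where the reward is the improvement of a separate predictor module, can be illustrated with a deliberately tiny sketch. The toy world and all constants are invented for illustration; this is a paraphrase of the principle, not code from the 1991 paper.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy world: observations are noisy samples around an unknown constant.
true_mean = 0.7
def observe():
    return true_mean + rng.normal(0, 0.05)

# Predictor module: a single learned scalar, updated online.
pred, lr = 0.0, 0.2
intrinsic_rewards = []
for _ in range(50):
    obs = observe()
    err_before = (pred - obs) ** 2
    pred += lr * (obs - pred)            # one learning step
    err_after = (pred - obs) ** 2
    # Intrinsic reward = learning progress: how much this update reduced
    # the predictor's error on the current observation.
    intrinsic_rewards.append(err_before - err_after)

# Early rewards are large (much to learn); once the world is predictable,
# progress -- and hence the curiosity reward -- dries up.
print(intrinsic_rewards[0], intrinsic_rewards[-1])
```

A reinforcement learner fed these intrinsic rewards is drawn toward whatever its predictor is currently improving at, which is the "curiosity" mechanism under discussion, as opposed to maximizing a second network's predicted external reward.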
- Sorry, but there is little, if any relevance, to the research mentioned, nor is this distal learning. I don't think Thaler has been in the citation game. He just invents ahead of the pack.Midnightplunge (talk) 17:56, 27 January 2013 (UTC)
- The "Wow-factor" is a totally unscientific term and to many represents an attempt at commercial name branding. Now, who repackages AI under a new name? Have you read Thaler's patents and their emphasis upon attentional consciousness (e.g., curiosity)?Midnightplunge (talk) 18:14, 27 January 2013 (UTC)
- Call the wow-factor whatever you want - what counts are the formal details. To summarize, the two-network systems of Munro, Jordan, Werbos, Nguyen work like this. The first network is a function f from R^m to R^n: f(in)=out. The second net is a function g from R^n to R^s: g(out)=r. The parameters of g are set through training on a training set, where r typically denotes reward signals (s=1). Now some search method is used to discover parameters of f that maximize r. In the 1980s, Munro, Jordan, Werbos, Nguyen et al used gradient descent to do this. In 1997, ten years later, Thaler did exactly the same, except that he used random search instead of gradient descent. What Schmidhuber did in 1991 is something else: he measured changes (or first derivatives) of f's parameters and errors during learning, and used that to generate additional intrinsic curiosity rewards. Anyway, all of this happened long before the first publication of Thaler. All your claims that Thaler was first in any of this are clearly wrong. I assume that's why he isn't cited, despite your astonishing claim that "he is the leader in machine vision worldwide." If you think my analysis of his patent is not correct, then point exactly to where you think it is not correct. Tivity (talk) 20:05, 27 January 2013 (UTC)
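(Editorial aside: to make the formal comparison above concrete, the sketch below uses the same notation: a generator f from R^m to R^n with trainable parameters, and a fixed critic g from R^n to R standing in for a network pretrained on reward signals. It tunes f's parameters to maximize g(f(in)) once by gradient ascent and once by pure random search. The linear f, the quadratic g, and all constants are invented for illustration.)

```python
import numpy as np

rng = np.random.default_rng(2)

m, n = 4, 3
x = rng.normal(size=m)
x = 1.5 * x / np.linalg.norm(x)          # fix the input scale so the
                                         # gradient step size is well behaved
target = np.array([1.0, -0.5, 0.25])     # invented "high-reward" output

def f(W, x):
    """Generator f: R^m -> R^n, linear for simplicity."""
    return W @ x

def g(out):
    """Critic g: R^n -> R, standing in for a net pretrained on rewards."""
    return -np.sum((out - target) ** 2)

# 1980s-style approach: gradient ascent on r = g(f(x)) w.r.t. W.
# For this quadratic critic, dr/dW = -2 * outer(f(x) - target, x).
W_grad = np.zeros((n, m))
for _ in range(200):
    err = f(W_grad, x) - target
    W_grad += 0.05 * (-2.0) * np.outer(err, x)

# Random-search approach: perturb W and keep only improvements.
W_rand = np.zeros((n, m))
best = g(f(W_rand, x))
for _ in range(5000):
    trial = W_rand + rng.normal(0, 0.1, W_rand.shape)
    r = g(f(trial, x))
    if r > best:
        W_rand, best = trial, r

print(g(f(W_grad, x)), best)  # both close to 0, the critic's maximum
```

Both searches end near the critic's maximum; the difference being debated in this thread is only which search procedure drives the weight changes of the first network.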
The term "wow-factor" is, as the section admits, not an accepted one in connectionism, and one of the four personalities you mention from the '80s was involved in the due diligence of the patent. His assessment, and I am quite sure of how this event was played back to me, was "watershed." That's because no gradient descent, no first derivatives, no pedantic human meditations/incantations of any kind were required. The creativity machine figures that all out for itself! And what you describe as a random process isn't really that. In the public demos I've seen, the whole process can be initiated through recitation of the Gettysburg address to the system, if need be. Thereafter, the architecture enters into a mode that can only be termed systematic. Nevertheless, independence from these past clever but faded approaches helps to a large extent to propel the patents. The system organizes itself to achieve its creative brilliance. That's why it was named "device for the autonomous generation of useful information," and not "system for citation and tenure assurance!" ;-) I'm sorry, but court was held long ago... I don't know what the truth on machine vision is since there are a lot of clever and industrious approaches to it, not to mention towering egos. Nevertheless, his company does well because of skipping the human involvement part. He has outperformed whole teams of machine vision programmers, typically getting the job done at one-tenth the cost. His systems do well, but a whole career could be dedicated to discussing the underlying mechanics, and he is embroiled in a more pragmatic game right now.Periksson28 (talk) 21:36, 27 January 2013 (UTC)
- I don't mind deleting "wow-factor" which is just a shorthand for learning progress. But I think you must cite the references to earlier nearly identical work (which you deleted), and you really need a source to support your claim that using random weight changes on a pre-trained network instead of gradient descent (like the earlier approaches for the same architecture) is "watershed". Random search is normally viewed as the most naive of all search algorithms. Tivity (talk) 18:25, 28 January 2013 (UTC)
- It was Paul Werbos himself who rightfully said "This is watershed!" and "This is the successor to genetic algorithms!". Look up Robert Hecht-Nielsen and what theories he describes - he got them all from Dr. Thaler first, who is the true creator of Confabulation Theory. I think you are interpreting it from the viewpoint of outdated neural network approaches. There it may be "naive" to use "random search", but the noise is not random in the usual sense. It is a statistical average of synaptic perturbation resulting in the "Virtual Input Effect" described by Dr. Thaler. If you do research, you can find out how this Cavitation Rate is defined and what its optimal value is. It can be adjusted by the networks at will(!). The network undergoing cavitation (imagitron) has been trained beforehand and contains the implicit relationships of the information it was exposed to. Thanks to cavitation it can now try out every single possibility, jumbling the trained memories around until the desired solution is finally captured by the second, or other watching networks. If we instead use more imagitrons that are each generating their own unique activation patterns (ideas) then we can achieve juxtapositional invention. There are several types of Creativity Machines, the more complex they get, the more human-like they become. It's only a question of time and resources. If the goal is to create totally autonomous, naturally behaving AI, we can't rely on methods like "gradient descent". They are called "Creativity" Machines for a reason, it's the most brain-based and creative system available. Chaos (Cavitation) and consciousness (Neural Networks) are inextricably linked, the refusal to see this is why popular AI tends to fail.Terraforming (talk) 04:52, 25 March 2013 (UTC)
- This section looks as factual as can be and all references adhere to Wikipedia guidelines, so what's the beef?Periksson28 (talk) 23:47, 27 January 2013 (UTC)
- I'll summarize the issues in a list below. Tivity (talk) 20:04, 28 January 2013 (UTC)
- Quite frankly I like the general theory of creativity section myself. I've checked the references & they look ok 2 me. This is really a sweet addition to wikipedia. The formal theory section looks more like advertisement. no, i'm not a sockpuppet of anyone but myself.KeenOnDetail (talk) 00:56, 28 January 2013 (UTC)
- Join the crowd, KeenOnDetail!Midnightplunge (talk) 01:04, 28 January 2013 (UTC)
List of problems with the recent section on Thaler's "Creativity Machine"
editDue to their extremely focused edits so far, the following editors and supporters of the recent section on Stephen Thaler are suspected to be sockpuppets of a single person: Periksson28 (talk • contribs), Midnightplunge (talk • contribs), Deliberater (talk • contribs), KeenOnDetail (talk • contribs). I made a list of problems with their edits:
- Deletion of previous very similar work by others: Thaler's 2 network system was published in 1997. But in fact it is exactly the earlier 2 network architecture of Paul Munro[1], Paul Werbos[2], D. Nguyen and Bernard Widrow[3], published in the 1980s, and called "distal teacher" in 1992 by Michael I. Jordan and David Rumelhart[4]. The only difference is: Thaler re-trains the first network by random search instead of gradient descent. References to prior art got deleted though.
- Invalid sources: Apparently in reaction to the priority issue above, a link to an interview of 2009 was inserted. There Thaler himself refers to his own unpublished thoughts of 1974.[5] This cannot serve as a valid reference. Thaler's first valid document on this stems from 1997.
- Grand claims: The section title claims this to be a general theory of creativity. But in fact it's just about random mutations of a pre-trained feedforward neural network. (Unlike earlier work by others, this does not even address the general setting of active agents embedded in a dynamic environment.)
- Lack of balance: Thaler's patent is very rarely cited in comparison to works of other authors in this field (check Google Scholar). This indicates that his contributions are not widely considered significant. But his section is now by far the biggest. In fact, Thaler's section has become a variant of Thaler's web page. This is causing a major POV issue, IMHO.
Presently, the text on Thaler's work dominates the entire article. A footnote would be more appropriate. Tivity (talk) 20:04, 28 January 2013 (UTC)
- I am not a sock puppet. I know Dr Thaler personally and met with him in 1997 when the first patent was granted. The debates on Wikipedia are pointless if someone sock puppets or if someone accuses distinct people of sock puppeting. I reiterate that all the conjecture over Thaler's work is pointless without taking into account the prototypes which Thaler made at the time of his patent application. I have seen the prototypes in action and along with a detailed search at that time and since have found no one in academia who has replicated Thaler's work to this day. Deliberator. Check the IP address. 122.57.49.190 (talk) 03:34, 29 January 2013 (UTC). Signed in Deliberater (talk) 03:36, 29 January 2013 (UTC)
- Fine, I'll take this in good faith. But what about users Periksson28 and Midnightplunge who inserted hundreds of Thaler-promoting edits, and nothing else. And are there any sources that support your claims on Thaler's prototypes? Tivity (talk) 17:57, 29 January 2013 (UTC)
Periksson28 (talk) responded as follows:
--- Paranoia Runs Deep Periksson28 (talk) 21:50, 28 January 2013 (UTC)
- Deletion of previous very similar work by others: Thaler's 2 network system was published in 1997. But in fact it is exactly the earlier 2 network architecture of Paul Munro, Paul Werbos, D. Nguyen and Bernard Widrow, published in the 1980s, and called "distal teacher" in 1992 by Michael I. Jordan and David Rumelhart. The only difference is: Thaler re-trains the first network by random search instead of gradient descent. References to prior art got deleted though.
---Two Network Systems: Two network systems and larger have abounded for quite some time. What sets Creativity Machines apart is what drives them. That phenomenon was described in the peer-reviewed journal, Neural Networks, in 1995, as referenced in the article. You should read more... Besides, some of these people know and respect Thaler.
Adaptive vs Creative: I don't think that any of the works cited would be considered creative. The researchers would claim that they are adaptive, their very important contributions having more relevance to an adaptive systems Wikipedia entry. Besides, none of these systems use Thaler's mechanism to achieve even adaptation.
---Sorry, Irrelevant - But all these cited works were considered irrelevant to patent examiners skilled in the art who were consulting with the AI/connectionist community. Just reading your account of Thaler's work shows that you have no grasp of it. "Thaler re-trains the first network by random search instead of gradient descent." That's nonsensical in itself. How do ANNs train themselves via random search? Periksson28 (talk) 21:50, 28 January 2013 (UTC)
- Invalid sources: Apparently in reaction to the priority issue above, a link to an interview of 2009 was inserted. There Thaler himself refers to his own unpublished thoughts of 1974. This cannot serve as a valid reference. Thaler's first valid document on this stems from 1997.
---Well Researched - Cohen's article was well researched, and written at a time when there weren't any futile challenges to Thaler's approach, so motive simply wasn't there. The article simply stands for the record. Nevertheless, it is a valid Wikipedia reference and supports the fact that the patents were backed by over 30 years worth of his research.Periksson28 (talk) 21:50, 28 January 2013 (UTC)
- Grand claims: The section title claims this to be a general theory of creativity. But in fact it's just about random mutations of a pre-trained feedforward neural network. (Unlike earlier work by others, this does not even address the general setting of active agents embedded in a dynamic environment.)
Very Justifiable Claims: "random mutations of a pre-trained feedforward neural network"! You still haven't fathomed the patents. "Active agents in a dynamic environment"! I would say that extremely clever robotic swarms on a battlefield is a dynamic enough environment. You need to better research this issue since you seem to ignore the bigger picture here.
- Lack of balance: Thaler's patent is very rarely cited in comparison to works of other authors in this field (check Google Scholar). This indicates that his contributions are not widely considered significant. But his section is now by far the biggest. In fact, Thaler's section has become a variant of Thaler's web page. This is causing a major POV issue, IMHO.
Presently, the text on Thaler's work dominates the entire article. A footnote would be more appropriate.
---Truth of the Matter: Let me explain this to you in simple terms - In academia, guilds form that are essentially self-congratulatory societies. George says that he'll publish his friend John's paper and vice versa and openly profess each other's boundless and exclusive expertise in an area. (Ironically, that's how it was all explained to me by an academic Ph.D. and it all makes sense.) Google Scholar is infested by such types with university PR departments solidly backing them. Pioneers like Dr. Thaler will submit sweeping, yes sweeping discoveries, in the form of patent applications and then challenge objective and aloof, total strangers to take their best aim at the concept, shooting it down in flames if they even get a whiff of prior art from self-promoting academic institutions.Periksson28 (talk) 21:50, 28 January 2013 (UTC)
A Major Player in the Field: The literature is overwhelming, as you can see, in terms of reputable press and peer-reviewed publications. JS only shows just a couple of references. That Thaler boy has been extremely busy, as the record clearly demonstrates! Periksson28 (talk) 21:50, 28 January 2013 (UTC)
---Thanks, that was fun! And God Bless Wikipedia for the opportunity to broadcast something so extremely profound, objectively vetted, and well documented. Periksson28 (talk) 21:50, 28 January 2013 (UTC)
- Periksson, you write that it is nonsensical in itself to say that Thaler re-trains the first network by random search instead of gradient descent. But that's what he does, according to the 1997 patent. As I said before, he uses the earlier 2 network architecture of Munro, Werbos, Nguyen, Widrow, published in the 1980s, and called "distal teacher" in 1992 by Jordan and Rumelhart. One artificial neural network maps input patterns to target output patterns. The outputs of the first network are the inputs of a second network, whose target outputs are reward signals given by human observers who judge outputs of the first network according to some problem-specific criterion. Now the goal is to modify the weights of the first network, to obtain modified outputs that cause high predicted reward through the second network. The second network is normally trained by gradient descent. To modify the weights of the first network, the authors of the 1980s used gradient descent as well. Thaler's 1997 patent uses random mutations instead. That's the only significant difference, if any. Wouldn't you agree? Tivity (talk) 08:41, 29 January 2013 (UTC)
You have a twisted interpretation of that patent. There's no mention of such training.Periksson28 (talk) 18:45, 29 January 2013 (UTC)
I don't agree in the least.Midnightplunge (talk) 17:06, 29 January 2013 (UTC)
- Then where exactly is my analysis incorrect? Tivity (talk) 17:57, 29 January 2013 (UTC)
First of all, I think that your description is too generous and a kluge of several separate efforts, so I won't touch it other than to say that Thaler was established as first to invent. The feature you subjectively call minor is in fact major.Periksson28 (talk) 18:41, 29 January 2013 (UTC)
- So at least you agree that Thaler's architecture was not new, and that his weight change algorithms were not new, except that he used random weight changes for the first network, instead of backpropagation-driven weight changes only.
- Now you say this was a major feature, not just a minor twist. But in the 1980s and early 1990s, many others also used random weight changes in conjunction with backpropagation. This was considered a standard way of escaping local minima, or getting more robust networks. Noise on the synapses also was widely used in the field of neuroevolution. There are lots of references on this that predate Thaler. How could anybody consider this novel? Sorry, but like so many other patents, Thaler's 1997 patent would not be able to withstand an attack if anybody was interested in challenging it.
- Anyway, an encyclopedic article must mention the prior work. I cannot believe that you keep deleting references to it. Why should you want to do this? Please don't. Tivity (talk) 19:08, 29 January 2013 (UTC)
You are putting words in my mouth. Cavitation is not annealing noise and here is what the experts identified as prior researchers: Matsuba et al., Skeirik, Carpenter et al., Chen, Wittner et al., Yoda, Tawel, Masui et al., Mead et al., Gersho et al., and Hutcheson et al. None of these admirable works served to disqualify Thaler.Periksson28 (talk) 19:24, 29 January 2013 (UTC)
- Which of these papers describe systems of two neural networks, where the first network sends its outputs to the second network for evaluation (like in the work of Munro, Werbos, Nguyen, Widrow, Jordan, Rumelhart, Thaler)? Tivity (talk) 08:00, 30 January 2013 (UTC)
You fail to grasp that the methodology was totally unique. The journal Neural Networks doesn't publish unoriginal, me-too work. The phenomenon discussed there is the basis of Creativity Machines and underlies all of creativity. The concept is not based upon the control techniques you have mentioned.Periksson28 (talk) 19:38, 29 January 2013 (UTC)
- You keep claiming there was something novel or unique. But you have been unable to say what it is, and how it is different from what was published by the other authors I mentioned. (Most journal articles describe minor improvements of previous work; very few can claim novel breakthroughs.) I summarize Thaler's approach again. The weights of Network 1 are changed to maximize the evaluation of its outputs through Network 2. Known as distal teaching long before Thaler published or patented anything. Since the academic community has largely ignored Thaler's writings, the size of your huge section on Thaler does not match the perceived importance of his work. This makes the present article unbalanced. Tivity (talk) 18:18, 31 January 2013 (UTC)
References
- ^ A dual backpropagation scheme for scalar-reward learning. P Munro - Ninth Annual Conference of the Cognitive Science Society, 1987
- ^ Neural networks for control and system identification. PJ Werbos - Decision and Control, 1989.
- ^ The truck backer-upper: An example of self-learning in neural networks. D Nguyen, B Widrow - IJCNN'89, 1989.
- ^ Forward models: Supervised learning with a distal teacher. MI Jordan, DE Rumelhart - Cognitive Science, 1992.
- ^ Cohen, A. M. (2009). "Stephen Thaler’s Imagination Machines," http://www.wfs.org/May-June09/Thalerpage.htm
Formal Theory of Creativity (Thaler again)
- Note of 30 January 2013: This section was created after sections (below) discussing the neutrality of the text on Stephen Thaler's work, and the deletion of earlier relevant work by other authors. It was started by the following single purpose accounts whose exclusive activity so far was to promote rarely cited work by Thaler: Periksson28 (talk • contribs), Midnightplunge (talk • contribs), KeenOnDetail (talk • contribs). Users Midnightplunge and KeenOnDetail were created right after the Thaler discussion started, raising suspicions of sock puppetry. Tivity (talk) 07:55, 30 January 2013 (UTC)
- (I moved this section to be in better chronological order. The note above refers to sections both below and above this one now. --Ronz (talk) 17:50, 30 January 2013 (UTC))
- Moved it again. --Ronz (talk) 17:04, 19 May 2013 (UTC)
I'm an AI theorist and I really don't see this as a comprehensive model of creativity, and I'm not quite sure why it's called a "formal theory." That to me means logical or symbolic (GOFAI, as someone has added). I think it is a worthy addition to this article in terms of attentional mechanisms and curiosity, but it does not cover the entire gamut of creative cognition. Thaler has come much closer to that goal, and the wealth of information he has contributed to the article reflects his 35 years of not just theorizing but producing practical results with the military, government, and industry. Further, there are many ways of introducing learning, attention, and curiosity, but to name his differential approach the "wow-effect" challenges my patience.
I have seen Thaler say this (wow-effect) repeatedly in his presentations, but obviously with tongue in cheek. His systems achieve the same without algorithms conceived by humans. In that way it all seems much more biomimetic. Otherwise we would all have homunculi in our skulls!
I concur with periksson28 in what appears to be a sensible compromise.Midnightplunge (talk) 02:07, 29 January 2013 (UTC)
- midnightplunge, you seem to be more generous than I care to be at the moment. I've seen some venomous and unwarranted attacks on the 'general theory' stuff. I would suggest that the 'formal' theory stuff needs a major overhaul because there's nothing at all informative about it. In fact it seems obvious to me, and the approach one of many. --- Notice that the 'general' section isn't entitled "The General Theory of Creativity," so there is room for competition and not ad hominem attacks by the troll.Periksson28 (talk) 02:54, 29 January 2013 (UTC)
- The troll is well advised to help mend the so-called formal theory section rather than rant against what I consider the most intelligent part of this discussion, the Thaler part. I see nothing novel about this theory of attention and curiosity and I doubt that anyone could receive patents on it. I don't think standards of novelty, utility, and non-obviousness are as strict in academia as in the patent world. They just publish stuff and their friends eat it up. KeenOnDetail (talk) 03:44, 29 January 2013 (UTC)
- I honestly can't find evidence for creativity in the 1991 reference. I see surprise, novelty, prediction, boredom, reinforcement learning, and buzz words but nothing more in the way of generative AI. I suspect that this is a bogus attempt at a temporal foothold, but the paper does not really address creativity. I might start to consider the later papers relevant.Midnightplunge (talk) 05:06, 29 January 2013 (UTC)
To clarify the temporal order of the edits on this talk page: This section was created after the section further down, "List of problems with the recent section on Thaler's Creativity Machine," like some sort of retaliation. To repeat: Thaler's 2 network system was published in 1997. But in fact it is exactly the earlier 2 network architecture of Paul Munro[1], Paul Werbos[2], D. Nguyen and Bernard Widrow[3], published in the 1980s, and called "distal teacher" in 1992 by Michael I. Jordan and David Rumelhart[4]. One artificial neural network maps input patterns to target output patterns. The outputs of the first network are the inputs of a second network, whose target outputs are reward signals given by human observers who judge outputs of the first network according to some problem-specific criterion. Now the goal is to modify the weights of the first network, to obtain modified outputs that cause high predicted reward through the second network. The second network is normally trained by gradient descent. The papers of the 1980s also use gradient descent to modify the weights of the first network. Thaler's 1997 patent uses random mutations instead. That's the only significant difference, if any. I think it's a joke to call this a general theory of creativity, and to rewrite this article such that it is dominated by references to little-cited work of Thaler. The above references to prior art got repeatedly deleted by the creator of the present section. Tivity (talk) 08:28, 29 January 2013 (UTC)
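For readers following this exchange, the two-network setup described above can be sketched in a few lines of Python. This is only an illustrative toy under invented assumptions (linear networks, a made-up reward criterion, hypothetical variable names), not code from any of the cited papers or patents: Network 2 (the critic) is trained to predict reward from observed outputs, and Network 1's weights are then adjusted either by gradient ascent through the frozen critic (the 1980s "distal teacher" route) or by keeping random weight perturbations that the critic rates higher (the variant this thread attributes to Thaler's 1997 patent).

```python
import numpy as np

rng = np.random.default_rng(0)

# Network 1: maps a 2-d input to a 3-d "idea". Kept linear for brevity.
W1 = rng.normal(size=(3, 2))
x = np.array([0.5, -0.3])

# Hidden, problem-specific criterion applied by the human judges; the
# critic only ever sees (output, reward) pairs, never this function.
# (Invented for the demo.)
target = np.array([1.0, -1.0, 0.5])
def true_reward(y):
    return -np.sum((y - target) ** 2)

# Network 2 (the critic): predicts reward as -||y - c||^2 with a
# learnable "preference" vector c, chosen so the sketch stays convergent.
c = np.zeros(3)
def predicted_reward(y):
    return -np.sum((y - c) ** 2)

# 1) Train the critic by gradient descent on observed rewards.
for _ in range(2000):
    y = rng.normal(size=3)
    err = predicted_reward(y) - true_reward(y)
    c -= 0.01 * err * 2 * (y - c)            # d(err^2)/dc, up to a factor

# 2a) 1980s "distal teacher": adjust W1 by gradient ascent through the
#     frozen critic (cf. Munro, Werbos, Nguyen/Widrow, Jordan/Rumelhart).
for _ in range(500):
    y = W1 @ x
    W1 += 0.05 * np.outer(-2 * (y - c), x)   # chain rule: d(pred)/dW1

# 2b) The variant this thread attributes to Thaler's 1997 patent:
#     random weight perturbations, keeping those the critic rates higher.
W1b = rng.normal(size=(3, 2))
best = predicted_reward(W1b @ x)
for _ in range(3000):
    cand = W1b + 0.05 * rng.normal(size=W1b.shape)
    r = predicted_reward(cand @ x)
    if r > best:
        W1b, best = cand, r
```

Both routes drive Network 1's output toward whatever the critic has learned to rate highly; the only difference between 2a and 2b is the weight-change mechanism, which is the point of contention in this thread.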
- Thaler does not use "random mutations of weights." In the preferred embodiment of his patents, transient bursts occur at the synapses (if need be, in combination with other such disturbances on neurons) driving the net into a confabulatory regime rich in potential new ideas. Oftentimes, the critic, which may or may not be trained by human mentorship, adjusts such cyclic noise so as to optimize the turnover of valuable notions. Sorry, but you seem to be stuck in the paradigm rut of genetic evolution of weights and gradient descent. With all due respect, the patents are major and meticulously documented.Midnightplunge (talk) 15:11, 29 January 2013 (UTC)
- Well, I did print the 1997 patent, and that's what it boils down to: random mutations of the weights. The same as adding noise to the synapses. That's what your "transient bursts at the synapses" really mean. Tivity (talk) 17:35, 29 January 2013 (UTC)
- Everyone is entitled to their opinion. I could contend that the Wright brothers did not discover powered flight, but repeatedly, the Creativity Machine has been granted patent status through intense scrutiny by AI specialists around the world. I suspect that you have not delved into them in enough detail to understand their function and novelty, judging from your knee-jerk reaction. Nor do you appreciate the fact that the effect that drives them was published in 1995 in the revered journal Neural Networks, and not in a conference paper. Thaler has taught the technology to several IEEE and IJCNN conferences without one iota of objection from the academic audience. He has lectured to many CS, ECE, and EE university departments in the US and Europe without drawing fire. One would think that if the technology existed so long ago that the "ancients" you mention would have left their academic ranks for industry and the big bucks to invent and discover everything imaginable (Thaler has). You are totally wrong and in the minority in your assessment. You are correct, however, in the lineage of the patent from the pioneers of machine learning (e.g., backpropagation). Midnightplunge (talk) 14:45, 29 January 2013 (UTC)
- You mention Thaler's 1995 paper in addition to his 1997 patent. But all the other authors I mentioned published before 1995. (And, as always in science, lots of stuff that passes peer review and gets published or patented is challenged later. We all know that many patents wouldn't stand a chance in patent court if they were challenged on account of prior art. On the other hand, most such patents are never challenged because they are commercially irrelevant anyway.)
- Why did you delete my references to extremely similar work on 2 network systems by Munro, Werbos, Nguyen, Widrow, published in the 1980s? You have to agree that the only difference is in the weight change method of the first network, because even Thaler (1997) trains the second network by backpropagation. The other researchers used backpropagation to change the weights of the first network, too. Thaler instead performs what you called "transient bursts at the synapses." In other words, he adds noise to the synapses. Also known as random weight mutation. To call this tiny little difference "creativity" seems ridiculous. The earlier work must be mentioned. The claim "general theory of creativity" must be toned down. Otherwise the article will keep looking as if it was designed to promote Thaler's work. Tivity (talk) 17:35, 29 January 2013 (UTC)
- You are comparing apples and oranges and I can point to numerous others who have played with two network systems, as you call them, including several Japanese inventors. Besides, even the 1997 patent goes well beyond two to N networks.Periksson28 (talk) 19:03, 29 January 2013 (UTC)
- Did these Japanese inventors also publish 2 network systems with the same particular purpose (in the sense of Munro, Werbos, Nguyen, Widrow, Thaler)? Where Network 1 tries to maximize the evaluation of its outputs through Network 2? Then they must be mentioned, too. Please add the references. Anyway, this particular setup was known as distal teaching long before Thaler published or patented anything. The generalization to N networks is trivial and was not new either. (Even a single network can be viewed as a collection of N networks, where N is the number of output units.) Tivity (talk) 17:44, 31 January 2013 (UTC)
It is a general theory of creativity since it describes the entire breadth of it.Periksson28 (talk) 19:03, 29 January 2013 (UTC)
- You keep repeating this IMO ridiculous statement. Clearly, your view is not shared by the academic community, which up to now basically has ignored Thaler's writings, presumably because they were published long after the original work. The size of your huge section on Thaler does not at all match the perceived importance of his work. This is what makes the present article so unbalanced. Tivity (talk) 17:44, 31 January 2013 (UTC)
- Of course, the burden of proof lies with Dr. Thaler, but he can provide it with flying colors. So far the attacks against him have been based on arguments from personal incredulity ("I don't understand how it works, therefore it must be a fraud") and appeals to authority. Just because academia lags behind the cutting edge or simply isn't aware of it, does not make it less true. In my opinion it would be unbalanced NOT to mention his work. It must be more widely known or else academia will be kept in the dark. Ultimately, credentialism doesn't matter. It's the technology that's important. I got more out of Dr. Thaler's writings on this subject than from anything else out of academia, where endless theorizing predominates to the detriment of real developments. Perhaps one could formulate the "General Theory" text even more neutral, maybe reduce the images to the "CM-Paradigm" and "Spectrum of Consciousness", but otherwise I call for a removal of the neutrality notice. Terraforming (talk) 14:21, 25 March 2013 (UTC)
Can we get this discussion focused?
I suggest focusing on relevant policies, especially WP:NPOV and WP:DR. The patents mean very little. Scholarly sources that demonstrate the importance of Thaler's work are necessary, and some need to demonstrate the importance in the context of computational creativity as a whole, which is what this article is about. --Ronz (talk) 21:49, 29 January 2013 (UTC)
- Sounds good. The current version of the article is dominated by a fringe theory and departs significantly from the mainstream view. It does not represent the field in a balanced way. I believe it needs a total rewrite from a neutral point of view. This could be done by following the structure of the recent book on computational creativity by Jon McCormack and Mark d'Inverno (editors), perhaps the most authoritative survey: McCormack, J. and d'Inverno, M. (eds.) (2012). "Computers and Creativity". Springer, Berlin. Tivity (talk) 17:26, 30 January 2013 (UTC)
- Sounds like a great way to move forward. --Ronz (talk) 17:46, 30 January 2013 (UTC)
- I find these comments strange. If a theory has been reduced to practice, it should predominate. You can have lots of theories about bridges, but if no one has built one, it is nothing more than fascinating rumination. If I have built a bridge which stands and works, then my theory carries more weight. Science which has been reduced to practice is engineering. — Preceding unsigned comment added by Gdippold (talk • contribs) 20:21, 3 February 2013 (UTC)
- Amen, Gdippold, and I think that the term "fascinating rumination" is too kind for this mob. The so-called "formal theory" is absolutely lame and taken from a mere conference paper that everyone has chosen to ignore. The only interesting content is the general theory section which has been reported far and wide and is published in peer-reviewed journals. This mob, which has obviously been hiding under a rock, has taken the strategy of offering up a totally botched perception of that intellectual property and its underlying theory, suggesting that they are just cockier than they are intelligent. IMHO, these patents are worth a thousand of their pretentious publications. BTW, I smell Asperger and an honorary (bought) Ph.D. x 7 in all this.Midnightplunge (talk) 06:44, 6 February 2013 (UTC)
- Midnightplunge, I understand your frustration, but I think we should maintain civility above all else. Because Dr. Thaler's technology puts theory into practice, it can stand on its own, and in the end, actions always speak louder than words. Many also seem to forget that just because Dr. Thaler is not active in a university (where you are not as independent to pursue your own projects), he nevertheless remains a very well-credentialed scientist that could easily fulfill an academic position if he wanted to. If there are no serious attempts by researchers to contact him in order to assess this technology in an academic environment, then I am afraid this falls into the category of pseudo-skepticism and should be acknowledged as such. If there is ego-investment involved, which does not allow for the realization that you are pursuing a fruitless path, gaining a voice in the establishment can be a struggle for any competitors or even colleagues. After reading Prof. Schmidhuber's "Formal Theory", it does seem like a watered-down version of Dr. Thaler's theory, lacking neuro-biological details. As it was stated before, we can have the most abstract theories floating around in the air, but without a material foundation and straight-forward science, they remain just that. I personally would rather trust Dr. Thaler's long-standing expertise and experience in this matter, simply because he developed the sought-after recursively self-improving AI, which in itself is worthy of a Nobel Prize. On top of that he can prove it experimentally. I regard him as the foremost authority on neural-based AI, the saving grace in an intellectually stagnating field. This statement may sound totally overblown, but is not without merit. An elegant theory, based on the actual human brain, with so much predictive power is the hallmark of science. Just observe what capabilities are hailed in popular circles and then ask yourself: Can Dr. Thaler's networks do this? 
So far the answer has always been: Yes, and probably better. Terraforming (talk) 18:58, 19 March 2013 (UTC)
- I assert that this article is focused and that both the general and formal theories have merit. However, I feel that Thaler has the edge. I saw him lecturing at Princeton in 1994 and was impressed with him, his patents, and his theories on consciousness and creativity. He gave the "when machines outdo scientists" talk long ago, in 1998 at Wright State University, and I heard shortly thereafter that he was invited to Harvard Med to carry out dream research. I know of at least two invited talks he's given at the University of Illinois Urbana on his creative neural systems. He has taught at least three IEEE sponsored conferences. In short, he definitely is connected with academia, but he has been achieving real results while others simply pontificate on the theoretical possibilities and perform toy experiments (you are correct gdippold). That is certainly worth reporting in this article and I myself will protect this significant Wikipedia contribution from the slander and lies that have been forthcoming from self-proclaimed experts in both creative AI and patent law.KeenOnDetail (talk) 09:11, 6 February 2013 (UTC)
- Oh, by the way, look what I found: http://imagination-engines.com/media/Wright_State_Talk.pdf KeenOnDetail (talk) 09:19, 6 February 2013 (UTC)
- I see that not even John Koza is immune to this group's venom, denying the validity of his GP patent. It just goes to show you how cliques work in academia. That's why true pioneers in any field, not just comp creativity, need to break away from monstrous corporations and arrogant academic/governmental institutions to do truly original work. The trouble then is that they aren't backed by the well-financed PR departments. --- Why isn't Koza represented here? Does he know better?Periksson28 (talk) 23:04, 6 February 2013 (UTC)
I have never met Dr. Thaler. I have, however, carefully read everything about his work that I could possibly find which forced me to conclude that this is unlike any other AI principle currently on the market. The reason why it is regarded as no big deal is because of several factors. One of them is that people don't know that the standard "Creativity Machine" model of two interlinked networks is not the whole story. He actually builds, or more accurately lets them build themselves, very large cascades without algorithmic programming(!) based on his second patent (5,845,271). To my knowledge, self-forming cascades based on self-training ANN elements are unprecedented. Another reason why it is so easily dismissed is that "randomness" gets a bad rap in general, when it is a key ingredient for brain activity. It is simply impossible to have genuine AI without random idea generation or imagination. Neural networks mirror the external world and these perturbations (via an adjustable Cavitation Rate) are softening the relationships of the information they absorbed resulting not in random nonsense, like it is often thought, but in new plausible information(!). This new information is then judged by other networks for usefulness, and if it is not useful, the perturbations become more severe and the machine relentlessly tries out every single conceivable possibility until a solution is found. Useless neurons also get eliminated in this process, which is completely automatic, and new STANN-Objects are produced as they are needed. No genetic algorithms whatsoever are involved here, neither is simple back-propagation. You really have to wrap your mind around this to grasp the magnitude of it. Dr. Thaler is just way ahead of the curve waiting for academic inertia to catch up. Neurobiology also shows parallels to this invention. For example, Obsessive-Compulsive-Disorder (OCD) research clearly proves that some areas of the brain generate ideas which other areas act upon. 
It is also known that stronger perturbations (like the ones induced by drugs) cause stronger hallucinations. It is just unfortunate that Dr. Thaler's work does not enjoy the same recognition as it does in the government, which gladly uses this technology for secret projects. While I think that the article is slightly biased and maybe should be reduced to the basic "Creativity Machine" diagram, Dr. Thaler deserves the same fair hearing as any other scientist. If he is correct and his machines can solve all kinds of problems, then we can objectively test them out and see how they hold up to public scrutiny. That he is not frequently referenced does not discredit his accomplishments. I have found experimental support as well as a critique of his concept. For confirmation search for "A Modular Neurocontroller for Creative Mobile Autonomous Robots Learning by Temporal Difference" by Helmut A. Mayer from the Department of Scientific Computing of the University of Salzburg, Austria. For the critique look for "A Critical Review of the Creativity Machine Paradigm" by Adhiraj Saxena, Akshat Agarwal & Anupama Lakshmanan from the National University of Singapore. Also, why doesn't Dr. Thaler have his own Wikipedia entry? Terraforming (talk) 16:00, 19 March 2013 (UTC)
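The generator-plus-critic loop with an adjustable perturbation level, as described in the two comments above, can be caricatured in a few lines. This is a loose sketch of the described mechanism only: the "memorized patterns", the usefulness test, and the annealing schedule are all invented for illustration, and it is emphatically not Thaler's patented implementation. A trained network's weights are transiently perturbed to produce candidate outputs, a second network (here a fixed stand-in) filters them, and the perturbation level is adjusted until the yield of useful candidates is acceptable.

```python
import numpy as np

rng = np.random.default_rng(1)

# A "trained" generator: a linear associative memory whose columns
# reproduce three memorized 4-d patterns (stand-ins for learned ideas).
patterns = np.array([[1., 0., 1., 0.],
                     [0., 1., 0., 1.],
                     [1., 1., 0., 0.]])
W = patterns.T                          # W @ one_hot(i) == patterns[i]

def generate(noise_level):
    """Transiently perturb the weights and read out a 'confabulation'."""
    Wn = W + noise_level * rng.normal(size=W.shape)
    return Wn @ np.eye(3)[rng.integers(3)]

def useful(y):
    """Critic stand-in: near the learned patterns, but not a verbatim copy."""
    d = np.min(np.linalg.norm(patterns - y, axis=1))
    return 0.05 < d < 0.8

# Adapt the perturbation level: start far too high (pure nonsense) and
# anneal down until the yield of "useful" candidates is acceptable.
noise = 2.0
yield_rate = 0.0
for _ in range(200):
    batch = [generate(noise) for _ in range(50)]
    yield_rate = float(np.mean([useful(y) for y in batch]))
    if yield_rate >= 0.5:
        break
    noise *= 0.9
```

At very high noise the outputs are unrelated to anything learned; at zero noise they are verbatim recall; the loop settles at an intermediate level where outputs are novel yet still shaped by the training data, which is the regime the discussion above calls confabulation.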
References
- ^ Munro, P. (1987). A dual backpropagation scheme for scalar-reward learning. Ninth Annual Conference of the Cognitive Science Society.
- ^ Werbos, P. J. (1989). Neural networks for control and system identification. IEEE Conference on Decision and Control.
- ^ Nguyen, D., and Widrow, B. (1989). The truck backer-upper: An example of self-learning in neural networks. IJCNN'89.
- ^ Jordan, M. I., and Rumelhart, D. E. (1992). Forward models: Supervised learning with a distal teacher. Cognitive Science.
Todd and Bharucha's neural creativity machines for music composition (1989-1992)
David Cope's "Experiments in Music Intelligence" (1987) used symbolic methods to make a creative machine that could compose music. But who made the first neural creative machine? To user Tivity I say: The people you mentioned described distal teaching with two neural networks, but arguably they did not use this to make creative machines. I think the first who did this was Peter Todd. Together with Bharucha, he produced several papers on this between 1989 and 1992:
Todd, P.M. (1989). A connectionist approach to algorithmic composition. Computer Music Journal, 13(4), 27-43.
I am quoting from the abstract: "The arrival of a new paradigm for computing--parallel distributed processing (PDP), or connectionism--has made a new approach to algorithmic composition possible. One of the major features of the PDP approach is that it replaces strict rule-following behavior with regularity-learning and generalization. This fundamental shift allows the development of new algorithmic composition methods which rely on learning the structure of existing musical examples and generalizing from these learned structures to compose new pieces."
Bharucha, J.J., and Todd, P.M. (1989). Modeling the perception of tonal structure with neural nets. Computer Music Journal, 13(4), 44-53.
Todd, P.M., and Loy, D.G. (Eds.) (1991). Music and connectionism. Cambridge, MA: MIT Press.
Todd, P.M. (1992). A connectionist system for exploring melody space. In Proceedings of the 1992 International Computer Music Conference (pp. 65-68). San Francisco: International Computer Music Association.
The paper of 1992 also proposes to combine this with your distal teacher approach. The first network is trained on well-known melodies. It then produces variations or new melodies. A second network rates the melodies composed by the first network, to reduce the task of a human composer interacting with the system.
Kusiana (talk) 10:29, 8 February 2013 (UTC)
- I forgot the overview in Peter Todd's web site: http://www-abc.mpib-berlin.mpg.de/users/ptodd/publications/e.htm Kusiana (talk) 11:23, 8 February 2013 (UTC)
- No reply for a month or so - has everybody lost interest? I think what needs to be done now is this: 1. Change the title of the disputed section on "General Theory of Creativity" using a less exaggerated and more appropriate title. I suggest "Models of Creativity Based on Neural Networks" 2. Explain the first such models by Todd and Bharucha (1989), who trained a neural network on music, then changed its weights a bit to creatively generate new music. 3. Mention Todd's extension (1992) which combines this with the distal teacher approach, advocated by user Tivity, who mentioned above Munro, Werbos, Nguyen, Widrow, Jordan, Rumelhart. There the first network is trained on well-known melodies. It then produces variations or new melodies. A second network rates the melodies composed by the first network, to reduce the task of a human composer interacting with the system. 4. Mention the patent of Thaler, and point out how it is different from the earlier work by Todd and the others, although I cannot do that, because I for one don't see a clear difference from earlier work. But perhaps someone else does. Kusiana (talk) 13:18, 5 March 2013 (UTC)
- The main difference is that in Dr. Thaler's systems, the evaluating network can interact with the generating one and push it into a desired direction based on either overall objectives or by user input in the form of value judgements, but this is optional. The latest generation does not rely on prior training and is not confined to only two networks. They can learn from scratch and employ any number of additional self-learning and generating networks. Another difference is that no genetic algorithms of any kind are used. Terraforming (talk) 17:27, 19 March 2013 (UTC)
- Ok, why don't you add something like this after the text I inserted (resurrecting some of the deleted references):
Since 1989, artificial neural networks have been used to model creative activities such as music composition. In particular, Peter Todd (1989) first trained a neural network to reproduce musical melodies from a training set of musical pieces. Then he used a weight change algorithm to modify the network's parameters or weights. The modified network was able to creatively generate new music.[1][2][3]
In 1992, Todd[4] extended this through the so-called distal teacher approach by Paul Munro[5], Paul Werbos[6], D. Nguyen and Bernard Widrow[7], Michael I. Jordan and David Rumelhart[8].
In this approach there are two neural networks. The first network is trained again on well-known melodies. After weight changes, it can produce variations or new melodies. A second neural network is trained to rate the melodies composed by the first network. The target outputs of the second network can be ratings given by humans. During operation, the ratings provided by the second network can facilitate the task of a human composer interacting with the system.
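The pipeline proposed above (train a network on melodies, perturb its weights to generate variations, let a second network rate them) can be illustrated with a toy sketch. The five-pitch alphabet, the single training melody, and the rating rule are all invented for the example; this is not Todd's actual connectionist model, just the shape of the idea.

```python
import numpy as np

rng = np.random.default_rng(2)

PITCHES = 5                                    # tiny pitch alphabet 0..4
melody = [0, 2, 4, 2, 0, 1, 3, 1]              # "training set": one melody

# Network 1: a next-note predictor over one-hot pitches, "trained" here
# by simple transition counting to keep the sketch short.
W = np.zeros((PITCHES, PITCHES))
for a, b in zip(melody, melody[1:]):
    W[b, a] += 1.0                             # count transitions a -> b

def compose(Wgen, start, length):
    """Generate a melody by repeatedly taking the strongest next note."""
    out = [start]
    for _ in range(length - 1):
        out.append(int(np.argmax(Wgen[:, out[-1]])))
    return out

original = compose(W, 0, 8)

# Perturbing the trained weights makes the network "compose" variations
# instead of reproducing its training data.
variation = compose(W + 0.8 * rng.normal(size=W.shape), 0, 8)

# Network 2 stand-in: rate a melody by how well its transitions match
# the training statistics (human ratings would train it in the real system).
def rate(m):
    return sum(W[b, a] for a, b in zip(m, m[1:])) / (len(m) - 1)
```

The unperturbed network scores perfectly by construction, while perturbed variants trade some of that score for novelty; in the two-network arrangement discussed above, the ratings would steer which variations are kept.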
User Periksson28, you deleted all my text on Todd and others! How could you do this? What is the purpose of this? First you added many references to Thaler. Then you deleted related earlier work. I think you cannot simply suppress all citations of previous work. Kusiana (talk) 10:05, 9 April 2013 (UTC)
User Midnightplunge, when I reinserted the text on Todd and others deleted by Periksson28, you deleted it again! Why? I already wrote before: First you added many references to Thaler. Then you deleted related earlier work. Why? Is it because Todd published this many years before Thaler? I think it is not right that you simply suppress all citations of previous work. Kusiana (talk) —Preceding undated comment added 12:33, 9 April 2013 (UTC)
- I think now we are in edit war. It is my first edit war. I hope it will end soon. Kusiana (talk) 14:01, 9 April 2013 (UTC)
- Yes you are; I have warned the last 4 participating users and have watchlisted the page. Consider dispute resolution. Full protection might follow, so try to hash this out here and form a consensus. Lectonar (talk) 17:30, 10 April 2013 (UTC)
- I think no progress was made at all in the past month. I inserted the section POV template. But the promoter(s) of Thaler has ignored the talk page. There is no consensus. Kusiana (talk) 13:27, 9 May 2013 (UTC)
- Immediately after I did this, user Shibbuleth removed my section POV warning and references before Thaler, repeating what user Perkisson28 did earlier. I think this behavior is sad but it is also almost funny at the same time. Kusiana (talk) 13:43, 9 May 2013 (UTC)
PS - With all due respect, I can't substantiate your references. Yes, the titles are out there, but they do not state what you claim. — Preceding unsigned comment added by Periksson28 (talk • contribs) 18:44, 9 May 2013 (UTC)
- I cannot believe your behavior - you may not simply remove the POV-section template from your Thaler section! Kusiana (talk) 15:54, 17 May 2013 (UTC)
References
- ^ Todd, P.M. (1989). A connectionist approach to algorithmic composition. Computer Music Journal, 13(4), 27-43.
- ^ Bharucha, J.J., and Todd, P.M. (1989). Modeling the perception of tonal structure with neural nets. Computer Music Journal, 13(4), 44-53.
- ^ Todd, P.M., and Loy, D.G. (Eds.) (1991). Music and connectionism. Cambridge, MA: MIT Press.
- ^ Todd, P.M. (1992). A connectionist system for exploring melody space. In Proceedings of the 1992 International Computer Music Conference (pp. 65-68). San Francisco: International Computer Music Association.
- ^ Munro, P. (1987). A dual backpropagation scheme for scalar-reward learning. Ninth Annual Conference of the Cognitive Science Society.
- ^ Werbos, P. J. (1989). Neural networks for control and system identification. IEEE Conference on Decision and Control.
- ^ Nguyen, D., and Widrow, B. (1989). The truck backer-upper: An example of self-learning in neural networks. IJCNN'89.
- ^ Jordan, M. I., and Rumelhart, D. E. (1992). Forward models: Supervised learning with a distal teacher. Cognitive Science.
Help request: repeated deletion of references to original work on computational creativity
There is a group of editors whose only purpose seems to be to exaggerate the contributions of a person called Stephen Thaler. These editors include User:Periksson28 and User:Midnightplunge.
Again and again they have deleted the references to previous work by other researchers who did very similar things. For example, see the section on the work of Todd and Bharucha further down.
I have the book "Computers and Creativity" by McCormack, J. and d'Inverno, M. It has many chapters by many different authors. None of them even mentions Thaler. However, this Wikipedia article is full of Thaler through the editors mentioned above. This is out of proportion.
I cannot believe that such behavior is acceptable. Does Wikipedia provide a way of dealing with this? Or can any editor simply do here what would never be acceptable in the academic world? If this was an article about a popular topic then probably there would be enough editors correcting this. But since the topic is a bit obscure, there are not many competent editors. I am not sure what to do here, and I am thankful for input from others with more Wikipedia experience. Kusiana (talk) 15:18, 10 April 2013 (UTC)
These complaints are supported by members of the Computational Creativity research community, who run the annual Computational Creativity conference (ICCC) and various ancillary events. This page was originally created to act as a high-level summary and jumping-off point for readers interested in the theory and practice of Computational Creativity, and should not be used to expand at length on the work of any one researcher. If the work in question (e.g. that of Stephen Thaler) is of sufficient interest to users of Wikipedia, it should be placed in a page of its own, and this page can be linked from the Computational Creativity page. It is very poor Wikipedia etiquette to overload a page of a topic like Computational Creativity with an excessive description of any one person's work, no matter the person. The problem is exacerbated by the description of this work as a General Theory of Creativity. If one reads the rest of the page, it becomes apparent that this field is diverse and complex, and no single model can claim to be a General Theory of (Computational) Creativity. To those users who insert large amounts of text concerning Stephen Thaler on this page: please refrain from doing so, but please do add a concise and relevant reference/link to another page on Mr. Thaler's contributions. Please respect the spirit of this page (diversity, balance). Kimveale (talk) 09:10, 9 May 2013 (UTC)
I find it odd that editors take a meat ax to the article and then cry about it being reversed. Appeals to authority in the face of strong evidence are weak. I think that something is excessive when it is out of proportion to actual contribution. True, there are multiple claims of computational creativity, but the preponderance of evidence appears to be in Thaler's favour as far as I can tell. The ugliness of some of the comments here makes me suspicious of academic pettiness, a group who live in a state of war over abstractions like credit and recognition. The fact that Thaler has reduced his theory to practice should carry more weight than "some professor X once almost speculated about it in a paper his friends published" or "our council of high priests will determine what is valid, and what is valid is what our friends publish." Gdippold (talk) 20:58, 9 May 2013 (UTC)
I really appreciate this support! Kusiana (talk) 13:51, 9 May 2013 (UTC)
Kusiana, you really need to read more books on computational creativity, including Springer's new Encyclopedia of Creativity, http://www.springerreference.com/docs/html/chapterdbid/358097.html. A unifying theory of creativity is presented by Thaler therein, detailing his work in the field from 1974 to the present, so this is not just a pipe dream. His commercial, pioneering efforts in the field have earned him dozens of patents in artificial creativity, something that could not have happened had he not planted the flag. I'm sorry you have been out of the loop. Cheers! - 18:23, 9 May 2013 (UTC)Periksson28 (talk)
Please sign your contributions to this talk page and show some etiquette when adding to pages that are intended to be comprehensive but concise.
Moreover, it is you who need to read more on Computational Creativity, a sub-discipline of AI that has been maturing through a long series of workshops and international conferences for over a decade. The International Conference on Computational Creativity is the conference of the Association for Computational Creativity. Stephen Thaler does not submit to, or attend, these events, and his work is not cited by the researchers that do. It is a gross exaggeration to claim that his work (which may be of value, this is not the point here) has influenced the academic field of computational creativity, where contributions are measured in cited publications, not patents. Kimveale (talk) 21:56, 9 May 2013 (UTC)
Computational Creativity, a sub-discipline of AI, has been maturing for many decades, not just in your workshops and conferences. Thaler's brand has been evolving over 40 years. If these events were less political and more scientific, they would have invited him.Periksson28 (talk) 23:32, 9 May 2013 (UTC)
The point here is that his work is of significant value and of particular relevance to computational creativity and this, yes this, Wikipedia article. He likely doesn't attend your conferences because they are too political as Gdippold suggests. He's too loaded down because he is carrying out myriad real-world projects in computational creativity. Personally, I would prefer to be in Thaler's shoes, owning all these patents rather than making deals to cite one another.Periksson28 (talk) 23:06, 9 May 2013 (UTC)
Now, all of this is becoming unnecessarily aggressive. All that Kusiana, myself and others are asking is that you show moderation in expressing your admiration of Stephen Thaler on this page. Please create a page on his contributions and link to it from the Computational Creativity page. Do not overload the Computational Creativity page with an excessive description of one man's work -- the page is meant to be a comprehensive resource providing concise pointers to many different issues and approaches. Kimveale (talk) 21:58, 9 May 2013 (UTC)
The admiration is warranted. He's made a major, major contribution, so there is much "we" have to talk about. Plus, it's all factual, but you want to nuke a treasure.Periksson28 (talk) 22:22, 9 May 2013 (UTC)
Thaler not cited enough? I'm beginning to see some of the dilemma. He's cited in medical, materials, legal, aerospace, robotics, philosophical, and you name it journals so he is "horizontally recognized" as opposed to "vertically integrated" into the ICCC or whatever creative AI clique. The repeated deletion of the general theory section is a crime though. The page has a thumb, so the article can be as long and informative as need be. I personally would like to learn more about symbolic AI approaches.Midnightplunge (talk) 01:05, 10 May 2013 (UTC)
Peer review exists for a reason -- to recognize the coherence of a contribution to a field. Please read the rest of the CC page before being so naive as to call one man's work a General Theory of Creativity -- that shows such a loose grasp of computational creativity and no grasp at all of human creativity. Human creativity is rich and diverse -- it manifests in many different forms, to produce many different effects. Leonardo was a different kind of creator to Einstein, who was different to Picasso, who was different to Steve Jobs. A single generative mechanism and means of evaluating output is not a general theory of creativity, no matter how many commercial applications you can allude to, or patents you can cite. Why do you insist on dumbing down creativity in this way? Please have some respect for a diverse and complex phenomenon, show some restraint when adding material to this page (be concise), and show some understanding of the field as a whole. And please stop citing politics as a reason for anything -- conspiracy theories do no one any credit at all. Kimveale (talk) 11:18, 10 May 2013 (UTC)
Sorry, but we'll have to agree to disagree on peer review, which even the more honest academics admit is associated with a highly political, mutual congratulation society that shuns outsiders. The dishonesty inherent to the process is the subject of intense controversy on the internet, spawning yet another profitable conference topic concerning the ever growing flaws in the peer review system. Just Google "peer review" + "corruption" and one may find plentiful hits. Besides, in the general theory section, numerous peer-reviewed papers are cited, but the toughest ones involve the patent process wherein total strangers, experts in the relevant field (not chums) are enlisted to kick a concept around to its fullest until it is either dead or thriving. So, go ahead and draw upon the general theory of creativity in your own brain to creatively invent new reasons to viciously attack very solid work that has delivered real results. And thank goodness for Wikipedia, where knowledge has been democratized. Midnightplunge (talk) 15:29, 10 May 2013 (UTC)
Sorry again, but what you call "dumbing down" is the process of reductionism. I know it sounds foreign in folk psychological circles, but when so much can be described by a simple principle or equation, a general scientific theory results. Midnightplunge (talk) 15:37, 10 May 2013 (UTC)
Peer review is peer review. Thaler hardly gets cited, because other people did the same thing many years before him, as said many times on this talk page. Wikipedia is about notable things. Thaler's work is not notable, or it would get more citations. You should not use Wikipedia to "correct" the scientific literature thus trying to bypass peer review. Kusiana (talk) 14:56, 14 May 2013 (UTC)
Repeatedly, libelous comments have been made minimizing Thaler's contributions, but none have been verified.Periksson28 (talk) 17:06, 14 May 2013 (UTC)
"Wikipedia" is about notable things." See this notable reference.[1]Midnightplunge (talk) 18:07, 14 May 2013 (UTC)
I'm sorry to tell you that dozens of patents have been issued to this pioneer after exceedingly intense peer review and novelty check. It's just that it wasn't by the "peers" you would like to have seen perform the review. As discussed above, the evidence is overwhelmingly in Thaler's favor. Furthermore, you keep creating a strawman and attacking it, demonstrating no understanding whatsoever of the concept.Periksson28 (talk) 16:21, 14 May 2013 (UTC)
See, for instance, "Publish-or-perish: Peer review and the corruption of science" [2].Periksson28 (talk) 16:45, 14 May 2013 (UTC)
- I think it is useless to argue with you. But you may not simply remove the POV-section template from the Thaler section you wrote. Kusiana (talk) 15:56, 17 May 2013 (UTC)
Why, they're all about Thaler's role in computational creativity. Thanks...Periksson28 (talk) 03:05, 20 May 2013 (UTC)
Please summarize the rationale for the pov-tags
POV-tags require discussion. While there's been plenty of discussion here, it's difficult to see which discussions apply to which tags. Could editors please summarize their rationale and/or indicate which discussions apply? --Ronz (talk) 15:56, 18 May 2013 (UTC)
- I'm going to remove them then. --Ronz (talk) 14:56, 19 May 2013 (UTC)
- After going through the discussions above again (and trying to format some for others to follow), the January 2013 POV tag is definitely justified for the Thaler discussions. --Ronz (talk) 15:52, 19 May 2013 (UTC)
POV in article - Thaler
The Thaler material, especially in light of the comments on this page, looks clearly undue. Is there even a single independent and reliable source demonstrating his work is notable or even worth mention? --Ronz (talk) 15:11, 19 May 2013 (UTC)
- Your comment, with its attempt to appear impartial, is silly on its face. Diederik Stapel, Jan Hendrik Schön, Michael Bellesiles and Scott Reuben certainly met your test for many years. Disinterest still requires personal integrity. I have found commerce is far more purifying with its results orientation than academic publishing, with contributors greedy for recognition. A lot of this could be cleared up if the doubters would contact Dr. Thaler directly and pay him a visit instead of the constant use of ipsedixitism which, while useful for rallying the troops, does very little to satisfy doubt. Following your logic, Clifford Cocks, who developed asymmetric encryption in secret at MI6 before Diffie and Hellman, should be stricken from Wikipedia. Gdippold (talk) 20:45, 19 May 2013 (UTC)
- Please focus on content.
- All someone has to do is provide some sources... --Ronz (talk) 20:48, 19 May 2013 (UTC)
- I'm sorry, but you are demonstrating your agenda here. For a start, see the independent and reliable articles from Scientific American. Yam, Philip (1993). "Daisy, Daisy: do computers have near-death experience?", Scientific American, May 1993 and Yam, Philip(1995). "As They Lay Dying ... Near the end, artificial neural networks become creative", Scientific American, May, 1995. These folks in New York thought this was all very notable research activity, going so far as to check with several neural network aficionados on the reality and novelty of what Thaler was doing. In the intervening years, many other independent sources have commented on the notability of Thaler's work as is evident through the Internet.Periksson28 (talk) 20:56, 19 May 2013 (UTC)
- Please focus on content. The battleground attitude is distracting, disruptive, and looks like an attempt to divert attention from the policy issues here. --Ronz (talk) 21:05, 19 May 2013 (UTC)
- Starting sources provided...Periksson28 (talk) 21:08, 19 May 2013 (UTC)
The sources for review are:
- Daisy, Daisy; May 1993; Scientific American Magazine; by Philip Yam; 2 Page(s) abstract
- As They Lay Dying; May 1995; Scientific American Magazine; by Yam; 2 Page(s) abstract
Both require a subscription. Editors will need to quote from these sources the material on Thaler and his work. --Ronz (talk) 21:19, 19 May 2013 (UTC)
- The issue was whether a bio on Thaler would be shunned by Wikipedia due to his perceived lack of notability. These two references are only to provide evidence regarding such notability. The second article from 1995 alludes to the general theory discussed in the computational creativity article. The fact that they are posted by Scientific American, in the first place, is evidence of his acknowledgement by an independent and reliable source. Please forgive my skepticism, but judging by the battlefield attitude expressed by your cohorts, ready to destroy a life's work, I'm not expecting much from this assessment.Periksson28 (talk) 21:51, 19 May 2013 (UTC)
- Once again, focus on content please.
- We're discussing POV, WP:NPOV and specifically due weight. If we can't even verify the information, then it's not going to remain per WP:BLP, WP:NPOV, and WP:V. --Ronz (talk) 23:11, 19 May 2013 (UTC)
Focusing exactly on content, the second abstract reads as follows:
As They Lay Dying; May 1995; Scientific American Magazine; by Yam; 2 Page(s)
Not too many personal computers are known to hallucinate. But the one belonging to Steven Thaler has been doing so, off and on, for the past couple of years. The physicist, at McDonnell Douglas in St. Louis, has been exploring what happens as an artificial neural network breaks down. But rather than allowing the network to peter out into oblivion, Thaler has a second network observe the last gasps of its dying sibling. Some of those near-death experiences, it turns out, are novel solutions to the problem the net was designed to solve. Thaler says he has found a kind of creativity machine that can function more quickly and efficiently than traditional computer programs can.
An artificial neural network is software written to mimic the function and organization of biological neurons. The system consists of units (representing neurons) connected by links (standing in for dendrites and axons). Like the brain, an artificial network can learn: the programmer presents it with training patterns, which it learns by adjusting the strengths, or weights, of the links. Many researchers use these networks to model brain function and, by destroying part of the net, to mimic disorders such as dyslexia.
The issue is one of whether this is a notable person. I see "Thaler" (in bold) in the context of a Scientific American article concerned with creativity and neural nets. That doesn't take a subscription to the magazine to discern. It's right there in front of your face! As Gdippold has observed, your attempt to appear impartial seems silly. You simply have an agenda, as you hide behind "policies" that you bend to your own prejudices. Who are you going to review, the reporter, Philip Yam?Periksson28 (talk) 23:51, 19 May 2013 (UTC)
- WP:FOC please!
- Sorry, but it's the sources and policies that matter. Learn them or not, your choice.
- Again, the issue isn't whether or not he's a notable person. If I didn't make that crystal clear, let me know what needs further clarifying and I'll do my best.
- Copying the material I linked is no help. I clearly indicated that we need quotes from the material that requires a subscription. --Ronz (talk) 00:38, 20 May 2013 (UTC)
Then see, as mentioned previously, "A Modular Neurocontroller for Creative Mobile Autonomous Robots Learning by Temporal Difference" by Helmut A. Mayer from the Department of Scientific Computing of the University of Salzburg, Austria. The pdf is at citeseerX. "This idea is greatly inspired by Creativity Machines..."Periksson28 (talk) 01:36, 20 May 2013 (UTC)
From http://cogprints.org/6637/1/CreativityasDesign-NOVA.pdf Minati, G., and Vitiello, G., (2006), Mistake Making Machines, In: Systemics of Emergence: Applications and Development (G. Minati, E. Pessa and M. Abram, eds.), Springer, New York, pp. 67-78. "Our approach is close to those introduced to model and simulate creativity (Creativity Machines and Imagitron: Holmes,1996; Thaler, 1996a; 1996b,1994; 2005),..."Periksson28 (talk) 01:50, 20 May 2013 (UTC)
- Helmut A. Mayer. A modular neurocontroller for creative mobile autonomous robots learning by temporal difference. In Proceedings of the IEEE International Conference on Systems, Man & Cybernetics: The Hague, Netherlands, 10-13 October 2004. pages 5742-5747, IEEE, 2004. doi 10.1109/ICSMC.2004.1401110 [1]
- Thanks! A paper on an implementation based upon Thalers. What about it? --Ronz (talk) 01:59, 20 May 2013 (UTC)
APA Online: http://www.apaonline.org/APAOnline/Publication_Info/Newsletters/APAOnline/Publications/Newsletters/HTML_Newsletters/Vol11N2Spring2012/Computers.aspx "In the next paper Stephen Thaler talks about creativity machines. While some philosophers may still not be sure whether and by what standards machines can be creative, Thaler designed, patented, and prepared for useful applications some such machines so the proof seems to be in the pudding, and some of the proof can also be found in this interesting article."Periksson28 (talk) 02:01, 20 May 2013 (UTC)
- Thaler, S. L. (2012). "The Creativity Machine Paradigm: Withstanding the Argument from Consciousness," APA Newsletters, Newsletter on Philosophy and Computers, Vol. 11, No. 12, Spring 2012.
- This is not an independent source. --Ronz (talk) 20:18, 20 May 2013 (UTC)
- I think that what is being alluded to is not the Thaler article, but the editor's preface to the article.Shibbuleth (talk) 00:27, 23 May 2013 (UTC)
- Yep. It's not an independent source. Such prefaces are usually written from a short autobiography and introduction written by the author, which is then used to present the article in the style and with any appropriate themes for the specific journal. --Ronz (talk) 00:43, 23 May 2013 (UTC)
- I can understand your suspicion, however, this was a statement that originated purely from the editor.Shibbuleth (talk) 01:06, 23 May 2013 (UTC)
- What do you base that opinion on? --Ronz (talk) 01:35, 23 May 2013 (UTC)
- Familiarity with the editor and the journal's tradition. Besides, the preface is in no way hyped up and sounds very neutral to this journal editor.Shibbuleth (talk) 02:05, 23 May 2013 (UTC)
- Thank you for your opinion. --Ronz (talk) 02:49, 23 May 2013 (UTC)
Wired for War: The Robotics Revolution and Conflict in the Twenty-first Century By Peter Warren Singer: http://books.google.com/books?id=AJuowQmtbU4C&pg=PA79&lpg=PA79&dq=stephen+thaler+robotics&source=bl&ots=ujZis55F_-&sig=RKht3W1PxeSX9xOvOW03_I9YDIE&hl=en&sa=X&ei=84SZUfXVCZCi8AS5joHwDw&ved=0CC0Q6AEwADgU#v=onepage&q=stephen%20thaler%20robotics&f=false "One example at the Air Force Research Laboratory is based on the research of Stephen Thaler." 02:09, 20 May 2013 (UTC) — Preceding unsigned comment added by Periksson28 (talk • contribs)
- A 2009 book that claims that the Air Force Research Lab was conducting AI research based upon Thaler's work. It also says Thaler is the inventor of the Oral-B toothbrush, which is incorrect. From that, I'd say that this is not a reliable source. --Ronz (talk) 16:34, 22 May 2013 (UTC)
- With all due respect, I don't know how/why you would want to discredit the author of this book. Through just a casual Google search I can find a couple million dollars reported paid from the Air Force Research Lab for Thaler's work http://www.sbir.gov/sbirsearch/detail/187689. I don't know how you can come to a decision one way or another regarding his involvement with product design for oral-b. From what I can tell from the other legal authors here, he was contracted by the parent razor blade company to do exactly what the author claims he did.Shibbuleth (talk) 00:03, 23 May 2013 (UTC)
- Not true? How did you arrive at that conclusion? I believe the claim concerns the Oral-B CrossAction toothbrush, which was designed under a work-for-hire agreement, not that he is the inventor of the "Oral-B Toothbrush" as such. Would you like to see the invoice to Gillette? If it were false, Gillette would have sued him by now, since the claim has been on his website since its founding. I strongly doubt even an invoice would satisfy all skepticism. I think the editors have made a lot of compromises to reduce the POV elements and focus on the content, but it seems, and I could be wrong, that every compromise brings escalated demands. Again, it seems to me that if you are an academic it is relatively easy to meet Wikipedia's standards, but if you have operated in the commercial sector, then confidentiality agreements, work for hire, and non-competes greatly limit what one can provide. That is fairly obvious to anyone who has worked in research in the private sector. I do understand your points about "focus on the content." In the Mandelbrot Set entry, for example, Benoit Mandelbrot's name only appears three times excluding the title, and it's named after him. Gdippold (talk) 02:03, 23 May 2013 (UTC)
- With all due respect, I don't know how/why you would want to discredit the author of this book. Through just a casual Google search I can find a couple million dollars reported paid from the Air Force Research Lab for Thaler's work http://www.sbir.gov/sbirsearch/detail/187689. I don't know how you can come to a decision one way or another regarding his involvement with product design for oral-b. From what I can tell from the other legal authors here, he was contracted by the parent razor blade company to do exactly what the author claims he did.Shibbuleth (talk) 00:03, 23 May 2013 (UTC)
- The content in contention here is not being presented in Wikipedia. I hold Dr. Peter Warren Singer in the highest regard, but respect your opinion to think otherwise. As a Presidential appointee, he has been properly vetted by academia.[1]Shibbuleth (talk) 03:36, 23 May 2013 (UTC)
Magnus, Stephan http://www.vreedom.com/material/TAF_2012-01-10_ATICA-Artikel_eng.pdf "Stephen Thaler, creator of the Creativity Machine..."Periksson28 (talk) 02:19, 20 May 2013 (UTC)
- It's not clear what this is, but looks self-published, so it wouldn't be considered a reliable source. --Ronz (talk) 03:03, 23 May 2013 (UTC)
Robots Unlimited: Life in a Virtual Age http://www.amazon.com/Robots-Unlimited-Life-Virtual-Age/product-reviews/1568812396 "Also Type-3 is the bridge-playing COBRA machine, and the Poki poker-playing machine, the Thaler Creativity Machine, the BRUTUS storytelling machine, all of which are discussed in the book."Periksson28 (talk) 02:24, 20 May 2013 (UTC)
- Not a reliable source, the review that is. --Ronz (talk) 16:27, 26 May 2013 (UTC)
http://link.springer.com/chapter/10.1007%2F11785231_10#page-1 Creativity of Neural Networks, Markowska "The most interesting example is the Creativity Machine developed by S. Thaler..."Periksson28 (talk) 02:29, 20 May 2013 (UTC) (be sure to click the preview to see the first page for free!)Periksson28 (talk) 02:35, 20 May 2013 (UTC)
- A brief mention in a review section in a research paper. I'm not sure what we could possibly use it for. --Ronz (talk) 04:19, 29 May 2013 (UTC)
http://artintelligence.net/review/?p=12 Reason, Imagination and Play: Hume, Freud and Semiotics Filed under: Theory, Imagination — Graham Coulter-Smith "As noted earlier Hume’s focus on imagination as the association of ideas resonates with contemporary explorations in artificial intelligence and artificial creativity (e.g. Stephen Thaler’s ‘Creativity Machine’)." Periksson28 (talk) 02:47, 20 May 2013 (UTC)
Applications for self-aware systems, Pepperell http://www.robertpepperell.com/papers/Applications%20for%20Self-Aware.pdf Several quotes... — Preceding unsigned comment added by Periksson28 (talk • contribs) 02:52, 20 May 2013 (UTC)
The Genie in the Machine: How Computer-automated Inventing is ... By Robert Plotkin http://books.google.com/books?id=pot5WUcz2a4C&pg=PA52&dq=autonomous+invention+law+thaler&hl=en&sa=X&ei=AZGZUYKeJJPg8AT654DwDw&ved=0CDoQ6AEwAQ#v=onepage&q=autonomous%20invention%20law%20thaler&f=false "Then Dr. Thaler gave the Creativity Machine one last instruction, dream..."Periksson28 (talk) 02:59, 20 May 2013 (UTC)
- Thanks for all the potential sources. It would help if you tried to describe how they might be used or what they present that's relevant to this article. I'll do the same when I get the time. --Ronz (talk) 03:01, 20 May 2013 (UTC)
They're all about Thaler's role in computational creativity.Periksson28 (talk) 03:24, 20 May 2013 (UTC)
Dewey-Hagborg Thesis, "Creating Creativity" http://itp.nyu.edu/projects_documents/1178645182_thesis_draft1.pdfPeriksson28 (talk) 03:07, 20 May 2013 (UTC)
A Framework for Exploring the Evolutionary Roots of Creativity Hrafn Th. Thórisson http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.109.5719 "Stephen Thaler [5] has created what he calls a “creativity machine” by perturbing connections in artificial neural networks and thereby creating ‘noise’ which manipulates learned concepts. A system inhabiting an ability to recognize abstract rules in the environment and use efficiently to manipulate conceptual structures (neural clusters) would behave similarly except for applying ‘noise’ in the form of “rule hyperstructures” to produce more ideas which correlate with the environment."Periksson28 (talk) 03:20, 20 May 2013 (UTC)
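For readers wondering what "perturbing connections in artificial neural networks and thereby creating 'noise'" looks like in practice, here is a minimal sketch. This is a toy illustration only, not Thaler's patented architecture: the network size, the stand-in weights, the noise scale, and the distance-based novelty threshold are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" network: 2 inputs -> 3 hidden units -> 2 outputs.
# The weights here are random stand-ins for learned values.
W1 = rng.normal(size=(2, 3))
W2 = rng.normal(size=(3, 2))

def forward(x, W1, W2):
    h = np.tanh(x @ W1)      # hidden-layer activations
    return np.tanh(h @ W2)   # output pattern

x = np.array([1.0, -1.0])
baseline = forward(x, W1, W2)

# Perturb the learned weights with noise and collect output patterns
# that differ noticeably from the network's normal response -- these
# are the candidate "novel ideas" in the perturbation scheme.
novel = []
for _ in range(100):
    noisy_W1 = W1 + rng.normal(scale=0.5, size=W1.shape)
    noisy_W2 = W2 + rng.normal(scale=0.5, size=W2.shape)
    y = forward(x, noisy_W1, noisy_W2)
    if np.linalg.norm(y - baseline) > 0.5:  # crude novelty threshold
        novel.append(y)

print(f"{len(novel)} perturbed outputs passed the novelty threshold")
```

In the description quoted above, a second "observer" network would score these candidates for usefulness; the fixed distance threshold here is just a stand-in for that evaluation step.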
This might be of general interest to this crowd... https://litigation-essentials.lexisnexis.com/webcd/app?action=DocumentDisplay&crawlid=1&doctype=cite&docid=71+Tul.+L.+Rev.+1675&srctype=smi&srcid=3B15&key=678d357777bfbecf9d182240046c8c6b Clifford, INTELLECTUAL PROPERTY IN THE ERA OF THE CREATIVE COMPUTER PROGRAM: WILL THE TRUE CREATOR PLEASE STAND UP? "Importantly, this computer system, termed a Creativity Machine by its inventor, is demonstrating skills that Dr. Thaler himself does not possess..." Cheers! Periksson28 (talk) 19:05, 20 May 2013 (UTC)
References
Refimprove tag
Just to be perfectly clear: Besides helping with the POV problems, more and better references will help with the poorly referenced and unreferenced sections, along with any other pov and or/syn problems. --Ronz (talk) 16:56, 19 May 2013 (UTC)
Very Unbalanced Article
The article has not improved by much in recent weeks. A person who gets very few cites in the scientific literature (Thaler) is cited 16 (!) times in the Wikipedia article, while other authors who are much better known in this field are hardly cited at all. I think the POV template must remain in the Thaler section until there is a consensus. Kusiana (talk) 14:37, 13 June 2013 (UTC)
Sorry, Kusiana, all references in the Thaler section are encyclopedic by Wikipedia guidelines. This section has already been cut back considerably.Periksson28 (talk) 00:09, 23 September 2013 (UTC)
This article used to be an excellent start into the various topics involved in computational creativity. Lately, it has in my opinion become a depressing read, this talk section especially. If what some claim is true and Thaler contributed to the field of computational creativity, and if this can be backed by solid scientific references, then the article should include a few lines about his work. But entire paragraphs, along with deletion of other contributions, while some influential scholars are hardly mentioned or not at all? Neural networks are a useful machine learning technique, but arguing that they are the basis for a general framework and theory of creativity is a (vast) overstatement. I studied the literature on creativity for four years during my PhD, but I did not encounter the work of Thaler anywhere. Tomdesmedt (talk) 17:55, 23 August 2013 (UTC)
Thanks for your opinion, but I'm sorry. A general theory of creativity has been proposed and published by Springer after peer review. This section also cites other published works underlying this theory.Periksson28 (talk) 00:09, 23 September 2013 (UTC)
rearrangements and some deletions
I see that there's a bit of a struggle here on the talk page. I made a number of changes today that I think are "neutral" with respect to these discussions. First, I moved the "general theory" stuff to the bottom of the page, preserving almost all of the prose that's there, except the paragraph about the "grim reaper", which AFAICT is a very specific technique that's not of general importance. I also deleted a few other paragraphs (unrelated to the debate here) from earlier sections, where the prose was just too dense to follow. I would invite other editors to step away from the debate here and put some more work into the non-contentious sections, since these, too, still need a lot of work. Generally a useful article though, many thanks to those who have been contributing! Arided (talk) 23:17, 13 September 2013 (UTC)
I think the article is looking better. Thanks for your help, Arided!Periksson28 (talk) 00:18, 23 September 2013 (UTC)
Conceptual blending: that's just your POV, man
Removed this from the section on conceptual blending, at least until some citations to back up the critique can be added.
Blending theory is an elaborate framework that provides a rich terminology for describing the products of creative thinking, from metaphors to jokes to neologisms to adverts. It is most typically applied retrospectively, to describe how a blended conceptual structure could have arisen from a particular pair of input structures. These conceptual structures are often good examples of human creativity, but blending theory is not a theory of creativity, nor – despite its authors’ claims – does it describe a mechanism for creativity. The theory lacks an explanation for how a creative individual chooses the input spaces that should be blended to generate a desired result.
Please check WP:NOR for more details, thanks! Arided (talk) 17:49, 29 September 2013 (UTC)
Stephen Thaler
I removed the whole section, which was based 100% on primary sources from the same author (not especially notable, as I see on Google). As such, they constitute original research and undue weight. Staszek Lem (talk) 22:55, 21 May 2015 (UTC)
Staszek, 100%? Please check again for secondary references. I'm counting at least three, and if need be, the primaries may be deleted to up the ratio of secondary sources. Thanks!Periksson28 (talk) 03:58, 22 May 2015 (UTC)
External links modified
Hello fellow Wikipedians,
I have just modified one external link on Computational creativity. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
- Added archive https://web.archive.org/web/20111103192105/http://www.narrativescience.com/solutions.html to http://www.narrativescience.com/solutions.html
- Added
{{dead link}}
tag to http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&co1=AND&d=PTXT&s1=5659666.PN.&OS=PN/5659666&RS=PN/5659666
When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}
).
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}}
(last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—InternetArchiveBot (Report bug) 07:32, 29 November 2016 (UTC)
External links modified
editHello fellow Wikipedians,
I have just modified one external link on Computational creativity. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
- Added archive https://web.archive.org/web/20100226121804/http://www.miller-mccune.com:80/culture-society/triumph-of-the-cyborg-composer-8507/ to http://www.miller-mccune.com/culture-society/triumph-of-the-cyborg-composer-8507/
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}}
(last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—InternetArchiveBot (Report bug) 19:46, 13 January 2017 (UTC)
Thaler update
editI feel bad if I'm stomping on someone's carefully constructed compromise, but is anyone going to object if I remove "The Creativity Machine Paradigm: Withstanding the Argument from Consciousness" by Stephen L. Thaler per WP:UNDUE? Rolf H Nelson (talk) 04:45, 23 February 2021 (UTC)
- @Rolf h nelson No objection was raised, I assume you removed it, and I also followed up and removed the POV template which was probably about that section being undue. Piotr Konieczny aka Prokonsul Piotrus| reply here 05:16, 12 August 2021 (UTC)
Free image
editI am having trouble finding a freely licenced image to illustrate this article. Do you think this would be relevant? https://flickr.com/photos/jurvetson/42392216182/ Piotr Konieczny aka Prokonsul Piotrus| reply here 05:23, 12 August 2021 (UTC)
- PS. Few more minutes and I found that we have a good image (File:AI (Artificial Intelligence) Dog.jpg) on Commons, even with OTRS permission - but it was never added here...? Why? Ping uploader User:Norttis. (I'll be bold and add it). It's listed at [2] as one of the "images created using AI" so seems pretty relevant. --Piotr Konieczny aka Prokonsul Piotrus| reply here 05:26, 12 August 2021 (UTC)
Merge from Artificial imagination, Creative computing and Artificial intelligence art
editartificial creativity was merged here in 2008. But we missed Artificial imagination, which seems like another synonym of the concept discussed here. Thoughts? PS. I checked other synonyms listed in our lead (which currently states that computational creativity is "also known as artificial creativity, mechanical creativity, creative computing or creative computation)"), and created a few missing redirects (some others may be missing, I added one for "computer creativity"). I also discovered that there is also the rather messy article under Creative computing that should probably be merged here too. I am also concerned about Computer-generated art which redirects to Algorithmic art; I think algorithmic art is a different concept from "computational creativity", but the redirect at Computer-generated art should probably be a disambig? (We also have Computer art which this term should point to as well). PPPS. Aaargh, I also discovered we have Artificial intelligence art, another article that almost certainly needs a merge here. Lastly, given all those synonyms, some of which are hardly intuitive, I'd propose renaming this article to artificial intelligence and art (and if it was my field, i.e. one in which I'd be publishing as a scholar, I'd campaign for an artsy neologism like "art^2"...).Piotr Konieczny aka Prokonsul Piotrus| reply here 06:43, 12 August 2021 (UTC)
- Caveat: I didn't have time tonight to look at all these articles and what they cover, but this is my understanding of what these various terms mean.
- Artificial imagination: I think this is a different topic, because this can refer to visualization and mental modeling, which is not the same as creativity (having original thoughts).
- Computer generated art: This is closer, but still a different topic. Artificial creativity is about simulating the way human beings have original thoughts, including scientific thoughts and so on. Computer generated art is an application of artificial creativity -- it's about real, non-theoretical programs that people have already written. Some of these programs involve strictly random processes or are basically smart tools that artists have guided -- in other words, people have made "computer generated art" without using artificial creativity. Thus they are not the same topic.
- Computer art: This is category of "art" and a term used by curators in the art world. It is a much broader topic than just algorithmic art or artificial creativity. It includes things like image manipulation and so on, where the computer is just functioning as a tool in the artistic process. So I think people (in the art world at least) would count programs that claim to have "artificial creativity" as one (very small) sub-category of computer art.
- Algorithmic art: Similarly to "computer generated art", algorithmic art is not typically "creative". Algorithmic art includes various kinds of deterministic algorithms that have no resemblance to human "creativity".
- So here's my opinion, FWIW. I wouldn't merge any of them into this article. However, I would support a merge of "algorithmic art" and "computer generated art" because it seems to me that they are very close to the same thing. Some might say "algorithmic art" is an approach to computer generated art, or others might say all computer generated art is "algorithmic" by definition. They are intertwined and could be merged.
- And, I'm guessing that all these articles would benefit from a thorough review that sorts out the material correctly and has good "Main" and "See also" tags. Sadly, this is the kind of thing that Wikipedia often has trouble accomplishing -- it requires a pretty dedicated editor or team to get it right. So I'm glad you did this review, and I hope that you continue to work on this. Wikipedia needs this kind of "relevance" editing more than any other kind. ---- CharlesGillingham (talk) 07:29, 14 August 2021 (UTC)
- Closing, with no further merges (noting that "algorithmic art" and "computer generated art" have already been merged), given the uncontested objection, no support and stale discussion. Klbrain (talk) 15:50, 24 December 2021 (UTC)
Update needed to cover deep learning transformers: GPT, DALL-E, etc.
editThis article seems to be out of date -- it doesn't include anything from the last two years. I will tag it shortly unless someone here has a plan. ---- 2A01:B747:27:344:591C:E6BF:6C1E:E44B (talk) 15:04, 3 August 2023 (UTC)
- Feel free to tag it as out of date. Part of the issue here is that there are multiple articles -- Artificial intelligence art, Computational creativity, Generative artificial intelligence, Synthetic media, Synthography, and more -- that all cover roughly the same topic, which divides editors' time, leading to issues with all of these articles. It's probably worth trying to merge some or all of these articles so we have fewer articles to focus on. Elspea756 (talk) 15:32, 3 August 2023 (UTC)
Wiki Education assignment: Digital Writing
editThis article is currently the subject of a Wiki Education Foundation-supported course assignment, between 26 August 2024 and 13 December 2024. Further details are available on the course page. Student editor(s): Coconutloco34 (article contribs). Peer reviewers: Excalibur456, GingerStoleMyBread.
— Assignment last updated by Gcutrufello (talk) 17:30, 5 November 2024 (UTC)