Talk:Artificial consciousness/Archive 13


Objective less Genuine AC

By "less Genuine" we mean not as real as "Genuine" but more real than "Not-genuine". It is an alternative view to "Genuine AC"; on this view, AC is less genuine only because of the requirement that the study of AC be as objective as the scientific method demands, whereas according to Thomas Nagel consciousness includes subjective experience that cannot be objectively observed. It does not intend to restrict AC in any other way.
An AC system that appears conscious must be theoretically capable of achieving all known objectively observable abilities of consciousness possessed by a capable human, even if it does not need to have all of them at any particular moment. Therefore AC is objective, always remains artificial, and is only as close to consciousness as we objectively understand the subject. Because of the demand to be capable of achieving all these abilities, computers that appear conscious are a form of AC that may be considered strong artificial intelligence, but this also depends on how strong AI is defined.

To start with, I'd say this needs to be much clearer about what the actual point/position is and who holds it, and to have the sub-issues disentangled (e.g., scientific observation of C vs whether it's there, objective vs artificial, AC vs strong AI, etc.). And the first sentence of the second paragraph is either part of the "less Genuine" position that needs to be explained, sourced, and related to the position; or it's a general statement about AC which is thus either original research or mistaken about many people's views. It implies that there couldn't be a dog-mentality AC, because it wouldn't be "capable of achieving all known objectively observable abilities of consciousness possessed by a capable human", and that there couldn't be aliens very different from humans (e.g., not capable of pain, etc.) but nonetheless conscious. One could hold this, but many people do not. Much more to say, but that should be enough; thx again and hope that helps, "alyosha" (talk) 06:40, 3 January 2006 (UTC)

Suggest:

Artificial Consciousness need not be as genuine as Strong AI; it must be as objective as the scientific method demands and capable of achieving the known, objectively observable abilities of consciousness, except subjective experience, which according to Thomas Nagel cannot be objectively observed.

The point is to differentiate AC from Strong AI, which by some approaches means simply copying the content of the brain; no paper states this as the aim of AC. There are no such terms as "genuine AC", "not genuine AC" etc.; these were invented by a person who was banned from editing this article indefinitely by decision of the Arbitration Committee. Overall the text of the article is too long; I have always said it should be shortened so that the reader can follow it. Tkorrovi 01:54, 7 January 2006 (UTC)

If one really wants to improve this article, please notice that the very first sentence was edited wrongly: nowhere is it said that the aim of AC is to produce a definition of consciousness; the aim of AC is to implement known and objective abilities or aspects of consciousness. The definition by Igor Aleksander said "defining that which would have to be synthesized were consciousness to be found in an engineered artefact", which, as anyone can see, says something very different from producing a "rigorous definition of consciousness". Tkorrovi 02:48, 7 January 2006 (UTC)

Suggest:

This article uses the words intelligence, consciousness, and sentience interchangeably. In fact, an artificially conscious program would more correctly be described as an artificial >sapience<, as sapience implies complex reasoning on the level of a human. Sentience, by contrast, merely reflects the ability to feel. Almost all multicellular animals have the ability to react to their environment through a nervous system, and so can to some degree be considered 'feeling' and therefore sentient. It is quite possible that a program might be written in such a way as not to incorporate feeling at all. Additionally, the difference between intelligence and consciousness is quite great. An intelligent being might be capable of performing a complex task, reacting to its environment accordingly, without actually using any reasoning, and it is reasoning that defines sapience.

(gigacannon, 01:26 GMT 15 May 06)

We cannot use our own terms here; despite all the criticism about "original research" in this article, this article is about what was written in various papers. The term "consciousness" has mostly been used regarding humans, so it is about the kind of awareness which humans have, or anything else which may have the same kind of awareness. Even bacteria have the ability to react to their environment; the difference is how advanced such awareness is, in that the awareness of humans is so advanced that the brain can model every kind of external process. But then, all these thoughts are for us to understand the things; what we can write in the article is how exactly these things were explained in various papers. Tkorrovi 15:03, 24 May 2006 (UTC)

Personality, personal identity, and behaviourism.

"Personality is another characteristic that is generally considered vital for a machine to appear conscious. In the area of behavioral psychology, there is a somewhat popular theory that personality is an illusion created by the brain in order to interact with other people. It is argued that without other people to interact with, humans (and possibly other animals) would have no need of personalities, and human personality would never have evolved. An artificially conscious machine may need to have a personality capable of expression such that human observers can interact with it in a meaningful way. However, this is often questioned by computer scientists; the Turing test, which measures a machine's personality, is not considered generally useful any more".

Does this make sense to anybody? Aspects of personality, such as extroversion/introversion are well-defined, objectively measurable and fairly fixed. They are even describable as behavioural dispositions. Since when was something an illusion because it need not necessarily have evolved? Is the human appendix an illusion? It is hard to see how even a behaviourist could dismiss them as illusory. Does the article really mean "personal identity"? (cf Dennett's "center of narrative gravity").1Z 17:41, 22 January 2007 (UTC)

Illusion is not the correct word, although the following statements certainly make sense. What about something like 'a construct of the brain' ? Stringanomaly (talk) 19:55, 2 April 2008 (UTC)

Illusion is certainly not the correct word. It seems that someone wanted to spread the "consciousness is illusion" idea everywhere, which is really a hidden homunculus argument. So some sentences about these arguments seem to have been inserted in improper places. It seems that this person was also an idealist, considering the amount of text about idealist philosophy inserted in the article, which might be a reason why this person didn't like Artificial Consciousness, which aims to explain things rationally. This paragraph has not been in the article for a long time now. Tkorrovi (talk) 22:11, 3 April 2008 (UTC)

Here's another one

"The phrase digital sentience is considered a misnomer by some, since sentience means the ability to feel or perceive in the absence of thoughts, especially inner speech. It suggests conscious experience is a state rather than a process".

What's going on here? Can't digital systems do processing?1Z 18:50, 22 January 2007 (UTC)

This is almost a great article

I've marked this article for cleanup and started in on it.

I think that there is some very solid information in this article, but it's very uneven and confusing at the moment. The strongest section is the description of the work of Franklin, Sun and Haikonen. It seems to me that their understanding of their own field should carry the day. Specifically, these are my criticisms:

  1. A few paragraphs sound like original research. These should be cut.
  2. A few paragraphs sound like someone with only a passing familiarity with AI or consciousness simply mulling the subject over. These should be cut.
  3. Several subjects are discussed two or three places, such as Searle, self-awareness, etc. The article should be reorganized so that each is mentioned only once.
  4. A few sections are about closely related, but different, subjects. These need to be tied to the main subject and should probably be substantially shortened. The other subjects are (at least):
    1. artificial general intelligence, or what futurists call Strong AI.
    2. philosophy of artificial intelligence, (Turing Test, Searle etc)
    3. Philosophy of consciousness and the hard problem of consciousness
    4. artificial intelligence in fiction
  5. The intro needs to put the subject into context -- specifically, it must distinguish artificial consciousness from artificial intelligence and from artificial general intelligence, and it must cite sources that make this distinction. It should also specify its relationship to the philosophy of artificial intelligence.
  6. The whole article doesn't flow, mostly because the titles of some sections don't capture their content, the lead paragraphs of some sections don't provide any context, and some paragraphs don't seem to be on the same topic as the rest of their sections.

To identify the original research (and other bull----), we need to reference each paragraph. Since harvard references seem to be more common in the stronger sections of the article, we should use harvard referencing throughout. Once the references are in place, then we can begin reorganizing and cutting.

Tonight I'm cleaning up most of the references with {{Harv|...}} and {{Citation|...}} templates and marking a few paragraphs as original research. ---- CharlesGillingham 11:28, 26 September 2007 (UTC)


I agree with all you have said, but I can't help much as I don't understand much.
At the beginning there should be a definition or a paragraph that is clear for non-specialist users coming to this page.
The "THAT WHICH WOULD HAVE TO BE SYNTHESIZED WERE CONSCIOUSNESS TO BE FOUND IN AN ENGINEERED ARTIFACT" is not exactly such a thing.
The phrase "The brain somehow avoids the problem described in the Homunculus fallacy"
I don't think it should be here, as I am pretty sure that AI researchers don't see the Homunculus fallacy as a problem.
Computers do video recognition, like human face recognition.
Using this recognition, computers perform some actions, like warning that there is a criminal.
This is how the brain works: it does video recognition and acts accordingly to serve human needs.
The Homunculus argument is like that theory in cosmology with the universe sitting on the back of a turtle :).
So I think the Homunculus argument is not a scientific thing to link Artificial consciousness to. Raffethefirst (talk) 12:59, 14 March 2008 (UTC)
The definition "THAT WHICH WOULD HAVE TO BE SYNTHESIZED WERE CONSCIOUSNESS TO BE FOUND IN AN ENGINEERED ARTIFACT" was taken from a peer-reviewed paper by Igor Aleksander, which was one of the earlier papers about Artificial Consciousness. So there we have no choice; this is how the field was defined when it was founded. And I would say that this definition is good enough. Of course it would be clearer for an average visitor to explain it a little more, but then there would be the problem that these explanations would not be taken from a peer-reviewed paper, which here would again be considered original research. So there is likely no better solution than to have only this definition there, as this definition is adequate, only perhaps somewhat difficult for average visitors to understand.
I agree regarding the homunculus argument; I also don't think that the homunculus argument is serious science. But some people think otherwise, so their view should also be represented somehow. Maybe the solution would be to add a link to the homunculus fallacy article to "See also"; at least I think that this argument is not so important that it should be mentioned at the beginning of the article. I think we should wait here for the opinion of the others, and after some reasonable time make some edits concerning the homunculus argument. Tkorrovi (talk) 21:23, 16 March 2008 (UTC)
As I said above, I think this article should (1) reflect the views of the researchers who actually believe artificial consciousness is important, and (2) have references to their publications in every paragraph. In line with this, I would leave Aleksander's definition (but perhaps try to make it clearer) and cut the second paragraph, because it has no source. The same sort of standard has to be applied to the rest of the article. The biggest problem with this article is that it contains too much original research. ---- CharlesGillingham (talk) 17:20, 17 March 2008 (UTC)
OK, at least there is now a consensus to remove this sentence:
"The brain somehow avoids the problem described in the Homunculus fallacy (such as by being the homunculus) and overcomes the problems described below in the next section."
So I removed it and added a link to the homunculus fallacy to "See also". This sentence was wrongly placed between two other sentences; now this paragraph is as it was likely originally written, and makes much more sense. Also, this sentence seems to be wrong, as the homunculus fallacy is about theories which try to explain how the brain works, not about the brain itself. If you or Raffethefirst disagree with that, feel free to restore that sentence. Also, there is a lot written about dualism in the article; I don't think that dualism is so relevant to science, no matter what someone's personal view may be. Also, Dennett's multiple drafts principle was meant to be a solution to Ryle's regress, but this is not mentioned in the article. Tkorrovi (talk) 06:01, 22 March 2008 (UTC)

Two Anticipation paragraphs

In the section Consciousness in digital computers, there are two separate paragraphs on the same topic, anticipation. I suggest someone go and delete or merge the paragraphs. 220.233.7.38 11:17, 14 October 2007 (UTC)

This article is marked for cleanup. I am hoping that some of the original authors will return to help pull this article back together. ---- CharlesGillingham 18:17, 15 October 2007 (UTC)

I removed the link to "Proposed mechanisms for AC implemented by computer program: Absolutely Dynamic Systems", as it is an unknown work made by a person without any affiliations, without a proper literature review, and without experimental validation. Moreover, this link has been spammed by its author in several internet forums as well as newsgroups. The bottom line is that this work is unknown and poorly written, and has been ignored by the scientific community (nobody has ever cited it). Thus it should not be in Wikipedia.

Link restored. This is a link to an open source software project relevant to the article; links to open source software projects should not be removed. Tkorrovi (talk) 06:07, 5 February 2008 (UTC)
Accusations about spamming have no ground; whoever claims so should provide an example. Tkorrovi (talk) 06:47, 5 February 2008 (UTC)

Planning on gutting this article

I marked several sections as unsourced a few years ago. I'm going to toss them soon unless someone wants to save them. ---- CharlesGillingham (talk) 09:14, 6 January 2009 (UTC)

will agents ever say "Please let me out of here!" ;)

... A COnscious MAchine Simulator – ACOMAS Salvatore Gerard Micheal, 06/JAN/09

Objects

  2 cross verifying senses (simulated)
     hearing (stereo)
     seeing (stereo)
  short-term symbol register (8^3 symbols arranged in 8x8x8 array)
  rule-base (self verifying and extending)
  3D visualization register
  models of reality (at least 2)
  morality (unmodifiable and uncircumventable)
     don’t steal, kill, lie, or harm
  goal list (modifiable, prioritized)
  output devices
     robotic arms (simulated – 2)
     voice-synthesizer and speaker
     video display unit
  local environment (simulated)
  operator (simulated, controlled by operator)

Purpose

test feasibility of construct for an actual conscious machine
discover timing requirements for working prototype
discover specifications of objects
discover implications/consequences of enhanced intelligence
 (humans have 7 short-term symbol registers)
discover implications/consequences of emotionless consciousness

... sam micheal Δ (talk) 17:22, 7 January 2009 (UTC)

Why do you write it here? First of all, this shows your poor understanding of Artificial Consciousness; you should at least clearly specify what aspects of consciousness your system is supposed to model, and then find out whether it really does that. May I suggest that you write about your systems in some more proper place on the Internet and not here; if you ever create a working system, you may create a project on SourceForge. Tkorrovi (talk) 13:41, 19 January 2009 (UTC)

Desperately seeking sources

The problem I have with this article, except for the "research approaches" section, is that there are almost no sources. We need to prove to the reader that they can trust this article. They're not going to trust us if we don't supply the sources for our information. We need mainstream articles (journal articles, major newspaper articles, major magazines, etc.) that prove that what we say is the normal, usual discussion of the subject. We need sources that say "artificial consciousness researcher Ron Sun argues that X". These unreferenced sections sound like someone just mulling things over. The reader needs to know who it is who thinks these things. ---- CharlesGillingham (talk) 14:34, 4 March 2009 (UTC)

We shouldn't be discussing what consciousness is on the page, we should be explaining what Bernard Baars thinks consciousness is.---- CharlesGillingham (talk) 14:37, 4 March 2009 (UTC)

The problem with artificial consciousness is that it is not yet a very well-formed field, so it is already quite difficult to have an article which gives readers at least some idea about the field. I understand the desire to improve, but the result here has often been some utter nonsense added to the article. I'm glad that we finally got rid of it, and the article is now quite satisfactory. Tkorrovi (talk) 01:07, 5 March 2009 (UTC)

Now we're getting somewhere! Your recent edits are exactly what this article needs. ---- CharlesGillingham (talk) 18:17, 5 March 2009 (UTC)

Strange that I failed to find a paper titled "Using statistical methods for testing AI" ;) Tkorrovi (talk) 09:50, 6 March 2009 (UTC)

Schools of thought

According to Persian Philosophers of 10th CEN The Conscience, The awareness is not an attribute nor it is out of our needs to comprehend- they are there like sky wind atoms rays of light all that is seen or not seen. These quantities are real and one can experience them independent of any other observer (as claimed by some quantum physicists). Even Avesina expressed it as a "hanging man in space" which knows at all times that he is there.

It is not easy to describe the source of all descriptions and the totality of it rather we should suffice us with skills-functions-calculations etc. and find a better term instead of AI.

Whoever added this: please reword, explain more clearly, and add sources. I understand that this is an attempt to say something important, but because it is worded poorly, it remains unclear to most readers. Tkorrovi (talk) 17:59, 21 March 2009 (UTC)

Framework for artificial consciousness

The roadmap to reach artificial consciousness does not appear in the article. A specific framework might be needed for this consciousness to appear, like the earth for human bodies. The difficulty is also to define a way to detect the presence of conscious behaviour within a set of machines and numeric entities. Ivan Lovric ESECO SYSTEMS 2009 Lovric (talk) 09:30, 30 June 2009 (UTC)

There is no consensus as to what that roadmap should be. See the proceedings of some of the AGI conferences, for example. linas (talk) 21:55, 9 November 2009 (UTC)

List of other projects

I've got a list of other AGI-ish projects not mentioned here, listed at http://linas.org/agi.html . Some are serious, some less so. These include:

  • Pei Wang's NARS project
  • Stan Franklin's LIDA
  • MultiNet
  • SNePS
  • Project Halo
  • Nick Cassimatis's PolyScheme
  • OntoSem - Ontological Semantics
  • General Intelligence Research Group

Some of these should be added to this article. NARS, SNePS and PolyScheme seem to be among the more advanced projects not yet listed on this page. linas (talk) 21:55, 9 November 2009 (UTC)

Are these all directly concerned with replicating consciousness (as opposed to general intelligence, or strong AI)? ---- CharlesGillingham (talk) 18:47, 11 November 2009 (UTC)

AC and AGI

The terms "Artificial Consciousness" (or "Machine Consciousness") and "Artificial General Intelligence" should not be conflated; they do not mean the same thing. AGI is about replicating human intelligence completely, while AC is about modelling aspects of consciousness, not necessarily all of them, and not necessarily completely. Also, intelligence and consciousness are different terms with different scopes. Tkorrovi (talk) 17:01, 15 November 2009 (UTC)

Indeed they do not mean the same thing. The AC definition above is good. The AGI definition is too restrictive; "building machines capable of broad (ultimately general) adaptive action by building an integrated set of intelligent traits (that will include some form of learning)" would be more accurate (see Ben Goertzel's books). Most individual projects within AGI have tighter scopes, and human-level capability in the traits modelled is often (but not always) stated as an objective.
Some would argue that AC (as a trait) is part of AGI. AGI is most definitely not part of AC.
With regard to projects (see also the comment above): OpenCog (not explicitly about consciousness) does not appear to belong on this page. Other relevant projects should only be included if they exemplify an important idea, or are notable in some other way. Simply being about artificial consciousness is not enough. Bitstrat (talk) 10:24, 31 March 2010 (UTC)

Strong AC vs Weak AC

No mention of the distinction between strong and weak approaches to artificial consciousness? —Preceding unsigned comment added by 86.151.16.194 (talk) 23:26, 17 May 2010 (UTC)

There are no such terms, and we cannot invent them. But what you call "Strong AC" is likely the same as "Strong AI", if you want it to be very "strong", like an exact copy of the human brain. Tkorrovi (talk) 20:15, 28 May 2010 (UTC)

Definition

Igor Aleksander's definition is tentative; it is about finding that which would have to be synthesized if consciousness were to be found in an engineered artifact. "Were consciousness to be found" is only a conjecture there, not an aim or final result. Tkorrovi (talk) 18:50, 28 November 2010 (UTC)


OK, I have read some of the previous comments, but can someone explain to me what this means:

"... whose aim is to define that which would have to be synthesized were consciousness to be found in an engineered artifact".

I do not care that someone has copied it directly, or from where, I just want to be able to understand what the heck it means?
  • Why would anyone want to synthesise anything after consciousness was discovered?
  • An engineered artifact? A TV? A robotic arm? A gearbox?
  • Synthesise?
Perhaps a definition should be given that defines the subject as clearly as possible, and then mention that it came from the definition originally given by Aleksander.
If I cannot understand it, I am sure that 50% or more of the rest of the readers will not either. It would make much more sense if it said: "... whose aim is to define that which would have to be synthesized to (determine if/enable) consciousness (exists in/to exist in) an engineered artifact" (delete one set of the pairs, left or right).
I am pretty sure that the quoted source is not correct. It seems to me that the paper quoted should be "Towards a Neural Model of Consciousness", ICANN94, Aleksander, 1994, as the present linked ref is to an update of the original: "The concept of a theory of artificial neural consciousness based on neural machines was introduced at ICANN94, (Aleksander, 1994)"
Can someone find a definitive plain-English definition of artificial consciousness please? Chaosdruid (talk) 12:55, 18 July 2011 (UTC)


OK, then what is the problem? The definition in the article is Igor Aleksander's original definition; we are not allowed to write our own definition there. Yes, this definition is from the second part of Igor Aleksander's paper, the part called "An update". So what does that minor detail change? The rest is just an interpretation based on the assumption that no one understands the meaning of the statement. Naturally not all people understand it, the same as I may not understand some article on chemistry. Nevertheless there must be the original definition and not someone's own definition. Tkorrovi (talk) 20:47, 21 July 2011 (UTC)


First of all, if you are going to start moving people's posts around to suit your own decisions on whether or not this is actually the same as the previous one, please remember to indent (yes, I can read, and would have put it in there if I had thought it was the same).
Secondly, this is not the same as your post. Your post is a reply to someone else, or a statement, and should be kept separate from my post, which is actually asking for a better definition.
Thirdly, in answer to your questions:
  • Re-read my post and try to understand it.
  • I am not trying to write my "own definition" - I am seeking a correct definition that is either from the original paper, or a better one that accurately describes the concept, not some strange badly written definition that talks about synthesising something once it has been found. The steps to take once it is found are not a definition of it.
  • It is not a minor detail; the definition is fundamental to the understanding of the concept.
  • The rest is not "an interpretation based on the assumption that no one understands it"; it is a fundamental issue that the definition is not correct and a definition for it is not here.
  • "Naturally all people do not understand it" is a ridiculous statement to make; arguing that people are too unintelligent to understand it does not mean that the definition is correct.
  • That is not the original definition; it is a definition of what steps to take to synthesise something (it is never explained what this means) once it has already been discovered.
I note that you are probably going to try and defend the statement and its inclusion again; however, to avoid going round in circles, either find a better definition, the original paper's definition, or explain the definition so that I can appreciate why both I and Charles Gillingham ("I would leave Aleksander's definition (but perhaps try to make it clearer)" - see the above comments) think it is a little off and needs better definition.
Lastly, try to be less defensive and work more collaboratively, and certainly do not assume that other editors are simpletons. Chaosdruid (talk) 21:14, 21 July 2011 (UTC)


Chaosdruid said "It is not a minor detail, the definition is fundamental to the understanding of the concept".
Please read what I wrote; it is not possible to argue with you with your ignoratio elenchi. And if you have nothing better to offer, it makes no sense to argue here about minor particulars. Tkorrovi (talk) 19:43, 22 July 2011 (UTC)


My ignoratio elenchi? Lol, I suppose that is one way to say "I reject your argument as I do not understand it", but giving wonderful Latin insults (used incorrectly) is going to get you nowhere, as you are incapable of understanding the basic and simple problem - the definition is incorrect and incomplete. It is akin to saying "the study of death is that which we need to synthesise were we to discover someone were dead".
I suggest that you read it all again, open your mind, put away your boxing gloves, and try and be a little less confrontational.
An encyclopaedia is not something to keep a record of all the clever and intellectual ideas that are incomprehensible to the general reader; it is there to document and explain them.
  • The activity of creating systems which perform in ways which require consciousness when humans perform in those ways.
  • Weak Artificial Consciousness: design and construction of machines that simulate consciousness or cognitive processes usually correlated with consciousness.
  • Strong Artificial Consciousness: design and construction of conscious machines.
And here is some further reading for you [1]
Stop attacking now and try to get your head onto the task of better defining what artificial consciousness is, rather than trying to defend a definition of "what we need to build once it exists". Chaosdruid (talk) 02:43, 23 July 2011 (UTC)
Now you claim that I insulted you? That I'm attacking you? This is not true. Your offensive writing style is not a way to make me agree with changing the definition in the article. Unless there are better definitions, the definition must remain the same. Tkorrovi (talk) 12:38, 23 July 2011 (UTC)
Did not insult me? Calling my posts "ignoratio elenchi" is insulting.
Offensive? Lol, here we go again, insults as to my intent and behaviour. I will try to explain it in as few words and as simply as possible:
That is not a definition of Artificial consciousness - it needs a better one.
If the article were "Device to contain/allow artificial consciousness", it would be.
I am disagreeing with the definition; it seems there is no-one else joining in this discussion, so at the moment there is no consensus to leave it in there as "the definition", nor to remove it - the definition is being challenged though, and you must respect that. Chaosdruid (talk) 16:33, 23 July 2011 (UTC)


I have to agree with Chaosdruid that the current definition is obtuse, at least for a native English speaker like myself. I just read Aleksander 1995 and I don't even think he is giving a definition of "artificial consciousness"; he says that the paper defines "that which would have to be synthesized". That's a different thing.

My own view is that "artificial consciousness" has a straightforward definition: "consciousness in a machine". Ta-da. There is no need to obfuscate the definition by renaming a "machine" as "a man-made artifact", or to point out that every aspect of a machine is "synthesized". This is what "machine" means in ordinary English usage.

The thing which is difficult to define is, of course, consciousness. However, note that John Searle writes "it is not difficult to give a commonsense definition of consciousness", that most philosophers agree that we all know what consciousness "feels like", and that neurologists and philosophers have hammered out a list of basic properties (see the article on consciousness).

However, AI researchers, at least in my reading, often tend to stray from the definition of consciousness as it is understood in neuroscience and philosophy. So we also need to discuss what Gerald Edelman, Bernard Baars and, yes, Aleksander, think consciousness is because it may be different. This requires research, which, unfortunately, I do not have time to do. ----CharlesGillingham (talk) 01:44, 24 July 2011 (UTC)


CharlesGillingham wrote "He says that the paper defines "that which would have to be synthesized". That's a different thing."
Yes of course it is a different thing. Because it does not define consciousness, it defines artificial consciousness. Which is a different thing.
I asked for help from WikiProject Philosophy at 13:12, 23 July 2011 (UTC) [2], which was before Chaosdruid asked for help from you at 16:41, 23 July 2011 (UTC), but no one came to help. Even though it is their responsibility to help in such cases. But such are the things in Wikipedia.
The main point is that there must be an official peer-reviewed definition in the article, and it must not be replaced by a definition written by a Wikipedia user. Even if someone thinks the user-written definition is better, it would still not be an official peer-reviewed definition. I sincerely hope that you understand what i'm standing for here. Tkorrovi (talk) 02:13, 24 July 2011 (UTC)


Charles said, quite clearly, "I don't even think he is giving a definition of "artificial consciousness"."
Yet here you are, saying that Charles is talking about consciousness and not artificial consciousness.
Perhaps your understanding of English is a barrier, or perhaps the translation of Aleksander was not correct; either way, something that seems really obviously muddled and a non-definition to the two native English speakers is still being defended by you as an absolute truth.
It is not the responsibility of any project to weigh in on an article. Even so, I am a member of the Robotics project, and Charles is a very well respected editor in regard to AI articles - though I have only had contact with him once or twice in the past three years, I am familiar with his comments and discussions across a wide variety of articles involving AI and computer science. The concept of "only a peer-reviewed definition will be acceptable" seems to have missed the mark when we are your peers. Adding "must not be replaced" makes it begin to sound like WP:OWN? More importantly, anything here is going to be written by a "Wikipedia user", as it is us that write the articles, be they quotes from others or not; it is up to us to ensure that accuracy is maintained, using quotes from others or defining things ourselves.
I will do the research tomorrow.
By the way Tkorrovi, are you involved in the AC field? Chaosdruid (talk) 04:35, 24 July 2011 (UTC)


Okay. First, both of you should calm down, and slow down. It is impossible to accomplish anything when you are firing angry responses back and forth at each other. Second, the Alexander paper should not be considered an "official peer-reviewed definition" -- it is not even a "reliable source" in the Wikipedia sense, since it comes from a volume of conference proceedings, which usually are not peer-reviewed. In this case what we clearly have is a poorly worded definition written by somebody who does not speak English as a first language. I agree that it should be replaced by something easier to understand. (The second paragraph of the lead is also a serious misstatement of what "neuroscience" believes, but we can deal with that later.) Looie496 (talk) 05:33, 24 July 2011 (UTC)

Conference proceedings are peer-reviewed. Igor Aleksander is a prominent scientist; he was born in South Africa and worked in the UK, thus he is not "somebody who does not speak English as a first language". Please read this long discussion above; don't you find that this user (Chaosdruid) already went too far? Wouldn't it be more reasonable to ask user Chaosdruid to stop writing his/her long comments here? His/her opinion is known. My opinion is known. I'm against changing the peer-reviewed definition unless someone proposes a better peer-reviewed definition. What do you have against that? This definition is important for the whole article; many people have seen it here for the last seven years and did not decide to change it. Considering the importance of that definition, i would like the opinions of these people to also be taken into account. I will now stop commenting here and let the others say their opinion. Because i'm sorry, but i'm not willing to participate in this frenzied argument initiated by Chaosdruid. Tkorrovi (talk) 07:57, 24 July 2011 (UTC)
His article says that he was born in Croatia and educated in Italy and South Africa. Also, I have published in a number of conference proceedings volumes -- most of them are peer-reviewed only to a minimal degree, and I have never seen one that is peer-reviewed to the same level as a good journal paper. I fully agree that long comments are bad, especially when they are written in a highly emotional tone. However, the bottom line is that I do not think Alexander's definition is a good one -- not because it is wrong, but because it is very awkwardly worded. I bet there is a good chance that Alexander himself would not like it very much nowadays, if you asked him. Most academics treat conference proceedings as low-value publications and don't put much effort into them. Looie496 (talk) 16:55, 24 July 2011 (UTC)

Artificial Feelings

Computers are useful as tools. Their usefulness comes from their goal-orientedness in the tasks that we ask them to do. Each goal creates needs: some things need to be done in order to reach that goal. The tendency to answer those needs of the goal, to do those things, is perfectly analogous to human feelings, which likewise answer human needs for the healthy functioning of the individual, of the society, and of the living environment. So each computer can be seen as having task-oriented "feelings", and via holistic objectivity these computers' feelings become the same as those of holistically objective humans. InsectIntelligence (talk) 08:01, 27 January 2011 (UTC)

I don't think these ideas are right. But more importantly, this page is for discussion about the article and not for discussion about the subject, this has to be done somewhere else in the Internet.Tkorrovi (talk) 00:13, 6 February 2011 (UTC)

Would this be a plausible test for consciousness?

I have a question for anyone who knows their way around these topics. Say Ray Kurzweil's predictions in The Singularity Is Near for the 2030s decade turn out to be true, and we get nanobots that work as "experience beamers", allowing other people to experience someone's daily life like in Being John Malkovich. Would this be direct access into first-person consciousness? If this is an appropriate method of testing, should it be added here? It just seemed like it could fit here when I read the article on Kurzweil's book.

Thank you in advance to whoever replies. 99.255.50.214 (talk) 16:32, 2 March 2011 (UTC)

Defining consciousness

As much as I sympathize with Chaosdruid's request, "Can someone find a definitive plain-English definition of artificial consciousness please?", a good definition of artificial consciousness is not possible without a good definition of consciousness. We all experience it and much has been written on it, but I suspect that understanding consciousness is not now possible, because the necessary concepts have not yet been discovered, or at least not widely published. Consciousness science seems to be as confused and undeveloped now as chemistry was in 1774 when Joseph Priestley discovered oxygen but called it "dephlogisticated air". There is no such thing as phlogiston, and this false concept blocked progress in chemistry for several years. Likewise, people who now try to define consciousness make use of near-synonyms such as awareness and feelings, which is just as counterproductive as confusing oxygen and air. New concepts are needed before further progress in understanding consciousness and designing an artificial consciousness system can be possible. Greensburger (talk) 18:54, 24 July 2011 (UTC)

A reliable source on this is a 1144 page book "Human Brain Function" (pub 2004) by Richard Frackowiak and 7 other neuroscientists. Chapter 16 is "The Neural Correlates of Consciousness" (32 pages). On page 269 is the author's apology: "We have no idea how consciousness emerges from the physical activity of the brain and we do not know whether consciousness can emerge from non-biological systems, such as computers... At this point the reader will expect to find a careful and precise definition of consciousness. You will be disappointed. Consciousness has not yet become a scientific term that can be defined in this way. Currently we all use the term consciousness in many different and often ambiguous ways. Precise definitions of different aspects of consciousness will emerge ... but to make precise definitions at this stage is premature." Greensburger (talk) 14:04, 28 July 2011 (UTC)

Addressing negative aspect of sentience

I think this article should mention the possibility of sentience resulting in negative consequences, for example, possible future rebellions against humanity. Maybe some article should include what people think about this or what people might do to prevent such events from happening in the future. I've been reading the AI page and it doesn't address any of this so maybe this page should include some info and analysis about it. M0rphzone (talk) 20:26, 28 August 2011 (UTC)

This problem has been widely explored in the science fiction literature, starting with Frankenstein's monster and Isaac Asimov's Three Laws of Robotics. Greensburger (talk) 02:25, 29 August 2011 (UTC)

Field-hypothesis for Consciousness

Consciousness may be a 'balanced electrostatic-field', as proposed in the article "A hypothesis for consciousness" by Hasmukh K. Tank, published in a journal of the National Bio-Medical Society. A reprint of this article is accessible at: http://sites.google.com/site/theultimaterealitysite. We may have to generate not only the information-processing circuits but also a 'balanced electrostatic-field', similar to the field produced by the collection of 10^10 neurons, to invoke consciousness in the artificial brain, according to Hasmukh K. Tank. 123.201.22.138 (talk) 15:17, 31 October 2011 (UTC)

Defining consciousness as an electrostatic field is as obsolete as defining life as an élan vital (vital force). Consciousness has complex structures which do not resemble electrostatic forces. Greensburger (talk) 01:29, 5 November 2011 (UTC)

The preconceived notion that "consciousness has complex structures" is the main obstacle to understanding consciousness for present-day researchers. The information-processing part of the brain is of course very complex, but the physics of 'the subject', i.e. 'awareness' or 'consciousness', is not so complex, nor is it the information-processing. 122.102.125.57 (talk) 08:45, 12 November 2011 (UTC)

That's all beside the point -- the paper does not merit discussion in this article because it was not published in a reputable journal and has not been discussed in any paper published in a reputable journal. For Wikipedia's purposes it is original research, which we do not use. Looie496 (talk) 22:56, 12 November 2011 (UTC)

Keep reading reputable journals from the west, for centuries. As far as the east is concerned, the problem of consciousness was already solved five thousand years ago. 122.179.176.26 (talk) 09:13, 13 November 2011 (UTC)

An electrostatic field operates by very specific laws of physics. One has yet to prove that these exact laws of physics are also the laws of consciousness before claiming that an electrostatic field is consciousness. Certain philosophical principles which can help to explain consciousness, such as the principle of correspondence, have indeed been known for a long time. Tkorrovi (talk) 08:03, 22 November 2011 (UTC)

It would be beneficial to spare a few minutes to read the article in question. Our ultimate goal is to understand consciousness. Whether an article was published in a reputable journal as per our classification is not so important; what is important is whether it adds to our knowledge of consciousness. 123.201.22.176 (talk) 13:38, 9 December 2011 (UTC)

Not so in Wikipedia. We do not aspire to write the truth. We aspire to report what reliable sources say is the truth. Please read WP:VER. And remember, there are other places besides Wikipedia where you can publicize these ideas. ---- CharlesGillingham (talk) 23:01, 10 December 2011 (UTC)

Subjective experience and the uncertainty principle

Someone wanted to remove the second half of this text (the reference to the uncertainty principle):

"Subjective experiences or qualia are widely considered to be the hard problem of consciousness. Indeed, it is held to pose a challenge to physicalism, let alone computationalism. On the other hand, there is a similar problem with uncertainty principle in physics, which has not made the research in physics impossible."

Please discuss changes to this paragraph here and don't remove a part of the text without agreeing on the change. The reason given for deleting a part of the text was that the uncertainty principle is well understood, while in fact what is understood, and what enables all the mathematics, is that we know that there is something we don't know. If the uncertainty principle were as well understood as was argued, then we would likely already have quantum computers instead of conventional computers. Just removing text from the article for arbitrary reasons does not make the article better; if one finds that something is not expressed well, please propose how to explain it better. Tkorrovi (talk) 20:23, 6 January 2012 (UTC)

I think this is a poor analogy. There is no (scientific) evidence that the hard problem of consciousness is unsolvable. There is no "uncertainty principle" per se for consciousness. ---- CharlesGillingham (talk) 03:19, 7 January 2012 (UTC)
This is exactly why this text was there: to say that it is not certain that the problem of consciousness is unsolvable. To say that even if subjective experience cannot be modeled, this doesn't mean that the problem of consciousness is completely unsolvable. If you just delete it, the text leaves the impression that the problem of consciousness is unsolvable. The text there was also meant to say "even if", as much as i understand: even if subjective experience is an uncertainty, not that it completely is. So simply deleting it breaks what is there and leaves it unbalanced. Thus if one wants to edit it, one should find a better way to change it. I think the biggest problem there is that it is difficult to say things shortly. For that purpose often something has to be omitted, and that which is omitted may well cause misinterpretations. A problem which can never be perfectly solved. Tkorrovi (talk) 19:09, 7 January 2012 (UTC)
At any rate, this point has been challenged. Per WP:VER, if this point is going to be in the article, it needs a source. We need something of the form "Donald Davidson argues that the hard problem of consciousness is similar to the uncertainty principle in that ...", with a citation. --- CharlesGillingham (talk) 04:33, 7 January 2012 (UTC)
I see that the problem here is saying that the uncertainty principle is "similar", and this needs a citation. But if we say only that the uncertainty principle has not made research in physics impossible, i think this would need no citation, because this is widely accepted knowledge. It is of course possible to add a citation to that too, but this would look somewhat clumsy and unnecessary. Because as i see it, the problem there is that "similar"; i understand that "similar" is there only to say that by both principles there is something which we cannot know. But i had not thought that this "similar" could be interpreted more widely, and indeed it can. So my proposal by now is just to leave out the "similar", which is the simplest change; otherwise it should be explained more. Tkorrovi (talk) 19:09, 7 January 2012 (UTC)
BTW, if the problem is representing two principles as similar without a citation, then there is another such point in that same paragraph which should be challenged. There subjective experience is said to be not only similar to, but the same as qualia, which is by far not accepted by everyone. I propose deleting qualia there, because subjective experience is the term more widely used in consciousness studies. But which of these is the wider term, i don't even dare to say with certainty. Tkorrovi (talk) 20:17, 7 January 2012 (UTC)
I'm not sure what you're saying. Qualia is defined as (individual) subjective experiences. I rewrote the section so that it begins to make sense. On closer examination, I don't know how to fix this. I can't even figure out why this paragraph is in this section. This article has huge problems, in both organization and accuracy, and it has failed to improve for a very long time now. ---- CharlesGillingham (talk) 23:13, 7 January 2012 (UTC)
I know of no place other than Wikipedia where qualia is defined as subjective experiences. Not everything said in Wikipedia is right; we are the editors of Wikipedia and have to make it so that everything said in Wikipedia is right.
I used to think that every paragraph which is there and says something informative should be there. I once tried very hard to make this article more organized, but it appears that when making bigger changes to this article it is completely impossible to find any agreement. It has been very difficult to maintain this article, not to mention organizing it better. Now the initial definition is almost the only thing remaining which ties all things together, and they tried to change even that to something else, so that all consistency would be lost and there would be good reasons to delete the whole article. I have already gone through it: certain people forced in changes which made the article worse, then they ascribed these changes to me, and then they attempted to delete large chunks of the article because they said that i made bad changes and now these parts of the article are worthless.
I have fought a lot to make it possible for this article even to be in Wikipedia. I think now no one disputes any more that this article should be there, but there was a lot of trouble to achieve even that alone; without me it would no doubt have been completely deleted. There is one Russian saying, "tak ne bõvajet", which means something like "it cannot be so". But to understand what it really means, one has to understand what the Russians mean by that.
The expert tag was on the article for who knows how long; then it was removed, because it was of no use. Thus evidently this tag does not help to make anything better. Tkorrovi (talk) 23:41, 7 January 2012 (UTC)

Biological naturalism

The article summarises biological naturalism as "that consciousness can only be instantiated in biological systems" and cites Searle. But Searle is very clear that this is not the case:
"The fact that brain processes cause consciousness does not imply that only brains can be conscious. The brain is a biological machine, and we might build an artificial machine that was conscious; just as the heart is a machine, and we have built artificial hearts. Because we do not know exactly how the brain does it we are not yet in a position to know how to do it artificially."
I'm not really sure how to correct this section of the article as it stands, because it then contrasts biological naturalism with the type-identity theory, the latter being defined as pretty much what Searle's position actually is.MijinLaw (talk) 16:26, 30 January 2012 (UTC)

Removed. Indeed i have noticed that some people want to promote a certain one-sided view in this article (that it cannot in theory be understood or implemented, neither fully nor partly), now also using false references for that. Tkorrovi (talk) 17:45, 11 February 2012 (UTC)

What is the basic need of

What is the basic need of consciousness? — Preceding unsigned comment added by 99.90.197.87 (talk) 03:04, 22 April 2012 (UTC) to be, it is in question. — Preceding unsigned comment added by 99.90.197.87 (talk) 03:07, 22 April 2012 (UTC)

Artificial consciousness? The basic need is that intelligence is too narrow a term; at least in the common sense it does not include all that we mean by awareness, like modeling the outside processes without prior knowledge about them, making predictions based on these models, etc. So there are or can be several practical or theoretical reasons why something that is meant to be Artificial Intelligence would be inherently restricted because of the scope of the term intelligence. Tkorrovi (talk) 19:25, 24 April 2012 (UTC)

Behaviourists and Turing test

AFAIK there is no way to check or test _real_ consciousness. I mean there is no way to distinguish between a conscious being and a robot that just behaves like it's conscious (the philosophical zombie argument).

Concerning the Turing test: it just suggests that if we can't distinguish a program from a conscious decision-making entity, then we have the right to claim that the program is conscious. "If it quacks like a duck, walks like a duck, looks like a duck, smells like a duck, then for all purposes it's a duck." There is no inherent thing-in-itself duckness.

Also, behaviorists and positivists think that the question is meaningless. That's not reflected in the article.

As my English skills are limited, I prefer not to edit the article myself. Comecra (talk) 04:39, 17 June 2012 (UTC)

> AFAIK there is no way to check or test _real_ consciousness.
No, but we can test artificial consciousness. Artificial consciousness may or may not be real consciousness; the difference is that we can never claim that it is.
> It just suggests that if we can't distinguish a program from a conscious decision-making entity then we have the right to claim that the program is conscious. "If it quacks like a duck, walks like a duck, looks like a duck, smells like a duck, then for all purposes it's a duck."
Now this contradicts the previous statement. If we don't know what something is, then for all purposes we can claim that it is what it seems to be based on some observations? I'm quite certain that it is possible to make a robot which quacks like a duck, walks like a duck, looks like a duck, and maybe also smells like a duck. Yet such a robot does not lay eggs, and we also cannot make a nice duck roast of it. I think we cannot claim anything about a system unless we know something about the system itself, about how it works. Like we can observe that the Sun orbits the Earth, yet this is not true for all purposes.
> Also behaviorists and positivists think that the question is meaningless. That's not reflected in the article.
I don't know who claimed so and where. I don't think it is right that someone would change the article based only on what some person here represents as self-evident and widely known. Tkorrovi (talk) 21:39, 28 June 2012 (UTC)
How do you know that you are conscious? Maybe this is an illusion. :) Can you set up an experiment to determine whether you are conscious or not? Can you imagine a device that will tell you that you are conscious, once you push a button?
Can you teach a 6-year-old child to distinguish robots with AI from robots remotely controlled by a "conscious" being? Can you test his understanding with questions, language, or tasks?
The subject is so unscientific that imho it is considered self-evident by logicians or science-oriented philosophers. It's like how few scientists debate Loch Ness monster believers.
Since Wittgenstein and the Vienna Circle, such problems have been called pseudoproblems. Any ideas about consciousness are not verifiable and do not satisfy Popper's falsifiability principle. It is like talking about an artificial soul. Some people claim that they have a soul. Some people claim that they do not have consciousness. Comecra (talk) 10:11, 6 July 2012 (UTC)
Consciousness is thought to be the way humans are aware of the world around them. But whether something is conscious or not, you may discuss that on the talk page of the article about consciousness; this article is about artificial consciousness.
With artificial consciousness one can model a bacterium as well, never being concerned whether it is conscious. Artificial consciousness just lifts the limits of "intelligence" which restrict Artificial Intelligence, as not everything can be called intelligence. Tkorrovi (talk) 14:43, 6 July 2012 (UTC)
"Any ideas about consciousness are not verifiable and do not satisfy Popper's falsifiability principle."
Please provide a reference for that statement; it looks like an unreasonable exaggeration. It doesn't follow from the fact that consciousness cannot be modeled or cannot be defined.
No matter what you argue, several scientists have explained consciousness. You cannot force the view of logical positivism on all people; it is not the only view. This article is about an existing field in science, based on what some scientists have written. You cannot punish people who write about it in Wikipedia or accuse them of pseudoscience. All referenced views that anyone wanted to add are provided in the article. This is an unreasonable attack. Tkorrovi (talk) 22:25, 6 July 2012 (UTC)

Read this about observation and understanding (modeling): Philosophy_of_science#Theory-dependence_of_observations. If one interprets the Turing test as only observational, then this contradicts these philosophical principles, and such a Turing test does not comply with the scientific method. Tkorrovi (talk) 18:26, 29 June 2012 (UTC)

Definition

Please don't change the original definition by Igor Aleksander as written in his paper. Read about the definition above. One should not add original definitions to the article; one can change the definition only if one finds a source for a new definition, and the definition has to be the exact text from that source. Tkorrovi (talk) 00:42, 10 July 2012 (UTC)

Falsely defining non-conscious entities as conscious

In the "Franklin's Intelligent Distribution Agent" sub-section, the following statements are made:

Stan Franklin (1995, 2003) defines an autonomous agent as possessing functional consciousness when it is
capable of several of the functions of consciousness as identified by Bernard Baars' Global Workspace Theory (GWT).
His brain child IDA (Intelligent Distribution Agent) is a software implementation of GWT, which makes it
functionally conscious by definition. ... IDA interacts with Navy databases and communicates with
the sailors via natural language e-mail dialog while obeying a large set of Navy policies. The IDA computational
model "consists of approximately a quarter-million lines of Java code..." It relies heavily on codelets, which
are "special purpose, relatively independent, mini-agent[s] typically implemented as a small piece of code running
as a separate thread." In IDA's top-down architecture, high-level cognitive functions are explicitly modeled.
While IDA is functionally conscious by definition, Franklin does "not attribute phenomenal consciousness to IDA,
in spite of her many human-like behaviours."
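For illustration only — this is not Franklin's actual IDA code — the codelet idea described in the quote (a small, special-purpose mini-agent running as its own thread, reacting to a shared workspace) could be sketched like this. The class and parameter names here are this sketch's own assumptions, and a simple queue stands in for the global workspace:

```python
import threading
import queue

class Codelet(threading.Thread):
    """A 'special purpose, relatively independent, mini-agent'
    running as its own thread, per the quoted description.
    All names here are illustrative, not taken from IDA."""

    def __init__(self, name, workspace, condition, action):
        super().__init__(daemon=True)
        self.name = name
        self.workspace = workspace  # shared queue standing in for the global workspace
        self.condition = condition  # predicate: does this codelet react to the item?
        self.action = action        # the contribution it posts when it fires

    def run(self):
        item = self.workspace.get()  # block until something is broadcast
        if self.condition(item):
            self.workspace.put(self.action(item))  # post a contribution back

# Usage: one codelet that reacts to messages mentioning "email"
ws = queue.Queue()
codelet = Codelet("email-watcher", ws,
                  condition=lambda m: "email" in m,
                  action=lambda m: m + " -> handled")
codelet.start()
ws.put("new email arrived")
codelet.join(timeout=2)
print(ws.get())  # prints: new email arrived -> handled
```

A real GWT implementation would broadcast to many competing codelets at once; this sketch only shows the thread-per-codelet structure the quote mentions.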

But what produces those "human-like behaviours"? The "quarter-million lines of Java code." And where did those quarter-million lines of Java code come from? From the minds of conscious human programmers. Hence the conscious part of the IDA system was and still is in the brains of the human programmers and was never in the computer. Hence IDA is not conscious.

The "Franklin's Intelligent Distribution Agent" sub-section and others that follow it have no relevance to this article and should be deleted. Greensburger (talk) 22:50, 5 August 2012 (UTC)

I don't think that reasoning works. By that same logic, if humans were created by God, then we wouldn't be able to say that humans are conscious. But more to the point, it looks to me like the article adequately portrays what Franklin says, and doesn't assert that Franklin is right, so I don't see the passage as invalid. Looie496 (talk) 23:06, 5 August 2012 (UTC)
The objection I raise is relevance, not accuracy in reporting. Just because a computer can produce responses similar to conscious humans when programmed by conscious humans does not make it relevant to this article, which is about artificial consciousness. Equally non-relevant would be a paragraph on interactive voice response. Today I phoned my credit card company and a computer-generated human-like voice asked me questions and responded appropriately to my vocal responses. The company's IVR system did not contain any artificial consciousness. Years ago I programmed IVR systems, and it was my conscious mind that created the appropriate vocal responses that were later performed by the non-conscious IVR systems. IVR systems and IDA systems are not relevant to this article. Greensburger (talk) 23:39, 5 August 2012 (UTC)
What you are saying may seem obvious to you, but there are many other people to whom it does not seem obvious at all. The root of the difficulty is that different people may understand the meaning of the word "consciousness" in quite different ways, and what is obvious for one meaning may be obviously false for another meaning. Looie496 (talk) 02:23, 6 August 2012 (UTC)

Archiving

There was no consensus on adding an archiving label to the talk page, yet it was done. Please don't do that without achieving consensus on this talk page; this cannot be done without consensus. And 90 days is not an acceptable archiving period anyway, as it too likely halts ongoing discussions. Tkorrovi (talk) 02:32, 6 February 2013 (UTC)

Definition

A subtle attempt was made to change the wording of the definition slightly, altering the original wording by Igor Aleksander in a way that changed the meaning. The two references added couldn't be checked. It was one more of the numerous attempts to change the definition. The definition is essential to this article, and changing it damages the structure and meaning of the rest of the article, so that it can then be disregarded entirely or partly. Therefore if anyone wants to change the definition or provide a different definition, please discuss that on this talk page before making the changes, providing a reference to an article available online so that the exact wording can be seen. Thank you. Tkorrovi (talk) 10:59, 1 October 2013 (UTC)

Working On This Page for a Class

Hello to all those who frequent this talk page. I'm working on editing this page for a class, and have posted below some ideas and the sources for the paragraphs I've written. I am open to criticisms and edits, obviously, and would greatly appreciate them. Thank you!

Jonathan Underland Artificial Consciousness

Summary:

1.) Domenico Parisi, researcher at the Institute of Cognitive Science and Technologies, writes in his article "Mental Robotics" that in order for robots to possess this "artificial consciousness", they must also have what he calls "mental life". According to Parisi, mental life is, "To have internal representations of sensory input in the absence of the input." In order for a robot to think for itself, it must not be simply a reactive robot, whose internal representations of reality are incited by sensory stimulation/inputs from the world around it. Rather, it must be able to organically and mentally generate representations of consciousness, without the necessity of external sensory input.
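A toy sketch may make the distinction in the paragraph above concrete. This is an illustration of the paraphrased idea only, not Parisi's model: a purely reactive robot has a representation only while input is present, while a robot with "mental life" can regenerate stored representations in the absence of input. All class and method names here are invented for this sketch:

```python
import random

class ReactiveRobot:
    """Purely reactive: representation exists only while input is present."""
    def represent(self, sensory_input):
        return sensory_input  # no input, no representation

class MentalRobot:
    """Toy version of 'mental life' as paraphrased above: can regenerate
    stored representations without external input. Names are illustrative."""
    def __init__(self):
        self.memory = []

    def represent(self, sensory_input=None):
        if sensory_input is not None:
            self.memory.append(sensory_input)  # perceive and store
            return sensory_input
        # no external input: generate a representation internally
        return random.choice(self.memory) if self.memory else None

robot = MentalRobot()
robot.represent("red ball")   # representation driven by sensory input
print(robot.represent())      # prints: red ball  (regenerated without input)
```

The point of the contrast is only structural: the second robot's `represent()` can return something with no argument at all, which the first cannot.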

I would like to add a paragraph to the philosophy of artificial consciousness based on Parisi's article. It will certainly be longer than this one paragraph, but the paragraph is meant to provide a sample of what will be added. It introduces new word phrases and ideas that aren't necessarily present in the article, or presented as clearly or comprehensively.

[1]

2.) A Rationale and Vision for Machine Consciousness in Complex Controllers

I would add this sort of information from this article to the Plausibility of Artificial Consciousness section in the Artificial Consciousness article. There is research and experimentation out there, in the form of the information presented in this article, that delves deeper into the need for autonomy in robot behavior. Qualia, at this point, seems to be an achievement too great to grasp. The idea of machine consciousness is discussed through the lens of autonomy in the correcting of system operations by the machine itself. This article also presents the ASys vision, which is the research group's project that aims to put into practice the idea that a machine could operate autonomously as long as it understands itself and the world around it.

[2]

3.) The article Artificial Consciousness: Utopia or Real Possibility by Giorgio Buttazzo from the University of Pavia also contributes to the debate over whether or not artificial (or machine) consciousness is possible to attain. In his article he says that despite our current technology's ability to simulate autonomy, "Working in a fully automated mode, they cannot exhibit creativity, emotions, or free will. A computer, like a washing machine, is a slave operated by its components." That is to say, the machine is not truly autonomous, as it is ruled by its components, whose rules and settings are externally regulated and controlled. Buttazzo's article comes at the issue of artificial consciousness from all angles, including religion and philosophy. Mind/body dualism is discussed - an idea that lacks any sort of representation on the artificial consciousness page.

[3]

Junderland (talk) 01:05, 19 October 2013 (UTC)


I have now added descriptions of articles 1) and 3) proposed by you to the article. Article 2) is, in my opinion, of inferior quality and does not contain important enough information. In fact, even the article by Domenico Parisi is not a proper reference; it is evidently not a peer-reviewed article, and the name of the publisher and the date of publication are both unknown. In spite of that, I like the idea expressed there and consider it important. If you can, please try to do some further work to find a proper source for that article, or perhaps a better article by Domenico Parisi where he explains the same concept. Tkorrovi (talk) 18:20, 11 November 2013 (UTC)

References

The debate over the plausibility of artificial consciousness

The existing section of the Wiki article includes some information on this already, but I have found further research to expound upon Schlagel's underlining of the differences between human intelligence and AI, stemming from the manner in which the two operate and the way in which current mathematical programming constrains machine consciousness from emerging. This is basically a response to functionalists.


An Emergence Theorist’s response to the plausibility of artificial consciousness


Some theorists, such as Richard Schlagel of George Washington University, argue against the plausibility of artificial machine consciousness by challenging the view of the functionalists, who would be inclined to argue that any machine able to perform inferential tasks is doing so through some form of consciousness. Schlagel and his followers, however, challenge this view by underlining the manner in which computers and humans differ qualitatively and experientially in how they process information, and the manner in which the possibility of consciousness is affected by this process (Schlagel, 9).

Human beings interpret the world around them in terms of symbolic meaning. Stimuli are perceived and transformed into complex and often abstract intrinsic fluid conceptual representations, whose purpose is to enable understanding and whose consequences include complex hypothetical, theoretical, future-oriented, and philosophical contemplation, as well as language, common-sense knowledge, and creative thinking (Schlagel, 19). This capacity for metaphorical awareness is where these theorists locate the emergence of thought and formal consciousness. In comparison, the mathematical designation-style processing programmed into machines does not allow computers to perform the same symbolic type of conceptual interpretation (Schlagel, 11). Digital computers are programmed with fixed definitional algorithmic functions, which simply do not amount to the emergence of the same symbolic reflections necessary for true thought or consciousness to arise.

There is also some debate within the field over the importance organic matter plays in the development of consciousness (Schlagel, 24). Functionalists subscribe to the idea that if a machine can independently monitor and perform the necessary tasks of a sentient being, that machine must possess some level of consciousness. Emergence theorists, however, disagree, arguing that human-like consciousness is not attainable by man-made hardware on its own, as its functions are derived in part from the organic material humans are composed of. Many believe that some implicit property of organic material is required to develop consciousness. Some call this characteristic a causal power. John Searle, a known emergence theorist, explains the importance of these causal powers of organic material in everyday functions of consciousness: "It is not because I am the instantiation of a computer program that I am able to understand English and have other forms of intentionality, but, because I am a certain sort of organism with a certain biological (i.e., chemical and physical) structure, and this structure, under certain conditions, is causally capable of producing perception, action, understanding, learning and other intentional phenomena." (Searle, 367).


Schlagel, R. H. (1999). Why not artificial consciousness or thought?. Minds and Machines, 9(1), 3-28.

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and brain sciences, 3(3), 417-457.


Acaliseee (talk) 02:59, 19 October 2013 (UTC)


Oh please be brief. If you can add all that somewhere in a few sentences, that will be fine. But otherwise, this article is only an overview of the field; overly long descriptions of particular views distract the attention and make it difficult to read. One need not copy whole articles here: there are references, and everyone can read them for themselves. Tkorrovi (talk) 07:23, 11 November 2013 (UTC)
Yes, this information is already in the article, so I just added the references. Tkorrovi (talk) 18:38, 11 November 2013 (UTC)

Vague quotations

This article contains the paragraph: "Domenico Parisi, researcher at the Institute of Cognitive Science and Technologies, writes in his article "Mental Robotics" that in order for robots to possess artificial consciousness, they must also have what he calls "mental life". According to Parisi, mental life is "To have internal representations of sensory input in the absence of the input."

Parisi is equating mental life to a video recording, which adds nothing and detracts from the article. Greensburger (talk) 16:30, 20 March 2015 (UTC)

Edits by Transhumanist

The article was reverted to the first edit by Transhumanist on 15 April 2015 (to clarify the description of the edit). This article will be fixed; it is not right that a single user changes most of the article from that single user's point of view. This article is a joint effort, and the efforts of the many people who have edited it over many years will be honored. Tkorrovi (talk) 19:50, 28 April 2015 (UTC)

Revert not contested. The Transhumanist 08:55, 12 May 2015 (UTC)

I have now looked through all the changes made by Transhumanist and found only one paragraph which should be added. I added that information again, but the link to the summit's proceedings was missing in the edit.

Revert not contested. The Transhumanist 08:55, 12 May 2015 (UTC)

I cannot restore the extensive restructuring of the article, as it would cause too much confusion in editing the article, and would conflict with how the many users who edited the article before wanted it. Such a great change to the structure of the whole article should be done gradually, after discussion here on the talk page. Thank you for your understanding. Tkorrovi (talk) 20:47, 28 April 2015 (UTC)

Revert not contested. The Transhumanist 08:55, 12 May 2015 (UTC)

I have now also looked through the edits by 2604:2000:cfc0:1b:b10e:f7cb:a745:be31, all of which were made after the edits by Transhumanist. All of these were additions to the links section. I added Cognitive architecture back to the links section, as I agree it is mentioned in the article and not referenced. But I found no other additions to the links section appropriate. There is no need to add links to everything in consciousness studies, neuroscience and even AI; this is not appropriate for the article, and these are all linked from the major pages, links to which are present. Some additions were also links already linked from the article; these should not be duplicated, and care should be taken to avoid that. Tkorrovi (talk) 21:14, 28 April 2015 (UTC)

Off-topic. Though revert not contested. The Transhumanist 08:55, 12 May 2015 (UTC)

I have now restored everything I found appropriate in Transhumanist's edits and in the subsequent edits by 2604:2000:cfc0:1b:b10e:f7cb:a745:be31. Transhumanist, if you can propose edits to the original version of the article without extensively restructuring the whole article, then please do so. Tkorrovi (talk) 21:19, 28 April 2015 (UTC)

I'd rather follow WP:BOLD and WP:BRD. Under that system, one only needs to discuss reverts one wishes to get reverted. Preapproval for every edit is not required by Wikipedia rules. (Again, see WP:BOLD). The Transhumanist 08:55, 12 May 2015 (UTC)
It is not about preapproval. If the edits made the article worse and there is a disagreement between editors, the disagreement should be solved by discussion; this is by Wikipedia rules. Tkorrovi (talk) 16:20, 12 May 2015 (UTC)
Yes, if there continues to be disagreement with the revert. If the editor who made the change doesn't mind the revert, there is no longer a disagreement, and no discussion is necessary. The Transhumanist 06:03, 13 May 2015 (UTC)

Transhumanist, i have to say that. In your edit, you added this to the article (the beginning of your edit):

"In 2010, at the H+ Summit at Harvard, George Dvorsky made the claim ..."

But the only reference that you provided there is that [3], a web page about George Dvorsky. George Dvorsky's Wikipedia article contains the same text, with only that same reference. This is not a peer-reviewed article, and it is not even clear who the author of that page is, thus it is not valid as a reference. More than that, nowhere on that page is it written that such a claim was made at the H+ Summit at Harvard. Thus this information has no reference at all, nothing which states that such a claim was made at a summit at Harvard, or that George Dvorsky ever made that claim. So this information has no reference and should not be added to the article; yet I added it to the article. But please provide a proper reference for it. Tkorrovi (talk) 01:13, 29 April 2015 (UTC)

Revert not contested. The Transhumanist 08:55, 12 May 2015 (UTC)

Greedy reductionism is relevant to AC in that in AC one should avoid greedy simplification, so as to really model an aspect of consciousness, and not something which is only similar. Conceptual spaces may be important for some AC systems to make conceptual prototypes. Tkorrovi (talk) 05:04, 11 May 2015 (UTC)

Not clear to the reader why it is included. See WP:SEEALSO. The Transhumanist 08:55, 12 May 2015 (UTC)
These links should be clear when reading them. It is written that this is a proposed concept relevant to AC, this is true, and this is why it is included. So it is explained why it is included.Tkorrovi (talk) 16:20, 12 May 2015 (UTC)
But WP:SEEALSO states: "Editors should provide a brief annotation when a link's relevance is not immediately apparent". I think that means a brief explanation as to how the topic in the link is relevant, so the reader can decide whether to follow the link or not. It can be a pain to have to go to an article just to figure out how it is relevant. Annotations can help in topic selection (as a browsing aid). The Transhumanist 06:14, 13 May 2015 (UTC)
The links' relevance is apparent, as it is said that they are all proposed concepts or systems; so if anyone is interested in these, one can look them up, but one also understands that these are not general theories. The problem with adding annotations to them is that it is impossible. Try to add an annotation to greedy reductionism, and you cannot do so without defining the whole concept there, and that is not brief. Tkorrovi (talk) 08:37, 13 May 2015 (UTC)

AC is a field. No system which completely emulates consciousness has yet been created. There are mostly only proposals for systems, and the systems that have been created are also only for research purposes. So AC, the way it actually exists today, is only a field of research, not a system. And this is how those involved in it understand it as well; see Igor Aleksander's definition, for example, which is about a field. No arbitrary opinions at the top of this article, please, and please honor the work of the other people who edited this article and decided to leave it that way for many years. No one, as far as I remember, during all these years ever wanted to redefine it as anything other than a field. Tkorrovi (talk) 05:18, 11 May 2015 (UTC)

Tkorrovi has apparently ruled out hypothetical constructs. (E.g., see superintelligence). Artificial consciousness does not exist yet, making it hypothetical, and the meaning of the term is consciousness that is artificial (in the context of synthetic). The article is not named "Field that studies artificial consciousness", as it is about the hypothetical thing known as artificial consciousness. The question is: "Is the article primarily about the field or is it more about the thing being studied?" If it is the former, then that definition should probably come first. The Transhumanist 08:55, 12 May 2015 (UTC)
I just explained how it makes sense: when it is about a hypothetical construct, then it is about study. One may argue it is right because artificial general intelligence is defined so in its article, but I think it is defined wrongly there, and AI is not defined so in its article. No, this article is not named "Field that studies artificial consciousness", and it starts with a definition. It is about creating certain kinds of systems, and it is said what kinds of systems. And yes, it is primarily a field; see below. Tkorrovi (talk) 16:20, 12 May 2015 (UTC)
Some examples of articles about hypothetical things include:
To name a few. The Transhumanist 11:01, 12 May 2015 (UTC)
These give only the vaguest idea of what artificial consciousness should study. Some science fiction characters may be good to illustrate AC systems, but only that; some are likely not relevant to AC at all. Tkorrovi (talk) 16:20, 12 May 2015 (UTC)
That's not why they were posted here. You missed the point. They are examples of how other Wikipedia articles define hypothetical topics. The Transhumanist 00:51, 14 May 2015 (UTC)

AC is a field related to the philosophy of artificial intelligence. There is no consensus that AC is a subfield of AGI research. It may be related to AGI research in that AC is related to artificial intelligence and AGI is a part of artificial intelligence. AGI should "perform any intellectual task that a human being can"; thus it depends on what one sees as AGI, and not by all views should it include consciousness.

True, it may be possible to construct AC in a system that is not AGI. Like in a simulated or robotic mouse. The Transhumanist 09:46, 12 May 2015 (UTC)

Some define AGI also as research and not a system, so a working system should be called an AGI system. This I prefer, because such a name often implies the field of science or research, as with nanotechnology. This is, though, a matter of opinion as far as AGI is concerned. But as AC is related to philosophy, it is definitely a field; the term AC research is also not correct, because philosophy is not necessarily research. Tkorrovi (talk) 07:28, 11 May 2015 (UTC)

They're nouns with multiple contexts. What is the primary context to which the title of the article refers? If something is being studied, then it follows that the thing being studied is probably the primary topic. Artificial consciousness vs. artificial consciousness research vs philosophy of artificial consciousness. The latter 2 seem more appropriate for subheadings. The Transhumanist 08:55, 12 May 2015 (UTC)
No, when something is being studied, that which is studied is not necessarily the primary topic. When it is more about the study than about the result of the study, then the study is primary. The study has not yet produced a result which could itself be the subject of study. Nanotechnology is a good example, as is AI; there are more results produced in those fields than in AC, but both are primarily fields. Read the articles about them. Tkorrovi (talk) 16:20, 12 May 2015 (UTC)

Please discuss here before making uncertain substantial changes. Transhumanist has made a large number of edits to this article during the last couple of days, and never discussed anything on this page or on my or his talk page, despite the fact that I tried to contact him a week ago, both here and on his talk page. Tkorrovi (talk) 05:30, 11 May 2015 (UTC)

See BRD. Revert anything you like. Most reverts I just let go, rather than spend time on discussing them. It's faster to just let other editors have their way. There is always plenty more to work on, like the fiction section, new sections not yet covered, etc. The Transhumanist 08:55, 12 May 2015 (UTC)
Right, but reverts are only a first solution; if the edit is questionable and there is disagreement, then the solution is to reach agreement by discussion. By the Wikipedia rules. Tkorrovi (talk) 16:20, 12 May 2015 (UTC)
No, there's no disagreement if I agree with your reverts, which I do categorically. ;) If you don't like what I've added, then I don't like it either. So feel free to chop away! The Transhumanist 06:03, 13 May 2015 (UTC)

This "dubious claim" at the beginning of the article, that intelligence in its commonly used sense is too narrow to explain the role of artificial consciousness, was agreed upon by several people; so no matter what one thinks is not right with it, it is not correct to call it "dubious". Transhumanist never discusses changes here on the talk page, and instead tries to discuss them as notes in the article, which does not allow them to be properly discussed. So unless I have misunderstood something, in which case I apologize, based on what has been said here it regrettably starts to look as though Transhumanist is not editing the article in good faith. Please discuss the changes here and not as notes in the article. Tkorrovi (talk) 04:17, 12 May 2015 (UTC)

I've readdressed the issue as a challenge per WP:CHALLENGE, below. The Transhumanist 08:55, 12 May 2015 (UTC)
Yes, but this is about something else, like referencing. This gives no reason to call it a "dubious claim": even if it is true that everything should be better referenced, that by itself doesn't mean that what was said was wrong. Calling something a "dubious claim" without explaining why what was said is itself wrong may be interpreted as an attempt at discrediting. Tkorrovi (talk) 16:20, 12 May 2015 (UTC)
Dubious (per WP:VER) in that it did not have verification. "Unverified" might have been a better word. ;) The Transhumanist 06:03, 13 May 2015 (UTC)

Transhumanist wrote a note next to the topic "Role of brain mapping", which he himself recently added to the article: "!--Need explanation of how brain mapping research pertains to artificial consciousness. What is the role of brain mapping in the effort to create or design of an artificial consciousness?--". This inevitably raises the question: why did he add this topic to the article when he doesn't know what the role of brain mapping in artificial consciousness is? Tkorrovi (talk) 04:29, 12 May 2015 (UTC)

This is very misleading. The original content of the section has been replaced by Tkorrovi. The section no longer explains the role of brain mapping to the development of AC. The note was to remind me to find referenced material on the search for consciousness in the human brain mapping endeavors. Apparently, Tkorrovi thought the note was directed at him. Also, he removed the note, which I had placed in accordance with WP:COMMENT. I'd like to know why he is disrupting other editors' efforts to improve the article. He should leave such comments alone. The Transhumanist 08:55, 12 May 2015 (UTC)
The problem is that the section never explained the role of brain mapping in the development of AC, and it never contained a single reference to anything that says anything about using brain mapping for AC. I tried to improve that by editing, yet because no clear connection was written, there was no way I could solve that problem. And the problem is that when you don't know what the role of brain mapping in AC is, you should not add a topic about it to the article. You should understand what you write in the article; otherwise these are not correct edits.
The original edits did explain it, without references. But based on the amount we read, we'll likely come across some good references eventually. It'll make a nice addition to the article once we do. No hurry. The Transhumanist 06:03, 13 May 2015 (UTC)
Yes, find these references; they have to mention that brain mapping has actually been used in AC research or in AC agents. AC may also be called machine consciousness or synthetic consciousness. Sometimes the name of the field is not even given, but it has to do with research on emulating consciousness in a computer. Just studying consciousness in the brain is not about AC; that is about neuroscience or even medicine. If you find such references, then reinsert the topic. Who are "we", by the way? I thought you were a single editor. Or do you represent some group of people? Tkorrovi (talk) 09:35, 13 May 2015 (UTC)
If you or I happen to come across the references, I'm sure whichever the two of us it is will be the one to insert them. ;) The Transhumanist 00:51, 14 May 2015 (UTC)
The other comment was removed because it was erroneous. It claimed that artificial consciousness is defined in the article as a "field of artificial consciousness research". This was a completely nonsensical statement which had nothing to do with anything written in the article. Tkorrovi (talk) 16:20, 12 May 2015 (UTC)
An academic field is a field of research, as is a field of science. But if you say it is nonsensical, then I believe you. It's good you got rid of it. ;) The Transhumanist 06:03, 13 May 2015 (UTC)

In the artificial intelligence article, AI is defined as a field: "It is an academic field of study which studies how to create computers and computer software that are capable of intelligent behavior." It is also defined as a system, but AI differs from AC in that fully working AI systems have been created, and AI is not itself a philosophy, while AC is also partly a philosophy. Why then does Transhumanist think that it is not allowed to use the term artificial consciousness as the name of the field, while artificial intelligence can be used that way? Is it a lack of understanding, or is it confusing or misleading? And why should we call the field "artificial consciousness study"? That sounds very confusing, and no one has named it that way. It is also not correct to call the entire field artificial consciousness research, as it includes philosophy, which is a study but not necessarily research. Tkorrovi (talk) 05:32, 12 May 2015 (UTC)

What is the primary meaning of the term? Should the primary meaning come first at the beginning of the article? Also, for a sample topic naming scheme, see Artificial intelligence, Artificial intelligence#Research, and Philosophy of artificial intelligence. The Transhumanist 08:55, 12 May 2015 (UTC)
I explained, here, above and below. The primary meaning of AC is a field, and this is said in the beginning of the article.Tkorrovi (talk) 16:20, 12 May 2015 (UTC)
That's your answer, surely, but is it the primary answer? The question remains, but let me restate it: What is the primary meaning of the term as used in the big wide world? What sources did you check? Dictionaries? Britannica? Encyclopedia of Cognitive Science? Others? Just curious. The Transhumanist 06:03, 13 May 2015 (UTC)
Igor Aleksander is considered the most prominent, so his definition is of primary importance. In his article referenced in the article, he writes:

"Here the theory is developed by defining that which would have to be synthesized were consciousness to be found in an engineered artefact. This is given the name "artificial consciousness" to indicate that the theory is objective..."

As you see from that, he named a theory "artificial consciousness". A theory belongs to a field of study, not to any system that would be made by that theory.
Now if you want to know how the name is used in other publications, this may give you an idea [4]. They name the field machine consciousness. This is a web site, but it is an important web site; for example, it includes articles about the work of Haikkonen and others which are also published in journals, but journals are more difficult to obtain. So this may give you an idea.
In some articles the name artificial consciousness or machine consciousness is not mentioned at all. Such articles mostly talk about research on emulating consciousness, or some aspects of it, in a computer, in one way or another. So it is clear that they talk about artificial consciousness, though no name of the field is mentioned. The name of the type of system is not mentioned either. I have never anywhere seen a system or agent said to be artificial consciousness or machine consciousness. The systems or agents mostly just have some names given to them, and it is described what is attempted with them.
Other than that, that AC is a field is also the agreement of quite many users who have edited this article. I think several dozen people have edited the article over the years, though most of them made only a few edits. None has disputed calling AC a field; there have been disagreements about other things, though, which are now solved. Tkorrovi (talk) 09:49, 13 May 2015 (UTC)
The quote is a citation for the naming of a theory, not a field. Also, he said it was objective, that is, aimed at something: an objective. That objective is artificial consciousness, more specifically, engineering an artifact with consciousness. If you were to define that which would have to be synthesized were consciousness to be found in an engineered artifact, you would essentially be designing a consciousness and the artifact it was resident in. And since it doesn't exist yet, that makes it a hypothetical construct. The Transhumanist 00:51, 14 May 2015 (UTC)
Theory, yes; a field is about a theory, or theories. Elsewhere it is called a field. He said the theory is objective; the sentence is about the theory, he didn't say the artefact is objective. You have mixed it up; it doesn't matter what you say, this is not what Igor Aleksander said. The artefact is indeed a hypothetical construct, objective when the theory is objective. But by what he said, the artefact is not named artificial consciousness. Tkorrovi (talk) 01:38, 14 May 2015 (UTC)

It may not make sense to explain anything here, as Transhumanist evidently ignores everything written on the article's talk page and likely never reads anything here, despite having been asked to do so both in the summaries of the edits and in the message written on his talk page. Tkorrovi (talk) 05:32, 12 May 2015 (UTC)

This is misleading. Tkorrovi has not asked me to read this page, and as per WP:BRD, in dealing with Mr. Tkorrovi, I prefer not to discuss after reversion, and simply let his reversions stand. The post on my talk page was in response to a revamping of the article (via WP:BOLD and WP:BRD), which Tkorrovi reverted, and which I respected (by not challenging his revert). Among what Tkorrovi posted was "Such great change of the structure of the whole article should be done gradually, by discussing it before on the article's talk page." I have not since endeavored to change the whole structure, and my edits to the article are nowhere near as prolific as those of Mr. Tkorrovi (check the contributions on his user account). It appears that Tkorrovi may be exhibiting behavior tending toward WP:OWN and WP:Single-purpose account. The Transhumanist 08:55, 12 May 2015 (UTC)
This has nothing to do with WP:OWN, and there is absolutely no reason to claim that. I wrote on your talk page after reverting your edits, a part of which was a completely unreferenced claim about what someone said, with no evidence whatsoever that that person ever said it. Because of that, and because you made a lot of edits in a short time, I had reason to think that you make edits somewhat carelessly. So I suggested discussing them first on the article's talk page, to avoid that.
I didn't claim it, but I suggested it may be a tendency, because I am wondering about it. I posted the link to your contributions in case others wish to take a look for themselves. The Transhumanist 06:03, 13 May 2015 (UTC)
But the fact is that I asked you to discuss any possible disagreements on the talk page, both on your talk page and in the summaries of your edits. Yet you never discussed anything on the talk page, and instead tried to start discussion in the form of notes in the article, which is not a proper way to discuss. Tkorrovi (talk) 16:20, 12 May 2015 (UTC)
Because I chose to agree with you. No discussion needed. Remove from the article whatever you disagree with. I have no problems with that. The Transhumanist 06:03, 13 May 2015 (UTC)

The artificial intelligence article contains a list of references confirming the use of the term artificial intelligence as the name of the field; please see it. And it contains *no* reference defining AI as a system. Thus AI is *primarily* a field, if not only a field, and has been since McCarthy. The systems which are made as a result of the research and development are called AI systems or AI agents. Likewise, this article provides a reference where AC is defined as a field. AC grew out of AI, is similar to AI in its aim, and is an extension of AI or AI philosophy, so it is also a field. This is well established and cannot be ignored. Tkorrovi (talk) 06:03, 12 May 2015 (UTC)

Artificial intelligence starts out with this definition: "Artificial intelligence (AI) is the intelligence exhibited by machines or software." Then it goes on to describe the field that studies it. The Transhumanist 08:55, 12 May 2015 (UTC)
Wrong. Immediately after that it says that "It is an academic field of study". So it says that AI *is* a field; it does not merely describe what field studies AI systems or AI agents. The former and the latter are completely different ways of putting it. If it merely described what fields study AI systems, then it would not be stating that AI *is* a field. So that claim is wrong; it does not correspond at all to what is said in the AI article. Tkorrovi (talk) 16:20, 12 May 2015 (UTC)
Immediately next. Second definition. So the AI article covers artificial intelligence, and the field also called "artificial intelligence". Works for me. The Transhumanist 06:03, 13 May 2015 (UTC)

I also want to say, I don't want to feel as if I were in a court of law every day, just because of editing one article in Wikipedia. This indicates that something is wrong; it feels like some attempt to exhaust me or something. By that I don't claim anything; I'm just talking about what I feel, which I know is subjective. But it should not be that way: the editing of articles should be a collaborative effort with friendly co-operation among the users. Judging by what is written above, the attitude of Transhumanist looks different. It is not a claim, but for me it indicates that something is wrong. Tkorrovi (talk) 16:20, 12 May 2015 (UTC)

I feel the same way. Keep in mind, you summoned me to this discussion, not the other way around. When an edit of mine is reverted, I usually just make new edits on the same page or elsewhere. I've found that reverting reverts can get edit-war-like. The Transhumanist 06:03, 13 May 2015 (UTC)
There is this. I showed tolerance and agreed with unreasonable edits, in the hope of finally reaching an agreement. But now that this is challenged (as if more help were being invited against a supposedly bad editor), it would be considered that I never showed any tolerance, never had any flexibility, never wanted to agree. That is, that I never did anything by myself to solve the problem of disagreement. Which is not true at all, but it's very easy to start from "blank". And then compromises upon compromises would be required from me before I could have any hope of proving my willingness to solve the problem. So this is why I used WP:CHALLENGE; I don't want to use such blunt methods, I want agreement, but there is no use for it. None of my tolerance, flexibility and desire to agree would ever give any benefit.
I have edited this article for more than 10 years; i have done a lot of work on it over those years. I know that nothing in Wikipedia should give anyone any respect; editing Wikipedia should always be a lose-lose situation. And of course merely saying that fact opens me to accusations that i want to own the article, as if it would be interpreted that the only reason i said it was to establish my authority or some such. No, i said it only because maybe some who read this don't know it. That is, so they know that i know a lot about this article; for example, if one has some question about the history of this article, i can answer it, because i have seen it.
And the things like WP:CHALLENGE, how far can they go? I mean, i don't doubt their rightness and necessity and everything. But is there any limit to how far these methods can go? Or is it just always good, the more challenges to the correctness of the articles the better? With no limits, do these things then become like an ouroboros, a snake eating itself? Tkorrovi (talk) 20:57, 12 May 2015 (UTC)
First of all, I don't think you should agree with edits you believe are unreasonable. If I make an edit you don't like, simply revert it (like you have been). If I feel like challenging a particular reversion, I'll discuss it on the talk page per WP:BRD.
Second, don't be so melancholy. The article is improving, and will continue to improve. And the latest round of edits today showed good interplay. That's the kind of collaborative editing that I'm used to. By the way, I thought you might like that link to Philosophy of artificial intelligence. ;)
I think of editing like natural selection, where reverting (by others) is the selection method. Some edits survive, and the population of contributions presented in the article continues to grow.
WP:CHALLENGE appears to be meant to encourage focus upon providing reliable citations. Wikipedia needs all the reliable citations it can get. How far can it go? I've seen extreme cases where every statement lacking a citation was stripped from an article, per WP:CHALLENGE. And then there are even more extreme cases, with no sources at all, that can't even prove notability, that go to WP:AfD.
I see, this was polite.Tkorrovi (talk) 08:21, 13 May 2015 (UTC)
Your passion for the subject does come across a little like territoriality. There's more than a hint of possessiveness. But, ten years on one article. Wow. One would have to be blind not to see your dedication to improving this page. The Transhumanist 06:03, 13 May 2015 (UTC)
Ok, so dedication to the article is also a reason to accuse? I think you may be someone else, someone who, not being a friend of mine, was banned from editing this article; it was found that he did not act in good faith. I think it because of your behavior, but maybe i'm wrong, so be who you like. You are a new user, or maybe a new user after every little while, and new users have many advantages over old ones; that is, old ones have much fewer real rights. Tkorrovi (talk) 08:19, 13 May 2015 (UTC)
No, I changed the subject with "But," and paid you a compliment. Caught you off-guard, eh? After our long deliberation, I have come to the conclusion that you are genuinely concerned with the article and that you wish it to be as good as it can be. However, I do believe you are a bit POV on the definition of the term artificial consciousness, as am I, but that's a small difference of opinion, since whether it is a field of study or the thing being studied, an article on either of them would necessarily include the other, so content-wise nothing is lost. It's a small point of contention, about what order the contexts should be presented in the lead, and that can be resolved through an ongoing search for citations.
I'm not your enemy, and I don't believe we've ever met (assuming you have only this one account). I've never been banned from editing a subject. If you check my contributions, you will see that I'm not a new user and (if you include my previous accounts) I have over 100,000 edits. I work mostly on Wikipedia's navigation systems (especially outlines), and on filling gaps in coverage within the encyclopedia. Articles I've worked on lately include Athlete's foot, Trichophyton, Polymath, Emerging technologies, and various articles related to artificial intelligence.
I've done many revamps, totally restructuring articles. Usually these go uncontested, because there are so many articles on Wikipedia that have been neglected. Here's the one I did on Tinea in January: before and after.
Getting back to you, I find you're willing to discuss things, and that is the Wikipedia way. And in those discussions you show how strongly you believe in a particular view, but you are willing to consider new evidence — and that's good. Our styles differ (I prefer intertwining edits and communication via edit summaries, because it is faster), but you are alright by me. Keep up the good work. The Transhumanist 02:22, 14 May 2015 (UTC)
P.S.: Note that this discussion has grown to almost as long as the article. I think the effort is better applied on edits. The Transhumanist 05:06, 14 May 2015 (UTC)
The definition is vital to this article; this is why i'm so concerned. And it is a good definition, it holds the whole article together. Change it and everything will lose meaning. Artificial consciousness is very sensitive to how things are said, it is very complicated: lose a nuance, and everything loses sense. It is very different from other articles in that respect. Some people change some small things and don't even understand that. Then, as a result, other people come and find inconsistencies which were not there before, and duly fix them. In such a way, by many improvements, the wheel of destruction starts to turn, and finally all is lost. Done by people who honestly only want to improve things. Like a greedy reductionism: we make things simpler, clearer or easier to understand by removing that tiny, subtle essential which no one even notices.
It seems that artificial intelligence is primarily a field, though some, such as Kurzweil, also use it in the meaning of a system. But to call it a "hypothetical artefact", as is done in the artificial general intelligence article, is incorrect. Because, if one thinks about it, a system or agent is just some particular system, a program; it has its own name and is known by that name. It is not called artificial intelligence or artificial consciousness; it is not called intelligence, artificial or not. Even if it is only planned to be developed, and thus hypothetical. Because intelligence is a general term, and so is consciousness, whether artificially made or emulated. Just as we don't say that a person is intelligence or a person is consciousness, we say that a person has intelligence or has consciousness. Therefore calling an AI or AC system a "hypothetical artefact" is entirely incorrect. Writing in the article that artificial consciousness is also a system does not damage the article entirely. But it would be confusing and not correct, and may cause further damage to the article. Furthermore, writing it there is not backed by evidence, because as far as is known no artificial consciousness system is called artificial consciousness, nor is any hypothetical or proposed system. So it cannot be done in Wikipedia. Tkorrovi (talk) 05:33, 14 May 2015 (UTC)

Removed challenged contribution

Per WP:CHALLENGE:

All content must be verifiable. The burden to demonstrate verifiability lies with the editor who adds or restores material, and is satisfied by providing a citation to a reliable source that directly supports the contribution.

Based on this policy (part of WP:VER), I removed the following passage from the article's lead:

Artificial consciousness can be viewed as an extension to artificial intelligence philosophy, assuming that the notion of intelligence in its commonly used sense is too narrow to include all aspects of consciousness.

In order for the passage to be restored to the article, WP:CHALLENGE must be adhered to. The Transhumanist 06:24, 12 May 2015 (UTC)

So what reference do you think is necessary to show that artificial consciousness can be considered an extension to AI philosophy concerning consciousness? I have now restored the paragraph in a changed form, adding a reference to the Russell and Norvig book "Artificial Intelligence: A Modern Approach". The philosophical foundations of AI are explained there, and this includes the questions of consciousness as relevant to AI. Unfortunately the book is not entirely available on the Internet, but its table of contents shows that it includes these questions, and a link to that is added to the reference. AC deals with the same matters of possible implementation, so it is an extension to that philosophy. Artificial consciousness is also mentioned as a part of the philosophy of AI in the article Outline of artificial intelligence; not that this is enough for evidence, it just confirms again that this is how AC can be viewed. It is only said that AC can be viewed as such, not that it is always viewed so, just to give the readers more idea of what the place and importance of AC in relation to other fields is, or potentially is. Tkorrovi (talk) 19:02, 12 May 2015 (UTC)
(Continued in edit summaries)... The Transhumanist 06:18, 13 May 2015 (UTC)

Removed unreasonably made contribution

Based on WP:CHALLENGE.

The topic "Role of brain mapping" was removed from the article because there is not a single reference in that topic which says anywhere that brainmapping is used in AC, not even in theory.

The author who originally inserted that topic also acknowledged that he didn't know whether the topic is about AC at all when he made the contribution, and thus that he had no reason to insert the topic into the article at the time. Because of that it is reasonable to remove the topic from the article, and it can be restored when a reference is added which provides evidence that brain mapping is actually used in AC, and thus that there is a reason to add the topic to the article. And also, based on that evidence, it should be explained how brain mapping has actually been used in AC. It is desirable that there would also be at least one example of it.

The only references added to the topic "Role of brain mapping" were these:

  • "Fact Sheet: BRAIN Initiative". White House Office of the Press Secretary. April 2, 2013. Retrieved April 2, 2013.
  • Kandel, E. R.; Markram, H; Matthews, P. M.; Yuste, R; Koch, C (2013). "Neuroscience thinks big (and collaboratively)". Nature Reviews Neuroscience 14 (9): 659–64. doi:10.1038/nrn3578. PMID 23958663.
  • Markram, H (2013). "Seven challenges for neuroscience". Functional neurology 28 (3): 145–51. PMC 3812747. PMID 24139651.
  • "The Vital Role of Neuroscience in the Human Brain Project" (PDF). The Human Brain Project. July 2014.

None of these contain any mention of artificial consciousness, machine consciousness or synthetic consciousness, and also no mention of using brain mapping in artificial consciousness. I could only access the abstract of the second of these, as i have no access to the journal, but that was also only about the aims of the project, thus not about any actual use of brain mapping in AC. Only medical challenges were specifically mentioned, and even that was not about actual use. Tkorrovi (talk) 17:37, 12 May 2015 (UTC)
Stephen Thaler

I removed the whole section, which was based 100% on primary sources from the same author (not especially notable, as I see in Google). As such they constitute original research and undue weight. Staszek Lem (talk) 22:55, 21 May 2015 (UTC)

Staszek, please take another look for secondary references. I see several. Also, I disagree with your assessment of notability. Google is teeming with references to this guy. Periksson28 (talk) 03:51, 22 May 2015 (UTC)

I would also have to honestly say that most of this article lacks secondary sources and that relevant sections need attention. The solution, my friend, is not to fiendishly amputate content, but to make constructive criticisms. That's what makes for a better Wikipedia. Furthermore, most of the people here are relative unknowns! Periksson28 (talk) 06:17, 22 May 2015 (UTC)

I agree with Staszek Lem that it is not right to write too much about any one proposal or theory. That places too much emphasis on a certain individual or certain ideas. I don't agree, though, with removing the entire section. But the description of any one proposal or theory should not be very long, because this makes the article too long and difficult to read, and the readers don't get a good overview. As i have said, one paragraph about each proposal is reasonable, i think. Because if something is really notable, then there should be a separate article about it.

I removed the last part of the section, which didn't seem to be very informative, so that only one paragraph remained. Which i thought was a good solution, but as it happened, undoing it resulted in a flip-flop. Please try to make it more concise, not more than one paragraph; then it will be consistent with the other descriptions. Tkorrovi (talk) 18:30, 22 May 2015 (UTC)