This is an archive of past discussions about Artificial consciousness. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Back to Prediction
- "'One aspect is the ability to predict the external events in every possible environment when it is possible to predict for capable human. Ability to predict has been considered necessary for AC by several scientist, including Igor Aleksander.'" These sentences make no sense, why are you reverting to them? What does "every possible environment" mean? Will it have to predict events on Antartica, 15 feet inside an obscure cave in the Arctic Ocean, or on Pluto? You are using the term "all" and "every" too liberally. Also, "AC must be theoretically capable of achieving all known objectively observable abilities of consciousness of capable human" doesn't make sense either. What's the point of saying "theoretically capable... all known"? What does "objectively observable" mean? ugen64 23:58, Apr 4, 2004 (UTC)
- First, the environment must be possible.
- > Will it have to predict events on Antarctica, 15 feet inside an obscure cave in the Arctic Ocean, or on Pluto?
- Yes, because every capable human can predict, or can learn to predict, at least some events even in these conditions.
- The difficulty here, I think, is with Tkorrovi's usage of predict. The questioner is asking whether the requirement implies that the AC implementation has omniscience, since Tkorrovi says, though I don't think he means, that the AC implementation must be able to predict future events accurately and have knowledge about matters that are effectively unknowable. What is meant by predict in this context is, I think, just the ability to formulate an abstract model (or extrapolation) from sensory and other inputs (e.g. recollection: matching currently observed patterns with previously observed patterns) which will enable the AC machine to operate coherently with respect to passing events. The coherence of this operation will be one of the criteria that the tester of AC will be expected to adjudge against their own model of consciousness. I still think predict is the wrong word to use here, because of the confusion with omniscience, or at least it should be qualified to indicate that we are talking about a predictive model whose purpose is to arrive at conclusions about the AC machine's internal and external environments that will affect its behaviour. Matt Stan 09:28, 14 Apr 2004 (UTC)
- Incidentally, if I misunderstand you, does that mean I am any less conscious? If not, then one could have an AC implementation that constantly misunderstood its environment, i.e. its predictive model was completely wrong, but this would not affect its credentials in terms of its claim to be artificially conscious. So, whilst a predictive model is probably a prerequisite for AC, I still question the level of sophistication that it would require in order to qualify. I suppose I'm trying to extend the boundary and ask whether sanity is a requirement for AC, i.e. whether an incoherent predictive model would qualify, so the tester will say "It's completely mad, but it's definitely conscious!" Matt Stan 09:28, 14 Apr 2004 (UTC)
- The rest is copied from my Usenet post:
- 1. "artificial system": "artificial" here means "formed in imitation of something natural" (Concise Oxford Dictionary). "Artefact" means "product of human art and workmanship" (Concise Oxford Dictionary), which is otherwise correct, except for the possibility discussed that artificial consciousness may be created by another such system.
- 2. "theoretically": concerning the time it takes for the system to develop, "theoretically" would not be necessary, as "capable of achieving" says by itself that the time it takes to achieve certain abilities is not relevant to whether the system is artificial consciousness. But the other possibility is that the system is potentially capable of achieving these abilities but lacks resources, such as enough computer memory, certain sensors, etc. Then "theoretically" says that the system is the kind of system that is capable of achieving certain abilities, but not necessarily capable of achieving them in a case where there are not enough resources.
- 3. "capable of achieving": means that the system shall achieve a certain ability necessary for it after being in a particular environment long enough, where it does not matter how long that time is.
- 4. "all": the most necessary condition for it to be artificial consciousness. Even with the word "all" included, artificial consciousness is only a subset of consciousness; how big this subset is depends on how much we objectively know about consciousness. In this way artificial consciousness comes closer to consciousness as we come to know more about it objectively. If we exclude the word "all", then for it to be artificial consciousness "all" must still be assumed, but the definition may be misinterpreted as demanding only one ability of consciousness, in which case any simple ability of thinking, such as Boolean logic, would be enough for artificial consciousness, making it even less than weak AI. In comments on the article, omitting "all" was justified by the argument that if a system is never capable of achieving a certain ability of consciousness, it may still be conscious, which is very doubtful if we consider consciousness to be the totality of a person's thoughts and feelings (Concise Oxford Dictionary). If we consider an average person here, then by that definition he cannot be considered to have full consciousness if he can never achieve a certain ability that an average person usually has.
- 5. "known and objectively observable": this was thoroughly discussed in the ai-forum and included only after it was agreed with people there (Ribald et al). This makes artificial consciousness objective, which consciousness is widely known not to be. "Verifiable" is not as comprehensive, as it may demand verifying only one aspect of some ability, maybe even only determining that it is a certain ability, which in some cases may be done even if the ability is otherwise subjective, like feelings.
- 6. "abilities of consciousness": here "ability" is somewhat more determinate as a single phenomenon than "aspect". "Ability" means capacity or (mental) power (Concise Oxford Dictionary). Maybe "aspect" is not wrong, but to me it seems somewhat more natural to objectively observe abilities, not aspects; "aspect" is also more likely to refer to something static, not to a process. Tkorrovi 00:18, 5 Apr 2004 (UTC)
- Sorry for the bad formatting; it was a Usenet post copied from a web page, and this is how it appears. I will try to make it better later. Tkorrovi 00:51, 5 Apr 2004 (UTC)
Strong vs Weak
Please let's not confuse "strong" and "weak" with "good" and "bad" or "more capable" and "less capable". Something can be really conscious - say, a cat. Yet a cat cannot do (much) mathematics. A computer may be not really conscious yet prove a theorem. Please let's find some mutually acceptable form of words to distinguish the reality of consciousness from its capability. Paul Beardsell 02:33, 5 Apr 2004 (UTC)
I acknowledge that some here believe that AC will not be real. Some think that this is highly unlikely (tkorrovi?) or impossible (Matt Stan?). As I understand it (forgive me for paraphrasing your arguments) consciousness is such an unknown quality that each of you thinks that we must model AC on the abilities of humans - because they (we!) are conscious and nothing else is known to be conscious. I respect that view. But I think that if you hold that view you will not want me to distinguish real from capable. For you the term "capable" means a little closer to human. That is why you do not like me using the term "weak" to describe your view. Paul Beardsell 02:50, 5 Apr 2004 (UTC)
- First, concerning capacity, there is indeed a problem that an AC system may have some additional capacity which doesn't bring it closer to consciousness, so strong and weak don't exactly indicate capacity. Second, I don't think that AC is impossible, I just don't agree with the concept of "Strong AC". Third, capable doesn't necessarily mean closer to real consciousness, but capable of achieving the abilities of consciousness does. Tkorrovi 03:03, 5 Apr 2004 (UTC)
- Re your third point: I have now fixed a mistake - I did not mean to say "real" but "human". Paul Beardsell 07:32, 5 Apr 2004 (UTC)
I think you are redefining terms. What I would like to do so that each of us has a chance of understanding the other is to try and use the dictionary meaning of terms if possible. We have run into a difficulty here because those who discuss artificial intelligence use the terms "Strong" and "Weak" not in relation to capability. They chose bad terms: They did not use the dictionary! When they say "Strong" they mean REALLY INTELLIGENT, maybe even more intelligent than humans but not necessarily: What they mean is capable of REAL thought - the thought could be STUPID yet, if it is REAL, the believers in Strong AI will consider the issue proven. When they say "Weak" they mean not REAL: They are not saying that the AI device can not be impressive: They are just saying the "thought" is not REAL.
When I started using Strong and Weak in the AC article I used capitals. Perhaps we should return to that, or use quotation marks, to show that the words "strong" and "weak" do not have their usual meanings and connotations.
I think I am right in characterising your belief as being the "Weak" one. You also think that AC might be highly capable. This is an entirely consistent POV.
Paul Beardsell 03:17, 5 Apr 2004 (UTC)
- I agree that the distinction between Strong and Weak AC, analogous to the distinction made regarding AI, is a useful starting point. I think the difficulties about terminology arise because artificial consciousness is not elsewhere defined, so we have no authority. We can look up artificial and consciousness, but put them together and we have ourselves to define the synergy that the use of these words together implies. I think an oxymoron arises in relation to Strong AC, because we are saying that Strong AC is both real, and yet artificial, which is a linguistic contradiction. That difficulty does not arise with Weak AC. But Weak AC is actually more difficult to define than Strong - we know what the latter is because we experience it ourselves and don't really need to be told about it. Weak AC, which I have also dubbed simulated consciousness, needs to be defined to reflect that its possessor will not have self-awareness or thought, because it won't have a self to be aware of, nor is there a computer model for thinking that it can emulate. Therefore we are stuck with those aspects of consciousness that can be simulated, and these are only its external manifestations. I am not so pessimistic as to think that AC is impossible; rather, if we can define the goals that define AC per se (leaving out AI because that is something different) then I'd be keen to contribute to a project to actually build a candidate artifact. I think that if we can devise the necessary bootstraps, and in particular take note of what cognitive psychologists understand as personality, as well as providing suitable heuristic routines, then the Weak AC machine, in its later incarnations, might find that it needed to think in order to meet its design goals. That's the serendipity point at which the implementation could cross over from being Weak to Strong; that's the spooky point at which Pinocchio comes to life; that's the point where AC implementations communing with each other risk incurring the suspicions of humans that they are about to take over the world. That's when legislators start to get involved and try to ban artificial stem cell research! Matt Stan 08:16, 5 Apr 2004 (UTC)
What I'd like is a flat screen and small camera mounted on a dynamic angle-poise contraption, with muscles to allow the screen to orient itself so that its viewing area was always directly facing me. If I move to the left, it will turn slightly; if I stand up it will raise itself. In a limited way it will be conscious of me, dedicating itself to serving my need to be able to see my screen regardless of whereabouts in the room I am. Would that qualify as an AC implementation? Matt Stan 08:16, 5 Apr 2004 (UTC)
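Such a contraption amounts to a simple proportional servo loop: the camera reports how far off-centre the viewer appears, and the mount turns the screen a fraction of that error each cycle until it settles facing them. A minimal sketch, in which camera_offset stands in for a real camera plus face detector, and the gain, angles and function names are all invented for illustration:

```python
def camera_offset(viewer_angle, pan_angle):
    # Stand-in for a real camera + face detector: how far off-centre
    # the viewer appears, given where the screen currently points.
    return viewer_angle - pan_angle

def track(viewer_angle, pan_angle=0.0, gain=0.5, steps=8):
    # Proportional servo loop: each cycle, turn the screen part of
    # the way toward wherever the viewer appears in the camera image.
    for _ in range(steps):
        error = camera_offset(viewer_angle, pan_angle)
        pan_angle += gain * error
        print(f"viewer at {viewer_angle:+.0f} deg, screen at {pan_angle:+.1f} deg")
    return pan_angle

track(viewer_angle=30.0)  # the viewer moves 30 degrees to the left; the screen follows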
- It would satisfy your criterion of attentiveness but I think it would much more likely be conscious if it looked at itself in a mirror all the time. Self-awareness is consciousness. Not awareness or attentiveness. Paul Beardsell 08:22, 5 Apr 2004 (UTC)
- As I am sure you know, such systems exist. They "follow" cars to focus on the license plates for the London congestion charge. They track individuals in sophisticated burglar alarm systems. If only we could get some self-reference in, so as to give the impression of some self-awareness. If the AC thermostat had to wander across the room to find the heater switch. Or the robot which finds an AC (alternating current!) socket and plugs itself in when its batteries get low. On Slashdot there was a story recently about an optional extra on BMW cars in Germany: They will supply additional small steering inputs to keep the car between the dotted lines. Apparently the car feels as though it is being driven in a concave lane: It takes a slightly more assertive driver to change lanes! Paul Beardsell 16:28, 5 Apr 2004 (UTC)
- My problem with self-awareness in respect of AC is that it implies that the implementation necessarily must have a self to be aware of. Now I am claiming that self is itself a psychological construct, and even psychologists claim that self is an illusion, i.e. it doesn't really exist. Therefore I do not understand the requirement that an AC implementation must have self-awareness, and find it difficult to determine how one could know whether an AC implementation had self-awareness or not. Now it would be different if we stipulated that the AC implementation needs a component that manifests itself as an ego, i.e. it had the capability of using the first person (I) when expressing itself, but I'm not sure we've got as far as defining clearly how the AC implementation needs to manifest itself to observers. Matt Stan 09:55, 14 Apr 2004 (UTC)
- "Self", the experience of the "self", is subjective experience, the one Thomas Nagel talks about in his "what is it like to be a bat". This is why consciousness is considered to be subjective. It may simply mean that different people experience the same thing differently. Not necessarily a "magic spark" or violation of Church-Turing thesis, just the problem that we cannot completely model such experience. We cannot objectively observe such human experience, as one person cannot confirm evidence given by another person. (If we don't talk about copying the whole brain, what is likely unfeasible, and also we cannot then do that separately from its environment, what even may lead us to impossibility that the universe cannot contain a copy of itself.) This is why the Genuine AC is questionnable, there are aspects of human consciousness, what are subjective and therefore we cannot objectively model them by any processes, no matter how clever we are or how much we know. Tkorrovi 12:25, 14 Apr 2004 (UTC)
"Real" and "artificial" are not contradictions. I am renting a Ford Falcon which is both. "Real" and "simulated" are contradictory. You wrongly have implied more than once that "simulated" and "artificial" are synonymous. Paul Beardsell 08:20, 5 Apr 2004 (UTC)
I am claiming no "synergy" in placing the words "artificial" and "consciousness" together. I want to use the term artificial consciousness in the same way I might one day have to use natural consciousness to distinguish it from the artificial variety and as a separate subset of consciousness. You must not be allowed to impose some other meaning on the term than what it literally does now mean. I want a term which concisely describes a man-made (or at least not naturally occurring) object which has self-awareness. "Artificial" "consciousness" or "artificial consciousness". What other term will you allow me if you hijack that one. Words have meanings so use them: Come up with your own term. "Simulated consciousness" is what you need. Not personally. Paul Beardsell 08:29, 5 Apr 2004 (UTC)
I could (attempt to) build a virtual AC entity which will be real consciousness. You can build a real object which will simulate consciousness! Paul Beardsell 08:45, 5 Apr 2004 (UTC)
Joking aside, the question of real vs not real I deeply believe will eventually be seen to be a red herring. That I am a cyborg will not be discovered in your lifetime. (Now there is a statement that is undeniable!) Paul Beardsell 08:45, 5 Apr 2004 (UTC)
- Unless you are rumbled early. Matt Stan 20:12, 5 Apr 2004 (UTC)
- I'm programmed to think I will be the last to be judged obsolete. Paul Beardsell 04:16, 6 Apr 2004 (UTC)
I don't see a problem here. Strong AC means AC that is equivalent to real consciousness, though I am not the only one who considers this impossible. Weak AC means AC as far from real consciousness as AC can be defined (requiring only one or a few abilities, whatever these abilities are), though by loosening the requirements so much it is pretty much equivalent to Weak AI and is not even related enough to consciousness to be called AC. And Objective Stronger AC is closer to consciousness than Weak AC in that it requires the ability to achieve all certain objective abilities (this is why it differs from Weak AC), but is not as close to consciousness as Strong AC. So it is between "Weak AC" and "Strong AC" in how close to real consciousness AC should be. "Simulated" may mean "artificial"; one definition of "simulate" is even "produce a computer model of a process" (Concise Oxford Dictionary), which is pretty much what AC is all about - a computer model of human consciousness. "You must not be allowed to impose some other meaning on the term than what it literally does now mean" -- what??? You alone decide in what meaning a term must be used? Concerning the example of Matt Stan, I think it may only be considered AC when it has awareness of the processes, for example it learns how you normally move and may turn in the right direction even before you actually move, to do it quickly enough (say, when it sees that you are frustrated, it would suppose that you would go to the refrigerator and take something to drink). Tkorrovi 11:04, 5 Apr 2004 (UTC)
I thought you didn't see a problem?
- Weak AC does not mean "as far away from real consciousness as possible" or that it has restricted capabilities. It just means "not real". Which is why your "Objective Stronger AC" is a Weak AC and why, in my opinion, it need not be distinguished from Weak AC. Weak AC can be highly capable. Strong AC can be pathetically incapable.
- "Objective Stronger AC" is a term which I think you have coined - I have not seen it elsewhere.
- The meaning that "artificial consciousness" already has is the one I am using and the one that corresponds with the dictionary definitions.
- AC might be just a computer model of consciousness when it is Weak but not when it is Strong. Even then, should AC be biologically based (when we develop the techniques to do this) then it would not be a computer model.
- I can be conscious in an environment where anticipation is impossible so presumably another machine, an artificial one, could be too. I know you insist on the prediction thing but I just don't see it: We'll have to agree to differ.
- I've been to the fridge but it was empty. I had not anticipated that.
Paul Beardsell 11:20, 5 Apr 2004 (UTC)
- Weak AC also means no Strong AI, while Objective Stronger AC is intended to be Strong AI. BTW, the terms "Strong AC" and "Weak AC" were also most likely coined by you; I don't know of anybody else who uses them. "Objective Stronger AC" is also a preliminary term I used after you coined these terms; it is AC between "Weak" and "Strong". AC, when implemented on a computer, is a computer model even if it is "Strong". "Anticipation" can be used in the full sense of "prediction", but the problem is that it may be understood in the narrower sense of "deal with before the proper time", yet not every prediction is followed by action. Also, in AC scientists have started to use the term "predict"; I don't know of "anticipate" being used in any paper. Tkorrovi 11:56, 5 Apr 2004 (UTC)
- We have "Objective Stronger AC" defined by you as not being real consciousness. And yet you now assert it is "Strong". I think you mean "capable" not "real". Please clarify. If we take all occurrences of "Strong" and replace with "real" and replace "Weak" with "not real" (or possibly "simulated") then your contradiction is obvious.
- The terms "Strong" and "Weak" are used widely in discussions of AI to have precisely the "real" and "can never be real" meanings I use here. I am not sure if I first used those terms in respect of AC or not but I have read a lot on the subject so I might have picked them up from a book. The usage is at least consistent with AI - a field which many consider related to AC, you included. I do not mind what terms we use as long as we do not invent new terminology for the sake of it or usurp the perfectly good meanings of other words mangling them in the process. I have already invited the use of other terms but you have not suggested any yet.
- It became obvious early on that there is one obvious question on which those interested in AC are divided: Can AC ever be real consciousness? By using the "Strong" and "Weak" terms to judge capability rather than realness we risk losing sight of this, the foremost question. I suggest capability and realness are not necessarily linked, and gave the cat vs the theorem-proving computer as an example to back up this view.
Paul Beardsell 13:12, 5 Apr 2004 (UTC)
Searching for '"artificial consciousness" anticipate' at Google:
- http://www.cs.bham.ac.uk/research/cogaff/moc/chrisley-moc-report.pdf
- http://www.heise.de/tp/english/special/mud/6140/1.html
- http://www.warwick.ac.uk/cep/first_annual_conference.html Aleksander!
Just three examples on the first returned page. Paul Beardsell 13:21, 5 Apr 2004 (UTC)
- I asserted it is intended to be "Strong AI", not "Strong AC". What distinguishes it is that it has a complete criterion - being capable of achieving all certain abilities - so it is "not just anything", contrary to Weak AI, which may as well be a calculator. Dividing AC only into "real" or "not real" is not appropriate, because there are different varieties of AC which differ in how close they are to real consciousness. Then we should also coin a new name for what was "Weak AC", such as "Minimal AC". There is exactly no contradiction; it all just depends on how we classify the different forms of AC. I see one essential difference between AC and AI, and this is that many question whether we can ever fully understand such a subjective term as consciousness in its widest meaning (the totality of all mental abilities of a capable human), and therefore whether we can ever build any truly "Strong" AC, while intelligence is only a subset of consciousness, usually considered objective, and so some (though not many) think that Strong AI might be possible. Honestly, I don't know anybody other than you who thinks that "Strong AC", as you define it, is possible. So "Strong" means a somewhat different thing in AI than in AC; in AI it is objective, in spite of being a very complicated problem. So it is by far not obvious that we can transfer the terms "Weak" and "Strong" from AI to AC; they may not remain consistent with the terms used in AI.
- The question may also be asked as "how close to real consciousness may AC become", which is a question as obvious as the question of whether it would ever become real consciousness or not.
- I agree that capability and realness are not necessarily linked; the question was not about that. And AC by itself cannot be "real" or "not real"; the term AC is not equivalent to consciousness. If AC is really built and can be considered AC, then it is real. The question may only be whether AC is equivalent to human consciousness, or how close it is or can be to human consciousness.
- Also one thought about whether AC is a computer model. We tend to think of a computer as a digital serial computer, but there were, are and will be other kinds of computers. One example is the obsolete analog computer, which differs though in that it is simultaneous, not serial. And most recently there is the quantum computer, which is claimed to be theoretically able to model any physical process (asserted by Deutsch in 1985, based on the work of Feynman). It is also not excluded that some future computers would be built like a brain. So even if AC is "Strong", if that is ever possible, it may nevertheless be a computer model.
- The word "anticipate" was used once in the first and third paper, but not in connection with AC in the second paper. I saw "predict" much often. Of course "anticipate" can be used in AC, but in proper meaning, if it has the same meaning as "predict", then "predict" would be better. As "predict" is used in papers, then it should remain. "Predict" is also more unambiguous term because "take action before proper time" has a very different meaning than for example "foresee", but these both are the meanings of "anticipate".
Tkorrovi 14:11, 5 Apr 2004 (UTC)
- In what sense is the word predict being used? One can make a prediction that may be right or wrong. It is still a prediction. Does the AC machine have to make correct predictions, or will incorrect predictions do equally well? Matt Stan 16:37, 10 Apr 2004 (UTC)
- It just means that the system must be able to model the external processes, based on the information it gets from its "senses" rather than on pre-programmed models, and to "run" a model of an external process to see how it develops. For example, it has no idea who you are, but after it gets to know you it builds a model of you, knowing what your habits, desires etc. are. This helps it to predict your behaviour in certain cases with certain accuracy. So prediction is part of being aware of the processes in the environment, which means modelling them and "running" these models. This is different from the static awareness (perception) which Chalmers talks about and which enables one to suggest that a thermostat is conscious. In fact the thermostat is only the most primitive regulator; in more complicated regulators prediction is exactly the biggest obstacle. For example, a regulator was built for an oil refinery which used a neural network to predict, i.e. to recognize the pattern of how a parameter changes and then to take action or not based on how the parameter was expected to change. So we may consider that to be some awareness of the process, but it is still static, because the neural network cannot model the process; it can only recognize a static instance of the diagram. Tkorrovi 18:04, 10 Apr 2004 (UTC)
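To make the distinction concrete, here is a minimal sketch of that kind of learned, non-pre-programmed prediction, assuming events arrive as discrete labels (the event names, class name and frequency-counting scheme are all invented for illustration): the system builds a transition model of the observed process and then "runs" it forward.

```python
from collections import Counter, defaultdict

class ProcessModel:
    """Builds a model of an external process from observed events,
    then 'runs' the model forward to predict what may come next."""

    def __init__(self):
        self.transitions = defaultdict(Counter)  # event -> counts of successor events
        self.last = None

    def observe(self, event):
        # The model comes from "sensed" input, not pre-programmed rules.
        if self.last is not None:
            self.transitions[self.last][event] += 1
        self.last = event

    def predict(self, event, steps=1):
        # Run the learned model forward: repeatedly take the most
        # frequently observed successor of the current event.
        path = []
        for _ in range(steps):
            followers = self.transitions.get(event)
            if not followers:
                break
            event = followers.most_common(1)[0][0]
            path.append(event)
        return path

model = ProcessModel()
for event in ["frustrated", "fridge", "drink", "desk",
              "frustrated", "fridge", "drink", "desk"]:
    model.observe(event)
print(model.predict("frustrated", steps=2))  # ['fridge', 'drink']
```

A classifier that only recognizes the current pattern has, by contrast, no such model to run forward - which is the "static awareness" being contrasted here.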
- Maybe I should explain more precisely why prediction is important for a regulator. Every object has a certain inertia; this is not always the inertia of a mass, but may simply be, for example, that it takes a certain time to increase or decrease the room temperature, change the course of a surface-to-air missile, etc. So if we want to regulate more precisely, we must be able to know beforehand, based on the circumstances in the environment, what the result of a certain change might be, and then take only such action as brings us closer to the desired result faster and more precisely (in order not to overregulate), simply in order to be able to do it fast enough. Tkorrovi 18:30, 10 Apr 2004 (UTC)
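A toy sketch of that overregulation point, assuming a first-order thermal model with invented constants: the heater changes the temperature's trend rather than the temperature itself (the inertia), so a regulator that reacts only to the present reading overshoots, while one that regulates against a short prediction backs off in time.

```python
def peak_temperature(predictive, target=21.0, steps=200):
    # Toy room with thermal inertia: the heater drives the rate of
    # change, not the temperature directly. All constants are invented.
    temp, rate, peak = 15.0, 0.0, 15.0
    for _ in range(steps):
        # The predictive regulator acts on where the temperature is
        # heading; the naive one acts only on where it is now.
        reading = temp + 10 * rate if predictive else temp
        heater = 1.0 if reading < target else 0.0
        rate += 0.05 * (heater - 0.2) - 0.1 * rate  # lagging response + losses
        temp += rate
        peak = max(peak, temp)
    return peak

print("naive regulator peaks at:     ", round(peak_temperature(False), 1))
print("predictive regulator peaks at:", round(peak_temperature(True), 1))
```

The naive run overshoots the target by a couple of degrees before its momentum decays; the predictive run switches off early and settles near the target, which is the point about acting before the lag makes overshoot unavoidable.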
Well:
- Tkorrovi: I don't know of "anticipate" being used in any paper. Any paper.
- I'll do an edit changing "strong" to "real" and "weak" to "not real". "Stronger" would become "realer" which is not English so "Objective Stronger AC" will be transformed into "Objective More Real AC". This is a little awkward so you may wish to suggest a different phrase.
- Also your continued use of "weak" to mean not capable (as in the calculator) will be more obvious. I repeat: Believers in weak/not-real AC will not believe that an artificial entity which has "all the capabilities of the average human" is truly/REALLY conscious. It will be a simulation: You have said so yourself (and it is a position which I respect but which I believe is wrong). That is how I proposed the term Weak should be used and its replacement by not-real will make that obvious.
- I agree that a computer model is the most likely way for Weak / Not Real AC but if the AC is Real then it can not be said to be a "model". Model means simulation, model means not real. Strong/Real AC is also most likely to be realised by a digital computer but the point is that it would not be a model.
- Quantum computing is the way Penrose tries to escape the Church-Turing thesis but unfortunately it is not obvious that the thesis does not apply to quantum computing.
- I agree the real / not-real question is not the only question. But it is the issue taking the most space here.
Paul Beardsell 14:54, 5 Apr 2004 (UTC)
- This is completely wrong. If you say AC is "not real" you effectively say that a computer model is not real. But a computer model is a program and it is real. It would be the same as saying that the Linux operating system is not "real".
- This is comprehensively refuted below. Paul Beardsell 15:39, 5 Apr 2004 (UTC)
- I did not use "weak" to refer capability, but how close it is to real consciousness/intelligence. Obviously calculator is very far from human intelligence.
- I think you did. Paul Beardsell 15:39, 5 Apr 2004 (UTC)
- Tkorrovi: Weak AC means AC as far from real consciousness as AC can be defined (requiring only one or a few abilities, whatever these abilities are)
- AC cannot be real or not real; if it is really implemented then it is always real. And it is a model if it is implemented on a computer, even if it is equivalent to real consciousness (which I think will never happen).
- Is your implementation of AC real consciousness? I ask you the very first question I asked you several weeks ago: Can AC be real consciousness? Yes or no.
- Quantum computing was not invented by Penrose, and I also think that the Church-Turing thesis applies to quantum computing.
- I never said he did. Paul Beardsell 15:39, 5 Apr 2004 (UTC)
- It is just your way of putting things, which you insist on and which I, and maybe others, don't agree with.
- You are not happy with Strong and Weak. We all agree that by these terms we meant Real and Not-real. You have just said so again in this posting! Paul Beardsell 15:39, 5 Apr 2004 (UTC)
Tkorrovi 15:14, 5 Apr 2004 (UTC)
Example 1:
- A book containing a story is REAL.
- The story can be a REAL story.
- The story does not necessarily describe real events.
- But the story is not necessarily REAL.
Example 2:
- A computer program is a REAL program.
- The computer game Sim World is a REAL program.
- But the Sim World is a NOT-REAL world.
Example 3:
- Tkorrovi's AC program is a REAL program.
- Tkorrovi himself says that his program is a simulation of consciousness. He says it is NOT-REAL consciousness.
Paul Beardsell 15:39, 5 Apr 2004 (UTC)
Two examples:
- Tkorrovi: Then we should also coin a new name for what was "Weak AC", such as "Minimal AC".
- Tkorrovi: Weak AC means AC as far from real consciousness as AC can be defined (requiring only one or a few abilities, whatever these abilities are)
These are examples of using "Weak AC" in such a way that "weak" means "lacking capability". "Weak" does not mean "minimal" in this context. The term "Weak AC", as was explained when I introduced it and as is defined in the article itself, is the term for the school of thought, or the belief, or the argument that AC cannot be real consciousness. It can be highly capable, but never real.
The continued use of the term not as defined is the reason I have changed the term for one where the dictionary definition is close to the meaning intended.
Paul Beardsell 16:08, 5 Apr 2004 (UTC)
I meant "what formerly was called Weak AC". I disagree with naming AC "not real" or "more real", and I explained why above. I think this is not a different view, but nonsense. Tkorrovi 16:36, 5 Apr 2004 (UTC)
- OK, but you are not succeeding in writing what you mean and I am finding it difficult too - it is difficult because we have to keep on reminding ourselves what we mean by these terms. I think this is because the words we were using for AC which is real C ("strong") and AC which is not real C ("weak") have other dictionary meanings. Now there can be no mistake but you are not happy. What can we do to make you happy? Paul Beardsell 17:21, 5 Apr 2004 (UTC)
The recent terminological change (viz. "Strong" -> "Real" and "Weak" -> "Not-real") certainly does demonstrate that some of the things suggested and discussed are indeed nonsense. There we agree. What we disagree about is what is the nonsense! It is precisely the question of whether AC can be real consciousness that is at issue here. Some say yes, others say no. Those who say yes believe in "Real AC" and those who say no believe in "Not-real AC". Of course you are welcome to suggest alternative terms. I have asked you for your suggestion but none is forthcoming. Paul Beardsell 17:03, 5 Apr 2004 (UTC)
I have thought of a good example as a comparison. One of the ways that books are divided into two groups is into Fiction and Non-fiction. There are Fiction Books and Non-fiction Books. There is Real AC and Not-real AC. Tkorrovi does not like the term "Not-real AC" because, if I understand correctly, he thinks that it calls the existence of AC into question. No. There are Fiction Books. These are real books which do not tell true stories. There is Not-real AC. This is AC which is not true Consciousness. Hopefully this helps. Paul Beardsell 17:12, 5 Apr 2004 (UTC)
You are not supposed to call other views nonsense. I, and maybe some others, don't agree with your view, which you called "Strong AC" and have now changed to "Real AC", and which in essence argues that AC must be equivalent to real consciousness. Many find this doubtful; they consider consciousness so subjective that it cannot even be defined. But I never called it nonsense, I just said that I don't agree with it. So should it be understood that you have now said you deliberately made nonsense edits? Tkorrovi 17:17, 5 Apr 2004 (UTC)
You have just said "But I never called it nonsense, just said that I don't agree with that." But just a few minutes ago:
- I meant "what formerly was called Weak AC". I disagree with naming AC "not real" or "more real", and I explained why above. I think this is not a different view, but nonsense. Tkorrovi 16:36, 5 Apr 2004 (UTC)
Paul Beardsell 17:26, 5 Apr 2004 (UTC)
Don't you understand that I said that about names, not about your "Strong AC" view? Tkorrovi 17:31, 5 Apr 2004 (UTC)
And it is my view about names that you have called "nonsense". Not that I mind, but you are taking offense at me subsequently calling some views expressed nonsense. I did not even identify which ones, so you took offense prematurely. But the point is that here you are telling me off for something you had just done. I should no longer be surprised. Paul Beardsell 17:46, 5 Apr 2004 (UTC)
- We were talking about calling views on AC nonsense, which I never did, but you did. The only thing I called nonsense was what you call these views. And that way of naming these views I don't consider a view. Tkorrovi 18:04, 5 Apr 2004 (UTC)
- I think I would rather have a view which exists than one that doesn't. Paul Beardsell 18:23, 5 Apr 2004 (UTC)