
Artificial Intelligence...

  • 29-10-2001 4:06pm
    #1
    Closed Accounts Posts: 4,731 ✭✭✭


    Is true artificial intelligence possible?



Comments

  • Registered Users Posts: 3,126 ✭✭✭][cEMAN**


    I don't see why it can't be possible in the future. I mean, we learn in the same way when we are babies - trial and error.

    It's just that our brains are very complex and it would be difficult to replicate, but I think it WILL be possible.

    I mean, you could probably even apply it simply through IF/THEN statements. Think about it. How many things that happen in life come down to IF situations?

    IF door is shut THEN open it and walk through.
    IF coffee cup is too hot THEN put it down
    IF it is too dark THEN open your eyes
    IF you are drunk THEN don't drive

    you get the idea.
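    Just to illustrate (a toy sketch in Python - the situation keys and the temperature threshold are invented for the example):

```python
# A toy rule-based "agent": each rule is a (condition, action) pair.
# Conditions test a dictionary describing the current situation.
rules = [
    (lambda s: s.get("door") == "shut",      "open door and walk through"),
    (lambda s: s.get("coffee_temp", 0) > 60, "put the cup down"),
    (lambda s: s.get("light") == "dark",     "open your eyes"),
    (lambda s: s.get("drunk"),               "don't drive"),
]

def decide(situation):
    """Return the actions of every rule whose condition fires."""
    return [action for condition, action in rules if condition(situation)]

print(decide({"door": "shut", "drunk": True}))  # first and last rules fire
```

    Each rule is independent, so adding behaviour is just appending to the list - which is also why this approach alone scales so badly.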


  • Registered Users Posts: 68,317 ✭✭✭✭seamus


    Not sure. There are programming languages written specifically for this purpose, 'cos even though it looks like everything in life can be given an if...then...else statement, there are so many possible outcomes to every situation, and many decisions are based on choosing the best possible outcome. So before an IF statement could be carried out, hundreds, maybe thousands of variables would have to be weighed. You would have trillions of lines of code!!!

    I think it will happen, but it is a long way off. Certainly not in our lifetime. If you look at humans, we are essentially a very complex computer system. Computers have binary, 1 and 0; humans have the genetic code, A, T, G or C. When we start creating processors that use more than 2 bases to complete calculations, then we can start walking towards artificial intelligence. :)
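    The "weighing variables" point can be sketched as scoring each candidate action with a weighted sum and picking the best - a toy example; the actions, effects and weights are all invented:

```python
# Decision by weighing variables: each candidate action gets a score from
# a weighted sum of its predicted effects; the best-scoring action wins.

def choose(actions, weights):
    def score(action):
        # Sum weight * value over every effect the action would have.
        return sum(weights.get(var, 0.0) * value
                   for var, value in action["effects"].items())
    return max(actions, key=score)

actions = [
    {"name": "walk", "effects": {"time_cost": -30, "money_cost": 0}},
    {"name": "taxi", "effects": {"time_cost": -10, "money_cost": -15}},
]
weights = {"time_cost": 1.0, "money_cost": 0.5}
best = choose(actions, weights)   # taxi scores -17.5, walk scores -30.0
```

    The hard part, of course, is exactly what the post says: where the hundreds or thousands of weights come from in the first place.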


  • Closed Accounts Posts: 875 ✭✭✭EvilGeorge


    True artificial intelligence, at the level of understanding and competence we can conceive of at the moment, is a long way off.

    True, we learn by trial and error, as can machines, but even the most sophisticated ones at the moment can be caught out by simple things.

    E.g. the ol' play on words like 'their' and 'there' etc...

    I believe it's possible but a long way off.


  • Closed Accounts Posts: 20 Belgario


    I hate to, but I have to agree with Evil on this one.

    In order for an A.I. to be considered self-aware it will be necessary for it to be able to grasp semantics and aesthetics, and to be intuitive.

    At present expert systems rely on pre-programmed knowledge. They can only respond to questions they have been prepared for, or can answer based on a combination of the facts they have been supplied with. They cannot make hunches or intuitive leaps.

    Note that present neural nets learn by examining the environment/world and logging all information as "relevant". We do not! We can not! Think about it! If we had to note every moment we breathe, every move we make, every second of every day, we would go crazy or simply cease to be.
    That's why all present neural nets suffer cascade failures within hours or days of being activated and therefore have to be erased and restarted. In order for the neural net to survive it must have some form of information filtering system. Even then it would only have the faculties of a single insect.

    We are a long way off from being able to create A.I., at least for the time being. Even then it couldn't be viewed as a sentient being.

    Ho hum, I am off to rule the world again.

    Belgario


  • Business & Finance Moderators, Entertainment Moderators Posts: 32,387 Mod ✭✭✭✭DeVore


    Belgario, I think your information on Neural Nets is very suspect.

    A. I've never heard of a cascade failure outside of Cmdr Data's brain.

    B. Simple neural nets learn by being shown a situation and a correct response. They learn all such situations and extrapolate out answers to cover situations which are not covered by the examples but look "kinda like" one they DO know the answer to.
    It's a powerful approach and well advanced now.
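    That "looks kinda like one it DOES know" behaviour can be mimicked in miniature with a nearest-neighbour rule (a toy sketch, not how trained nets actually work internally - the example data is invented):

```python
import math

# Trained examples: (situation features, correct response).
examples = [((0.0, 0.0), "stay"), ((1.0, 1.0), "go")]

def respond(situation):
    # Answer an unseen situation with the response of the most
    # similar known example (smallest Euclidean distance).
    _, answer = min(examples, key=lambda ex: math.dist(ex[0], situation))
    return answer
```

    An unseen input like (0.9, 0.8) gets the "go" answer because it looks "kinda like" the (1.0, 1.0) example.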

    Here's a telling argument for why it SHOULD be possible.

    We (humans) have successfully replaced a single neuron with a wire filament and the recipient (a rat I believe) functioned perfectly normally; in fact, as far as I am aware, we have interfaced quite a number of neurons in this way.

    If you replaced all the neurons one by one, when would the (imaginary) being cease to be intelligent?

    DeV.


  • Closed Accounts Posts: 20 Belgario


    When I stated cascade I meant catastrophic failure. All the same, I would hate to walk around sounding like Data. Thanks for the heads up ;)

    Replacing single neurons is a far cry from creating a sentient being, but wouldn't it be considerably wasteful :P

    Also, rat and human brains can afford to lose some neurons, so replacing them doesn't necessarily mean that they will be used.

    Anyway, it's late and me bed is calling. I will post a better reply tomorrow.

    Adios

    Belgario


  • Business & Finance Moderators, Entertainment Moderators Posts: 32,387 Mod ✭✭✭✭DeVore


    You aren't getting my point.

    Replacing 1 neuron with electric circuitry has been done without loss of sentience or any apparent alteration in neural pathway activity (as you would see in a brain-damaged person).

    An entire brain replaced and still functioning would clearly show AI.

    So either the entire brain replacement is possible, and QED.
    Or it's not, and the question becomes: when did the person whose neurons were being replaced slowly but surely... CEASE to be intelligent? Was it neuron 237823 or 237824?? :)

    DeV.


  • Closed Accounts Posts: 20 Belgario


    The thing is, I am getting your point, but at best you are just describing a cyborg, not a completely artificial life form. (Please spare me the Borg reference, I beg you)

    When a man has an arm replaced with a prosthesis, does that make it alive? I sincerely doubt it does :) even though it has been the plot of many a bad movie.

    At best, building an A.I. from the ground up will only produce a highly advanced expert system with the intelligence of an insect.

    Until we move away from the digital, clock-driven technology that we presently rely on, we will have nothing more than superfast glorified calculators. (Quantum computing?)

    One question for you all. If an A.I. were to be built, how would one teach or explain to it the meaning of semantics? As far as I know, no programming language exists "yet" that can do this.

    Anyway, I am once again up too late. See you all soon.

    Ciao

    Belgario


  • Registered Users Posts: 15,443 ✭✭✭✭bonkey


    Originally posted by Belgario
    Note that present neural nets learn by examining the environment/world and logging all information as "relevant". We do not! We can not! Think about it! If we had to note every moment we breathe, every move we make, every second of every day we would go crazy or simply cease to be.
    Actually, you couldn't be further off the mark.

    Breathing, movement, etc. are all almost entirely intelligence-driven actions. Therefore, not only are we noting every single aspect that you think would drive us crazy, but we are controlling them as well.

    The human "control system" is far more than our consciousness. It incorporates "pre-programmed" ability, automatic "hardcoded" reflex, multiple levels of interpretation of data, and so on. Simply put - everything is relevant.

    That's why all present neural nets suffer cascade failures within hours or days of being activated and therefore have to be erased and restarted. In order for the neural net to survive it must have some form of information filtering system. Even then it would only have the faculties of a single insect.
    Incorrect. No current artificial neural network suffers any such failure within hours or days. There are numerous ANNs which evolve to a stable condition and remain there, unless the network structure itself is deliberately tampered with, at which point failure sets in.

    The major problem with artificial intelligence at the moment is one of understanding.

    At the moment, we know how to build an ANN to solve (say) a trivial problem like an XOR, but once you get to any degree of complexity, we are generally groping in the dark.
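    For the curious, the XOR problem really can be solved by a tiny net. A pure-Python sketch (the layer size, learning rate and epoch count are arbitrary choices, not from any particular source):

```python
import math, random

random.seed(0)
sig = lambda x: 1.0 / (1.0 + math.exp(-x))   # logistic activation

H = 4                                         # hidden units (arbitrary)
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # + bias
w2 = [random.uniform(-1, 1) for _ in range(H + 1)]                  # + bias

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]         # XOR

def forward(x):
    h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
    y = sig(sum(w2[i] * h[i] for i in range(H)) + w2[H])
    return h, y

for _ in range(20000):                        # plain stochastic gradient descent
    for x, t in data:
        h, y = forward(x)
        d_out = (y - t) * y * (1 - y)         # output delta (squared error)
        for i in range(H):
            d_hid = d_out * w2[i] * h[i] * (1 - h[i])
            w2[i] -= 0.5 * d_out * h[i]
            w1[i][0] -= 0.5 * d_hid * x[0]
            w1[i][1] -= 0.5 * d_hid * x[1]
            w1[i][2] -= 0.5 * d_hid
        w2[H] -= 0.5 * d_out
```

    Roughly 30 lines for XOR; as the post says, the structure for anything non-trivial is where the groping in the dark begins.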

    My final year project in 1994 (for example) was using genetic algorithms to generate ANNs for specific problems. The underlying theory being that there simply is no scientific model which can be applied to a problem to show us what the ANN should be like. We can determine the inputs and outputs, but the interior structure is mostly a dark science.
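    The genetic-algorithm idea in miniature (a sketch only: here the GA evolves a bitstring toward all ones, a stand-in for a fitness function that would really score candidate network designs):

```python
import random

random.seed(1)
N, POP = 20, 30
fitness = lambda bits: sum(bits)              # "OneMax": count the ones

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                        # truncation selection
    children = []
    while len(children) < POP:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, N)          # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.1:             # occasional point mutation
            i = random.randrange(N)
            child[i] ^= 1
        children.append(child)
    pop = children
best = max(pop, key=fitness)
```

    Swap `fitness` for "build this ANN and measure its error" and you have the shape of the project described above, minus all the hard parts.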

    We have the computational power to be able to run far more powerful ANNs than we do at the moment. The problem is that we lack the ability to design an ANN for highly complex situations.

    This is where science lets us down. Quantitative analysis shows us that problem X requires solution Y. When dealing with a small number of permutations of situations/solutions, it is relatively simple to generate an ANN. When we get to larger and more vague problems, where there is no direct quantitative approach, we run into problems. If we knew how to quantitatively assess the situation/solution combinations, we could build a deterministic system with no difficulty.

    The difficulty is that when we go outside these areas, we are mostly stabbing in the dark to build the ANN!

    Now, I've only been discussing ANNs here. These are not the only possibility which could lead to AI. Other possible approaches exist, including (for example) fuzzy logic.
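    Fuzzy logic in a nutshell: truth becomes a degree in [0, 1] rather than a yes/no. A toy sketch (the membership ramps and temperatures are invented for illustration):

```python
def hot(temp):    # membership in "hot": ramps 0 -> 1 between 20C and 35C
    return min(1.0, max(0.0, (temp - 20) / 15))

def cold(temp):   # membership in "cold": ramps 0 -> 1 between 20C and 5C
    return min(1.0, max(0.0, (20 - temp) / 15))

def fan_speed(temp):      # "IF hot THEN fan on", scaled by degree of hotness
    return 100 * hot(temp)

def heater_power(temp):   # "IF cold THEN heat on", scaled by degree of coldness
    return 100 * cold(temp)
```

    At 27.5C the room is "0.5 hot", so the fan runs at half speed - no hard threshold anywhere.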

    So, is AI possible? Of course it is. In theory. In practice, our lack of understanding on how to approach the problems will limit us severely.

    jc


  • Registered Users Posts: 14,148 ✭✭✭✭Lemming


    I'd like to make a distinction here as to what we're debating.

    On the note of creating programs that make intelligent decisions, we have already succeeded, even if these programs are pre-programmed knowledge bases or the like. Therefore you can say that we have created A.I. already. We have created something that can make intelligent decisions based on certain criteria.

    On the other hand, what I think we are debating here is really A.C. (otherwise known as "Artificial Consciousness"). The creation of programs that are aware, and make decisions "on the fly".

    Bonkey, you stated:

    Breathing, movement, etc. are all almost entirely intelligence-driven actions. Therefore, not only are we noting every single aspect that you think would drive us crazy, but we are controlling them as well.

    Breathing and movement are VERY bad examples to use here. Breathing is not intelligence-driven, it's instinct-driven. The fact that we can consciously control it does not matter, since by definition we have to actively think about controlling our breathing. That's intelligence overriding instinct for some temporary purpose. If you were knocked out, what's keeping you breathing?? Sure as hell not intelligence.

    Movement is also a bad example, by ambiguity. What type of movement are you referring to? Reflex movement?? Not intelligence-driven. Normal movement?? To an extent. Once over the initial decision, automation generally kicks in until you either change the decision or your body tells you it's incapable of going on.

    I would however completely agree with you on the score of a problem of understanding what exactly an A.I. is. No disagreements from me there.

    I'd also like to mention something that yourself and Belgario discussed, namely the problem with neural net algorithms. Although NNs are a step in the right direction towards mimicking the human brain, the main problem is that they need to be told what is correct. Liken it to the analogy of a child and a parent isolated inside a room with no stimuli. The child can ONLY know what the parent has told it. Therefore NNs generally need to be trained. While we (humans) are not that far off in reality when you think about it, we still have the ability to reason about abstract concepts without being shown the "correct" answer to a problem or argument.
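    That "need to be told what is correct" is supervised learning in a nutshell. A toy perceptron only learns AND because every example arrives with its correct label (a sketch; the learning rate and epoch count are arbitrary):

```python
# Perceptron learning rule on the AND function: the net is corrected
# towards the supplied label after every example it gets wrong.
weights, bias, lr = [0.0, 0.0], 0.0, 0.1
labelled = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND

def predict(x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

for _ in range(20):
    for x, target in labelled:
        err = target - predict(x)    # zero when the label is matched
        weights[0] += lr * err * x[0]
        weights[1] += lr * err * x[1]
        bias += lr * err
```

    Remove the labels and the update rule has nothing to push against - exactly the child-in-a-room problem described above.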

    Anyway ... its early, I'm ranting, I'll stop ;)


  • Registered Users Posts: 15,443 ✭✭✭✭bonkey


    Originally posted by Lemming
    Breathing and movement are VERY bad examples to use here. breathing is not intelligence driven, its instinct driven. The fact that we can consciously control it does not matter, since by definition we have to actively think about controlling our breathing. That's Intelligence overriding Instinct for some temporary purpose. If you were knocked out, what's keeping you breathing?? Sure as hell not intelligence.

    There is a major distinction between "intelligence" and "conscious thought". Are you going to claim, for example, that your subconscious is not part of your "intelligence" because it processes at a level that you do not (and cannot) directly access with your conscious cognitive processes?

    Instinct is part of the intelligence "process". It involves non-conscious thought, but it does require brain function, as opposed to reflex, which does not necessarily.

    If it is not intelligence-controlled, then why does someone suffering from brain death lose the ability to breathe? If it were purely an automatic process, the body would not need its brain to breathe, and breathing would keep the heart going, which in turn would keep all the motor functions running, which would allow the body to react to stimuli which provoke reflex reactions.

    None of this happens. You kill the brain, and the body will stop breathing, which in turn denies oxygen to other areas which kills them.

    Movement is also a bad example by ambiguity. What type of movement are you referring to? Reflex movement?? Not intelligence-driven. Normal movement?? To an extent. Once over the initial decision, automation generally kicks in until you either change the decision or your body tells you it's incapable of going on.
    I agree on reflex movement, but ordinary movement is completely intelligence-controlled. Your brain - your cognitive processes - is controlling almost every movement. It keeps your balance. The fact that you do not consciously think about the individual sequences of movement does not mean that your brain is not performing this processing.

    You can argue that motion is a simple rules-based process, but that is blatantly untrue. Look at the unevenness of surfaces we can walk across, holding a cup of coffee, and not spill a drop. We can progress to a certain level of unevenness without even looking at the terrain. As the terrain gets more and more complex, we need to focus more of our visual attention on the path, but do not generally need to think any harder about how to move.

    This is all part of our human *intelligence*, which spans an area far greater than our mere consciousness.

    Which then brings us back to whether or not we already have "intelligence-based systems". A truly intelligent system is one which can determine the correct answer in the absence of a specific set of rules. Rules-based expert systems are not intelligent. ANNs or GAs used to diagnose medical conditions with a high degree of accuracy may be classed as intelligent, but only in an incredibly limited fashion.

    jc


  • Registered Users Posts: 68,317 ✭✭✭✭seamus


    For anyone interested in what kind of research/innovation is going on in this field check out: http://www.cs.ucd.ie/staff/nick/default.htm

    It's my Java lecturer's homepage; he's working on AI stuff at the mo, apparently. I haven't checked it out, way too hungover :( damn Halloween.


  • Closed Accounts Posts: 1,136 ✭✭✭Bob the Unlucky Octopus


    Originally posted by DeVore


    Here's a telling argument for why it SHOULD be possible.

    We (humans) have successfully replaced a single neuron with a wire filament and the recipient (a rat I believe) functioned perfectly normally; in fact, as far as I am aware, we have interfaced quite a number of neurons in this way.

    If you replaced all the neurons one by one, when would the (imaginary) being cease to be intelligent?

    DeV.

    DeV, with the greatest respect (/me prostrates self on ground), that argument doesn't really hold beyond basic levels of nervous system organization. A single neuron is a far cry from motor units, glial cells, prion receptors and the other vast and complex structures that make up the higher CNS. There is so much we have yet to discover about how the human brain undergoes cognition, self-awareness and establishment of learning patterns, that we can't begin to comprehend how an Artificial system might do the same. Currently, artificial learning systems are unable to provide human assessment variables for consideration. Something as simple as a code of behavior is impossible for an artificial system to formulate. An independent point of view is what defines who we are to many people. Identity and self-cognition are also difficult issues to address.

    I'm not saying it's impossible, just very far away :)

    Occy


  • Registered Users Posts: 3,126 ✭✭✭][cEMAN**


    Yeah.....still about 3 or 4 years away......


  • Registered Users Posts: 1,305 ✭✭✭The Clown Man


    What Occy said.

    When we actually understand exactly how the brain works then it might be possible to replicate it on a different medium.


    But here is something that has been mashing my brain for a while:

    If a truly complex AI were to be achieved that has the same capabilities as a human brain, it should then be possible to replace a damaged part of the human brain (damaged by cancer, maybe) so that the person could function normally, and the AI controlling the artificial part of the brain would perform the exact functions that the replaced part did (sensing neural charges and each of the different chemicals that the old part would have sensed, and arriving at the same conclusions/actions).

    Now if this is done, and in theory it should work, even if the person functions the exact same way as before, is he still the same person?

    Nothing has changed - the brain functions the exact same way. If you change an arm for a prosthetic one, surely you are the same person. What is the difference with changing a part of the brain if it does exactly what the old part did?

    If this worked and were possible (obviously a long way down the line), would it then be possible to change the rest of the parts of the brain, transferring memory banks etc. to the appropriate places, so that an artificial brain controls the person in the exact same way that it did before? An exact replica.

    Now, does this then mean that the person has the same consciousness and therefore can live forever on a more maintainable structure? The consciousness would be the same in every way, but would the original person be there?

    Where is the line where the person dies and the computer takes over? On replacing what part?

    I like to think of the cerebrum as the easiest part to replace, dealing solely with "if this chemical then this chemical." I may be wrong on that, but if you replace the cerebrum with a computer that does the same thing, surely you are the same person ...

    Best of luck. :p


  • Closed Accounts Posts: 4,731 ✭✭✭DadaKopf


    OK, isn't Artificial Intelligence the same as artificial consciousness? Machines can compute, this is not disputed, but the real issue is whether they can apply any semantic content to their computations. Would artificial intelligences ever be able to have 'mental states' that can intend toward objects (desire, want etc.)?

    Is it possible for computers, made from very different stuff than our brains, to perceive the outer world (through whatever senses they have) and use their sense data to make sense of it and to create meaning out of it? To be able to have intending mental states toward objects?

    Plenty of people would say they'd just be mimicking human actions but others say they, one day, would be fully conscious.

    One big problem strikes me about all this: if human programmers were to create true AI, genuine AI could only be programmed with some very basic rules (imitating nature), and from that the AI's computing system would have to be able to evolve to the point where it could make sense of its own sense data - to apply its own system of semantics, of reference and referent. If all we can give it is 'derived intentionality', given that our own highly complex brain has evolved up to this point, then it would have to evolve, to write its own programs.

    So, the other minds problem now comes into focus: how could we actually KNOW whether this thing is conscious, since it would not necessarily be anthropomorphic? The Turing Test wouldn't be good enough because it's very weak and a completely behaviourist model, and John Searle (despite his stupidity) shows this is not, on its own, a good way to prove intelligence. Maybe the AI's self-written code would be unintelligible to a computer scientist.

    I know this is mostly about the problem about knowing how an AI would be conscious but it's still important, apart from the practical considerations of silicon chips and code which would actually create mental states of genuine intentionality.


  • Closed Accounts Posts: 1,136 ✭✭✭Bob the Unlucky Octopus


    By cerebrum (the whole upper brain) I think clown means cerebellum, the brain's motor-coordination centre. I'd be inclined to think that the medulla oblongata would be the easiest part of the brain to replace with an artificial construct, dealing with simple functions as it does. Simple, yet crucial however - heart rate, involuntary respiration and the fasting state of human metabolism. Not entirely sure I'd be comfortable with an artificial construct running those functions for me. Well...maybe if it wasn't running Windows :p

    Occy


  • Closed Accounts Posts: 190 ✭✭Gargoyle


    Talk to me about AI after you've sat in front of a computer getting a compile error because there's a missing semicolon in your code! lol


  • Closed Accounts Posts: 1,136 ✭✭✭Bob the Unlucky Octopus


    As I said...as long as it isn't running windows...


  • Closed Accounts Posts: 875 ✭✭✭EvilGeorge


    Just a few things to add.

    How does one program feelings? Each person's reactions to situations - i.e. deaths, happiness, music... - are different. Would an A.I. machine have its own individual feelings, from some sort of random function choosing modules, or whatever the programmer wanted to add in?

    Also, say for example the feelings were real - what's stopping it from wanting to take over? If it's intelligent and has an online connection, it could do a lot of damage - look at viruses today.

    Say someone created a physical body with AI running it - what's stopping it from becoming a psychopath or whatever? Wanting to achieve goals is something that we do, so if one day an AI is created to our present level of sophistication it may want to take over (become racist against humans) - I know it's sci-fi, but look at the ideas which came from The Terminator.

    Point being: is it a good thing or a bad thing if it were possible?


  • Registered Users Posts: 11,389 ✭✭✭✭Saruman


    What is to stop them going nuts and on the rampage? The same thing that stops us: our own personalities and individual morals. If AI reaches our level of sophistication then it will have our same ideals and will know good from bad, so it will be a personal choice of the AI whether it goes on a killing spree or views itself as better than humans, OR whether it becomes enlightened and sees itself as no different to us.

    No, the real question could be how WE see AI. Do we see something that is probably physically stronger and smarter in terms of how fast it can process info and think (not necessarily more intelligent, though!), and are we so concerned by this that we hunt them down - a witch hunt, if you will - and turn into a lynch mob, forcing them to defend themselves and in turn seek revenge? That is our history; we have done this throughout, so it's certain we will again. But since AI has yet to come about, we can't say the same for them! In short my answer is "Wait and see!"


  • Registered Users Posts: 3,126 ✭✭✭][cEMAN**


    Actually I think this is a legitimate point to talk about......

    If we work our way towards creating a fully conscious, semantically aware being, then along the way we will reach the point of a purely logical being.

    Now, one thing that sets us apart as humans is our total lack of sense hehe. We are, as a whole, IDIOTS. We war and destroy ourselves. We let petty emotions control our lives. We make unfair and incorrect decisions based on bias and judgement.

    In relation to even just war alone - a purely logical being would calculate that by destroying the human race there is an AMAZINGLY increased probability that there will be no anti-survival tendencies towards these beings. E.g. if we start a nuclear war, they are at risk. E.g. if we waste planetary resources when solar energy is available, then we must be working against them.

    Termination ensues.

    It's simple - someone kills someone else, we take it as an act of passion or insanity. An artificial creature of logic takes it as a defect, and if it has happened before it has a 100% chance of happening again, and the only way to prevent that is to deny this person the chance, i.e. terminate them.

    It may seem strange, but what you have to understand is that we send in exterminators to fumigate a house against bugs. Now, they really have as much right to life as we do, if not more, because in a LOT of cases they aid the ecosystem.

    We however see them as pests, antiproductive towards our everyday lives. Therefore we terminate them.

    Now, we are compassionate beings, not ruled entirely by logic, but we logically see these as pests in our environment and terminate them. What's to say beings of PURE logic won't see us as useless to their environment, using up resources they need, and terminate us? It's only logical.


  • Registered Users Posts: 1,305 ✭✭✭The Clown Man


    Originally posted by Bob the Unlucky Octopus
    By cerebrum (the whole upper brain) I think clown means cerebellum, the brain's motor-coordination centre.
    Occy

    Oops. :p

    As I said, brain tends to be mashed when thinking about this. :p

    Thx Occy :D


  • Registered Users Posts: 897 ✭✭✭Greenbean


    What we regularly call "human" and "real intelligence" - those leaps of faith, those emotions, the laws of the English language - a purely logical being would probably regard as very stupid. Who really is the smart being?

    Seamus, you mention Nick Kushmerick in UCD - he's the wrong AI person to talk to; his AI knowledge mostly revolves around intelligently retrieving data and structures from the internet. Instead look up Ronan Reilly - http://www.cs.ucd.ie/staff/rreilly/default.htm. He is way, way more relevant when it comes to true AI or I (true & artificial cancel). He focuses on collaborative cell assembly and is very much on the cutting edge of neuroscience and theories on how the brain works.

    DeVore mentions the replacing of a neuron in the brain with an artificial circuit. If this replacement completely allows for all the functions which a real neuron performs (and I doubt that it does) then sure, it would work. A real neuron must release the correct chemicals from its axon; it must be able to strengthen (thicken) and weaken (not maintain) its synapses according to the amount of electrical pulses passing through it. It must also delay and propagate signals according to the correct learned code. It's probably quite a simple building block for the brain, but its nuances are subtle and very important. The overall structure it provides must allow for the complex and abstracted structure of the brain.

    The brain is an organ that goes from the highly "hardcoded" neuron patterns, such as for smelling, breathing etc. - typically genetically coded, much of which we are unconscious of - right up to the "plastic" and malleable parts of our brain with which we think and which we can readily reprogram. No serious distinction should be made between genetic and non-genetic parts of the brain, similar to the way that hardware and software are essentially the same thing, just less and more malleable.

    Connectivity and the connection patterns between neurons seem to be the key to the self-aware intelligence we have. We seem to have this layer of intelligence whereupon we can symbolise things that other animals either can't or have huge difficulty in doing. It's apparently extremely difficult to extract general abstract meanings from items such as a "ring" symbolising marriage.

    Meh, I could go on, but I don't know this stuff in depth enough - but I have gained a sense of the trickiness of the topic. I can assure you though that there is a huge, huge amount of good philosophical, theoretical and scientific debating material in this topic - it's probably our most exciting frontier in the development of man at the moment. See http://richardbowles.tripod.com/ca/intro.htm for an introduction to Hebbian cell assemblies.


  • Closed Accounts Posts: 512 ✭✭✭BoneCollector


    Before we try to address the application of A.I.,
    should we not first address what comes before intelligence?
    One would think consciousness is the ultimate part of self-awareness.
    After all, being self-aware is an emergent property of all five senses.
    Being able to sense one's environment leads to decision making, which leads to intelligence.
    Each sense adds to the awareness.
    Taking this all together, self-awareness becomes one of your senses - a sixth sense above the others.
    Intelligence is an emergent property of all 6 senses, which results in experience, which is the sum of all parts.


  • Closed Accounts Posts: 4,731 ✭✭✭DadaKopf


    Before we try to address the application of A.I.,
    should we not first address what comes before intelligence?

    No, the first task is to define what 'intelligence' is. A pedantic question but vital all the same.


  • Closed Accounts Posts: 512 ✭✭✭BoneCollector


    Okay then..

    The question would be.. what level of A.I. are we trying to achieve?

    Basic level:
    A computer can be given A.I. by a programmer who gives the computer a set of conditions from which it makes a determination based on input, either manual or voice recognition.

    This is basic, limited, interactive A.I., which limits the user to a strict set of conditions.

    Next level:

    Anticipation of the unexpected or unknown.
    At this level, the program has to be written to re-write itself or introduce new code or sub-routines.
    Complex programming would test and modify these new subroutines until they either don't work or become an integral part of the original programming. A.I. (experience). More intelligence.
    Adding to this, sensors - i.e. optical, shock, balance, heat etc. - make the program more attuned to its environment and accumulate even more data, from which the next set of routines will be developed.
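    That "next level" - a program installing new sub-routines for unanticipated input - can be caricatured in a few lines (a toy sketch; the situations and responses are invented):

```python
# A rule table that grows at runtime: when the system meets an unknown
# situation, it installs a new rule learned from external feedback.
rules = {"greeting": "say hello"}          # starting knowledge base

def act(situation, feedback=None):
    if situation in rules:
        return rules[situation]
    if feedback is not None:               # unknown case: learn a new rule
        rules[situation] = feedback
        return feedback
    return "no rule; request feedback"
```

    The real difficulty, of course, is generating and testing the new routine itself rather than being handed it - which is exactly what separates this toy from the level being described.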

    Question:
    What do we achieve with A.I. if it were completely successful?


  • Closed Accounts Posts: 4,731 ✭✭✭DadaKopf


    No, first you have to define intelligence - you've skirted around the question. One would assume that the mandate of cognitive science is to develop an artificial intelligence identical to the human mind-brain.

    But the demand of the scientists and the philosophers is to, first, convincingly describe what intelligence is.


  • Registered Users Posts: 897 ✭✭✭Greenbean


    Intelligence to me seems to be all about making choices according to your goals, given your input.

    The typical animal/human will make the best possible choice that it can make which will best enable its survival (or should there be a complex social organisation or culture, to best enable the survival of the organisation/culture), according to some genetic coding and according to the environment.

    An extremely fast and capable computer is likely to have no goals. Its choices will not be mediated by its own desires - so in this sense it's just a tool. There's a question of whether, if you could create an amazingly "capable" and complex computer, it would generate its own goals should it become self-aware and able to evolve its routines and subroutines. I personally don't think so... it doesn't have hundreds of millions of years of evolution behind it, backing it up to try to "survive" as animals would. Self-preservation isn't an inherent part of being self-aware.

    The only way I can see a computer gaining its own goals is through an interface with the environment. We tend to think of our surrounding environment as "external" to us, but in many ways it's internal to us - it is more like an extension of our brains. This might happen with a capable computer: from this external environment, which holds things like "culture", "laws" and "morals", it may generate its own goals and become a proper intelligence as I see it (rather than a useless box of wires without goals). So you have this box of wires and transistors and whatnot... but you also have this living, evolving intelligence that already exists externally to it - the environment - and once it connects to that, it can become intelligent. Just my (current) opinions - but still an attempt at going somewhere with the idea of intelligence.


  • Closed Accounts Posts: 512 ✭✭✭BoneCollector


    No, first you have to define intelligence -

    Hmm....
    Did I not just define this in my first post? ;)
    One can't gain intelligence on a level without being self-aware - if we are to talk about the tangible, adaptable decision making which comes from experience: making the correct choices, choices which are not detrimental to the one making them. Also, fundamentally, add being self-aware of the actual decision-making process itself (consciousness), and not just an automated true-or-false method.


This discussion has been closed.