
The Origin of Specious Nonsense. Twelve years on. Still going. Answer soon.


Comments

  • Registered Users Posts: 9,788 ✭✭✭MrPudding


    J C wrote: »
    Mr P, I don't need or ask for your respect ... and just as well, because you never accord any respect to me ... but your disrespect of me, says a lot more about you (and any argument you are trying to make) than it does about me.
    No, it really just says something about you and your arguments. There are a number of Christian posters that I have huge amounts of respect for. I believe they are wrong in their beliefs, obviously, but I respect them and how they argue; I even respect their arguments to an extent. There is nothing about your arguments or your method of argument that deserves respect.
    J C wrote: »
    It would be nice if, some time, you were to substantiate your unfounded personal slurs on me with some valid points about the content of what I actually post ... and not merely your blatantly biased views of my abilities ... which we are all already familiar with ... due to your constant repetition of them.
    Seriously? I don't need to substantiate anything. You provide all the substantiation required. For example, I would suggest that your response to oldrnwisr, if it ever comes, will be another prime example of exactly what I am talking about.
    J C wrote: »
    ... but then, of course, you would have to debate my ideas ... and as they are incontrovertible ... this might result in you having to 'break sweat', for quite some time ... with little hope of success.
    People have been debating your ideas for over a decade. People have been showing your ideas to be flawed and delusional for over a decade. That you refuse to recognise this is not something I or anyone else can do anything about.

    I appreciate that you have a lot invested in your world view. Perhaps, like your old buddy wolfsbane, without your religion you would become a rapist and murderer, and therefore simply cannot afford to accept how wrong you are. I get that. But I don't need to debate with you. I probably know more about evolution than the average person on the street, and apparently considerably more than you, but there are others on this board far more capable than I, so I will admit, I leave the work to them. And besides, you simply ignore their arguments; I am not going to give you more to ignore.

    I think we all know you cannot or will not change your views but every post that shows your views to be idiotic nonsense is useful in that it helps to prevent others being conned and poisoned by your nonsense.

    So I am afraid I will never respect you, nor will I respect your idiotic ideas, but I do pity you. That is as good as it will get I'm afraid.

    MrP


  • Registered Users Posts: 14,615 ✭✭✭✭J C


    MrPudding wrote: »
    So I am afraid I will never respect you, nor will I respect your idiotic ideas, but I do pity you. That is as good as it will get I'm afraid.

    MrP
    I don't need your pity or your respect ... and repeating that you disrespect me and my views does nothing for the debate ... other than prove that you cannot take an objective view of anything I might say.

    On the other hand, as a Christian, I love you, even though you clearly express enmity towards me at every opportunity.
    Is this lack of empathy and continuous expression of enmity towards me due to your personality or your Atheist worldview?

    ... anyway enough about me and you ... I'll now crack on with responding to oldrnwisr's post.


  • Registered Users Posts: 14,615 ✭✭✭✭J C


    oldrnwisr wrote: »
    Apologies JC (and everyone else) for the delay in responding to this. Work commitments (and the steam sale) got in the way.
    No problem ... and apologies, for the same reasons, for my delay in responding to you.
    oldrnwisr wrote: »
    Before I get to your specific "responses" to my last post, I want to revisit specified complexity, because it is clear from your post that, as on previous occasions in this thread, you lack a fundamental understanding of the topic under discussion. A first-principles primer on specified complexity should help to clarify things and should nicely obviate your previous "objections". I'm going to show how basic probability can be expressed in information-theory terms, how it relates to specified complexity, how this concept gets bastardized by Dembski, and how the overall idea of specified complexity is not valid as an argument against evolution.

    OK, so everyone understands basic probability, I suppose. If you have a deck of cards and you pull an ace, there are 52 total cards and 4 aces, so the probability is 4/52 = 1/13, or about 0.077. This can also be expressed in terms of information. The amount of information (B) in a given probability (P) is:

    B = -log2(P)

    or alternatively

    P = 2^-B

    So, if you toss a coin 4 times and get 4 heads then the probability is 1/16, or 4 bits of information. Another way to look at this: there is a 1/16 chance of guessing any 4-bit number at random. The advantage of using bits comes from the size of the numbers we're talking about. Expressing large amounts (i.e. 100+ bits) of information as a basic probability gets very long very fast.
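    The conversion above is easy to sanity-check. Here's a minimal Python sketch (my addition, not from the original posts) of the probability-to-bits relationship B = -log2(P) and its inverse P = 2^-B:

    ```python
    import math

    # Probability of drawing an ace from a standard 52-card deck.
    p_ace = 4 / 52
    print(round(p_ace, 3))           # 0.077

    # Information content in bits: B = -log2(P).
    def bits(p):
        return -math.log2(p)

    # Four heads in four fair coin tosses: P = 1/16 -> 4 bits.
    p_four_heads = (1 / 2) ** 4
    print(p_four_heads)              # 0.0625
    print(bits(p_four_heads))        # 4.0

    # And back again: P = 2^-B.
    print(2 ** -bits(p_four_heads))  # 0.0625
    ```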

    Expressing a single probability as bits of information is useful, but in practice we're not dealing with a single instance of probability. The creationist argument deals with a sequence of probabilities, namely the probability of, for example, a 100aa-chain protein. So we need to quantify the complexity of the sequence. The physical complexity C(T) of a sequence of length T is simply the probability of the sequence. So, for a coin toss, the probability of a specific 1000-toss sequence is 2^1000.
    The probability of getting one specific 1000-toss sequence of a normal head-and-tail coin is the inverse of the combinatorial space of such a toss sequence and it isn't 2^1000 ... it is actually 9.33 x 10^-302, which is so far beyond the inverse of the Universal Probability Bound (1.00 x 10^-150) as to be an impossibility to achieve.
    For those interested, the combinatorial space of such a toss sequence is 1.07 x 10^301.
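    The two figures quoted here can be checked directly with Python's arbitrary-precision integers (a quick sketch of my own, not from the original posts):

    ```python
    # Combinatorial space of 1000 tosses of a fair coin: 2^1000 outcomes.
    space = 2 ** 1000
    # Probability of any one specific sequence is the inverse: 2^-1000.
    p = 2.0 ** -1000

    print(f"{float(space):.3g}")  # 1.07e+301
    print(f"{p:.3g}")             # 9.33e-302
    # Far below the 10^-150 universal probability bound discussed later:
    print(p < 1e-150)             # True
    ```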

    oldrnwisr wrote: »
    However, here we can see the limitations of using physical complexity as an argument for or against evolution. If you toss a coin 1000 times and it comes up with 1000 heads then the probability is 2^1000. Similarly, a random 1000 toss sequence also has a probability of 2^1000. Now, when you toss a coin 1000 times you expect it to have a random sequence. If it comes up heads 1000 times you are going to suspect that there's something wrong with the coin. So you need to quantify exactly how unusual the sequence under consideration is. Since the probability of a random and non-random sequence are the same, physical complexity is a useless descriptor. This is where specificational complexity (not to be confused with specified complexity) comes in.


    Specificational complexity, put plainly, is used to measure how easy it is to describe the sequence under examination. So, in the example above of the random sequence, there's no way to write a program to spit out that sequence that is any shorter than just writing out the sequence for yourself. However, in the case of the 1000 heads sequence, you could write a computer program that repeatedly spits out heads 1000 times. This could be achieved using a vastly smaller amount of information using something like this:

    printf("%s",std::string(1000,'H').c_str());

    So the specificational complexity K(T) is the amount of information that is needed to describe a sequence with physical complexity C(T). So a random sequence will have a high value for K(T), while a "deliberate" sequence will have a low value for K(T). It's also worth noting at this point that K(T) isn't an exact measurement, it is simply an upper bound. If you know of a way to describe a 1000 coin toss sequence using 50 bits of information then K(T) can never be larger than 50. It's possible you may someday find a way to compress the description even further but K(T) should never increase in value.
    The efficiency with which one can express a probability, although important for the communication/transmission/storage requirements for such data (which Shannon information theory is primarily concerned with), has no effect on the actual probability of the event occurring ... and when an event beyond the Universal Probability Bound (UPB) occurs, like 1000 heads being achieved in 1000 tosses of a coin, this is indicative of either a non-random i.e. deterministic system in operation (e.g. a double-headed coin is being used) or the appliance of intelligent action (i.e. somebody deliberately ensures that heads are produced every time, by turning over any tails).
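    The compressibility point discussed above can be illustrated with an ordinary compressor. This is a sketch of my own and only an analogy: zlib gives an upper bound in the same spirit as K(T), not a true Kolmogorov measure (which is uncomputable):

    ```python
    import random
    import zlib

    random.seed(0)

    heads = "H" * 1000                                        # trivially describable sequence
    rand = "".join(random.choice("HT") for _ in range(1000))  # random sequence

    # The all-heads run compresses to a tiny fraction of its raw size,
    # while the random sequence compresses far less.
    print(len(zlib.compress(heads.encode())))  # very small
    print(len(zlib.compress(rand.encode())))   # substantially larger
    ```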
    oldrnwisr wrote: »
    Now, we can begin to quantify how unusual a particular sequence is. In the example of the coin toss sequence, most of the sequences turn out to be random. Therefore,
    K(T) is, for practical purposes, equal to T (i.e. incompressible). In the case of our 1000-heads sequence, we can describe that sequence using, let's say, 50 bits. This means that there are 2^50 possible sequences which can be described using 50 bits. Therefore, specified complexity (i.e. the measure of how unusual the 1000-heads sequence is) is:

    2^50/2^1000

    or to make the maths easier

    C(T) - K(T)

    So, for our example above, the specified complexity of a 1000 head sequence is 950 bits.
    Once again, I have to say that the efficiency with which one can express a probability, although important for the communication/transmission/storage requirements for such data, has no effect on the actual probability of the event occurring.
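    The 950-bit figure follows from simple arithmetic. A brief check (my own sketch, assuming the hypothetical 50-bit description posited above):

    ```python
    import math

    # Physical complexity of a 1000-toss sequence: C(T) = 1000 bits.
    C = 1000
    # Assumed description length of the 1000-heads sequence: K(T) = 50 bits.
    K = 50

    # Specified complexity as defined in the post: C(T) - K(T).
    print(C - K)  # 950

    # Equivalently, the ratio 2^50 / 2^1000 expressed in bits:
    print(-math.log2(2.0 ** 50 / 2.0 ** 1000))  # 950.0
    ```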
    oldrnwisr wrote: »
    This is where William Dembski and his notion of a universal probability bound comes in. The UPB has been discussed at length before but the basic idea goes something like this.

    Let's say that you have a lottery game with odds of winning of 1 in 100,000, or P = 0.00001. Now let's say that you hire 10 people to each go and buy 10 tickets. We'll call the number of people A and the number of tickets bought by each B. Therefore the odds now become:

    1 - (1-P)^(A*B)

    or using Boole's inequality this can be expressed as

    A*B*P > 1 - (1-P)^(A*B)

    So, going back to our coin example for a second, we found earlier that the specified complexity of 1000 heads was 950 bits. Now, let's say that we have 1,000,000 people each performing 10,000,000 coin tosses. Would a 1000-heads sequence still be unusual? Well, in this case we can see that

    log2(A*B) ≈ 43

    Therefore, the new probability is 950 bits minus 43 bits or 907 bits which is still highly improbable.
    It's not just 'highly improbable' ... it's impossible. The probability against a specific 1000 coin-toss sequence being produced with 1 million people performing 10,000,000 coin tosses each is 9.33 x 10^-289 ... which is so far beyond the UPB as to be an impossibility, if the coins are fair and the tosses genuine i.e. a truly random system.
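    The 43-bit adjustment in the post above can be verified directly. A short sketch (my addition) of the union-bound arithmetic:

    ```python
    import math

    # A million people, each performing ten million trials.
    A = 1_000_000
    B = 10_000_000

    # Boole's inequality (union bound): P(at least one hit) <= A*B*P.
    # In bits, this subtracts log2(A*B) from the specified complexity.
    adjustment = math.log2(A * B)
    print(round(adjustment))        # 43
    print(950 - round(adjustment))  # 907
    ```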
    oldrnwisr wrote: »
    Dembski's UPB is long-winded and wrong in every important respect but the basic idea follows Boole's inequality above. In Dembski's example he takes the number of possible interactions as the product of the number of elementary particles in the observable universe (10^80), the maximum number of state transitions per second (10^45) and an upper bound on the age of the universe in seconds (10^25), for 10^150 total chances. Using Boole's inequality we can see the total number of bits attributable (according to Dembski) is 498 bits (often rounded to 500). Therefore, if the specified complexity of a particular protein sequence is above this threshold then it is, in Dembski's opinion, likely the result of design and not chance. Other creationists have since attempted to cite relevant biological examples of such design inference as I noted in my last post:

    "Complex specified information is a specified subset of Shannon information. That means that complex specified information is Shannon information of a specified nature, ie with meaning and/ or function, and with a specified complexity.

    Shannon's tells us that since there are 4 possible nucleotides, 4 = 2^2 = 2 bits of information per nucleotide. Also there are 64 different coding codons, 64 = 2^6 = 6 bits of information per amino acid, which, is the same as the three nucleotides it was translated from.

    Take that and for example a 100 amino acid long functioning protein- a protein that cannot tolerate any variation, which means it is tightly specified and just do the math 100 x 6 + 6 (stop) = 606 bits of specified information- minimum, to get that protein. That means CSI is present and design is strongly supported.

    Now if any sequence of those 100 amino acids can produce that protein then it isn't specified. IOW if every possible combo produced the same resulting protein, I would say that would put a hurt on the design inference."
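    The arithmetic in the quoted creationist passage is easy to check. A brief sketch (my addition; the 6-bits-per-codon figure is the quote's own assumption, not an endorsement of the argument):

    ```python
    import math

    # The quoted calculation: a 100-amino-acid protein at 6 bits per
    # codon (64 codons = 2^6), plus one 6-bit stop codon.
    bits_per_codon = 6
    length = 100
    total = length * bits_per_codon + 6  # stop codon
    print(total)                         # 606

    # Dembski's threshold: log2 of 10^150 chances, often rounded to 500.
    print(round(math.log2(10.0 ** 150)))  # 498
    print(total > 500)                    # True
    ```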


    Unfortunately for the creationist movement, there are several problems with Dembski's approach which invalidate specified complexity as an argument against evolution.


    1. Chance


    This is the Arcturan MegaElephant in the room. Specified complexity is a very useful tool to determine if a particular sequence is likely to have been the result of chance or a deliberate action. Of course, if the basic process you're examining is not governed by chance then specified complexity becomes absolutely meaningless. So it is with biology. The creationist example cited above describes the specified complexity of a 100-amino-acid sequence arising by chance. Its probability is so low that design is inferred. However, chance or design are not the only options. See, creationists seem to be working from this awful strawman of evolution where they think that this primordial sea of amino acids that they've heard about from "evolutionists" is like some gigantic tank of velcro balls, and that they just stick to each other randomly and eventually a protein is formed. In the real world, however, we can see that chance doesn't govern and isn't involved in protein synthesis.
    I fully agree that chance isn't involved in the production of functional proteins ... highly specified systems, tightly controlled by equally highly specified genetic code information, ensure this ... and it is the ultimate creator/source of this highly specified genetic code information that is at issue. Was it produced by natural/unguided systems, which are always observed to be either deterministic in nature (and therefore incapable of producing different functional specified series of information) or random (and therefore incapable of producing any significant amounts of specified information, especially information series with combinatorial spaces beyond the UPB)?
    The only known way of overcoming the limitations of deterministic and random i.e. natural spontaneous systems is observed to be the use of intelligent action.
    oldrnwisr wrote: »
    Even if we take modern protein biosynthesis we can see this to be the case.

    Proteins are synthesised by means of a process called translation where the ribosomes in a cell's cytoplasm assembles amino acids in a particular sequence according to a set of instructions from messenger RNA as seen below:

    [Image: peptide synthesis diagram (Peptide_syn.png)]
    Again, the issue is what the ultimate source is of the set of instructions by which a cell's cytoplasm assembles amino acids in a particular sequence. Again, deterministic or random natural/spontaneous processes are incapable of producing these instructions ... and the only known system for doing so is intelligent action - so the best available explanation for how these tightly specified instructions came about originally is through the appliance of intelligence.

    oldrnwisr wrote: »
    In this process the tRNA bound amino acids are assembled together by the ribosome using mRNA as a guide like a kind of biological zipper. Now to go into more depth on translation and cell biology and ribosomes and mRNA would take way too much time and I've bored everyone for long enough already so I've included wikipedia links above for anyone who wants to explore the topic in more depth. Long story short however, the process is not random and is instead controlled by the mRNA. Well, so what I hear you say. Doesn't that just push the question back a step? Aren't we just talking about the probability of mRNA being assembled by chance? Well, no. The origin and synthesis of RNA is not a chance process either and can be demonstrated from first principles. This has been discussed at length before but it bears repeating here (apologies to anyone who was around for this the last time for repeating myself):

    We begin with a primordial earth with just two basic assumptions:

    1) abundant hydrothermal activity

    2) the presence of a few basic organic compounds, particularly cyanamide, cyanoacetylene, glycolaldehyde, glyceraldehyde and also any inorganic phosphate.

    Anyone who wishes to challenge these basic assumptions, be my guest.

    Now, given these initial conditions, here is a sequence of how RNA is formed from basic compounds using only thermodynamics and basic physical chemistry.
    These are highly speculative ideas ... but, for the sake of argument, I'll accept that they could occur spontaneously ... but this doesn't explain the tightly specified nature of the information also in functional RNA. Your explanation is something like trying to explain the spontaneous production of the specified functional information on an Abba CD by theorising that plastic could potentially be spontaneously produced in a tar pit somewhere ... without explaining how it could be moulded into an optically active disk with a highly specified functional recording of an Abba concert on it.
    oldrnwisr wrote: »

    Step 1 - Formation of activated ribonucleotides


    1. Cyanamide and glycolaldehyde form a peptide bond to produce 2-amino-oxazole.
    2. 2-amino-oxazole combines with glyceraldehyde to form a pentose amino-oxazoline.
    3. Pentose amino-oxazolines combine with cyanoacetylene to form anhydroarabinonucleoside.
    4. Anhydroarabinonucleoside undergoes (in the presence of an inorganic phosphate) phosphorylation to become B-ribocytidine-
    2',3'-cyclic phosphate (an activated ribonucleotide).

    Now before, we continue, here's a graphic illustrating the process and the science supporting it:

    [Image: Fig1_Orlife.jpg]


    Synthesis of activated pyrimidine ribonucleotides in prebiotically plausible conditions


    Step 2 - Vesicle formation



    The next step is the formation of a montmorillonite bubble to act as a temporary cell wall. In 2011 a team from Harvard, Princeton and Brandeis universities showed experimentally that a stable, semi-permeable vesicle can form from natural montmorillonite clay around air bubbles present in the ocean.


    Semi-permeable vesicles composed of natural clay

    This is an important development for three reasons:

    1. The montmorillonite vesicle provides a stable compartment protecting anything in the interior from external reactions.
    2. Montmorillonite catalyses the polymerisation of ribonucleotides to form RNA.
    3. Montmorillonite catalyses the formation of fatty-acid vesicles leading to the development of a more stable and long-lasting cell wall inside the clay wall.

    With regard to the first point, the study above shows the stability of the montmorillonite cells.

    As for the second point, it has been demonstrated experimentally:

    Oligomerization of ribonucleotides on montmorillonite: reaction of the 5'-phosphorimidazolide of adenosine

    that montmorillonite catalyses the formation of oligomers from the activated ribonucleotides which we have already demonstrated above. These oligomers can reach as much as 50-mer lengths:

    [Image: ja061782kn00001.gif]

    One-Step, Regioselective Synthesis of up to 50-mers of RNA Oligomers by Montmorillonite Catalysis

    These ribonucleotides can permeate the vesicle but once formed are trapped within the protocell membrane.

    As for the third point, it has also been shown experimentally that montmorillonite catalyses the formation of fatty-acid vesicles.

    Mineral Surface Directed Membrane Assembly

    Once a fatty-acid vesicle is produced, the growth of the vesicle is autocatalytic, which has also been demonstrated experimentally:

    Autopoietic Self-Reproduction of Fatty Acid Vesicles

    Once this self-sustaining reaction has begun (sustained by the attraction of nearby lipids), the growing fatty acid vesicle begins to exert an outward pressure on the montmorillonite shell. From basic materials science we know that montmorillonite, being a ceramic material, has good strength in compression (hence protection from external forces) but is weak in tension. As a result the growing vesicle shatters the montmorillonite shell and the resulting protobiont is free to float in the primordial ocean. So now we have a protobiont consisting of a fatty acid membrane which is permeable to monomers and small molecules but impermeable to the oligomer now trapped within.

    The next step in the process is the growth of the oligomer to form RNA and other more complex biological polymers.

    The basic reaction sequence that is followed is similar to that used in polymerase chain reaction (PCR) testing.

    [Image: polymerase chain reaction diagram (Polymerase_chain_reaction.svg.png)]

    Now, here's where it gets interesting. We have above a mechanism for a reaction by which the oligomer inside the protobiont can form larger and more complex structures. However, what we are currently missing is something to kickstart this reaction. This is where the conditions of the early Earth come in. Given what we know from basic geology, physics and geography, it is likely that the early Earth was populated with a vast number of hydrothermal vents.
    Firstly, it has been shown that the protobionts described above are thermally stable at temperatures of up to 100 degrees:

    Thermostability of model protocell membranes

    At these elevated temperatures the strands of polymer begin to denature while being trapped inside the vesicle, while the vesicle itself expands, allowing more monomers to cross into the cell. The current then carries the cell away to a lower temperature, where the nucleotides acquired at high temperature can bond to the denatured polymer backbone, allowing for growth of the RNA. It can also lead to copying of the RNA. This is an important development. As the RNA inside the vesicle grows/copies, it increases the osmotic pressure inside the cell. This causes the vesicle to attract nearby lipids at an even greater rate, thus creating a larger cell. As these membranes grow they develop a tubular branched shape which can be divided by external forces such as shear stresses from thermal differentials in the ocean. Here's a nice little graphic to demonstrate what I mean.

    [Image: Fig2_Orlife.jpg]

    As the authors note in the paper above:

    "The strands of encapsulated double-stranded DNA can be separated by denaturation at high temperature while being retained within vesicles, implying that strand separation in primitive protocells could have been mediated by thermal fluctuations without the loss of genetic material from the protocell. At elevated temperatures, complex charged molecules such as nucleotides cross fatty-acid-based membranes very rapidly, suggesting that high temperature excursions may have facilitated nutrient uptake before the evolution of advanced membrane transporters. The thermostability of these membranes is consistent with the spontaneous replication of encapsulated nucleic acids by the alternation of template-copying chemistry at low temperature with strand-separation and nutrient uptake at high temperature. "


    So now we have a cell containing RNA which is capable of growth and reproduction using only basic chemistry and relying only on thermodynamics and physical forces.
    Again, you're effectively using 'a tar pit and other unlikely events' to explain how an Abba CD arose, with all of the high-quality, tightly specified functional information on it, that can be translated into music and song by an equally tightly specified CD player ... the inference of intelligent action is the only logical inference ... because the probability of any spontaneous process doing this will be vastly beyond the UPB ... for both the specified functional information on the Abba CD and the functional specified genetic information encoded within RNA in living organisms.
    oldrnwisr wrote: »

    2. Redundancy


    We've already covered this in a previous post but it should be reiterated here. As the creationist example cited above states:

    "Now if any sequence of those 100 amino acids can produce that protein then it isn't specified."

    Because Dembski is a mathematician and not a biologist this is another point that he gets completely wrong. Proteins aren't unique sequences of amino acids where even one change in an amino acid will completely upset the protein. As I've shown in my cytochrome C example, there is a massive amount of redundancy in protein structure. Also, it should be pointed out that this is true even if we restrict the protein to a definite number of amino acids. However, even this too can vary. This means that the algorithmic specified complexity of most proteins is very low indeed, and this would be true even if we had to take chance into account.
    Like I have said in my previous post #2634, I think that you may be confusing redundancy, with critical amino acid sequences. Redundancy is a good thing and an example of further levels of intelligent design in organisms ... as such redundancy systems are themselves intelligently designed (i.e. have complex specificity) and integrate perfectly with the systems they are designed to replace, in the event of failure/mutation of the primary system.
    What Dembski was talking about are critical amino acid sequences within proteins and other biomolecules, where any change in the sequence is observed to make the protein dysfunctional.
    Equally, redundancy in codons for amino acids per se doesn't affect the fact that specificity is still vital, for example, take the codons for Leucine and Valine.
    The codons for Leucine are CTT, CTC, CTA, CTG, TTA, TTG
    while the codons for Valine are GTT, GTC, GTA, GTG
    As can be seen, just one change in the first nucleobase changes the codon from one for Leucine to one for Valine ... which, if it were in a critical amino acid sequence, could destroy the functionality of the protein, thereby possibly leading to a lethal or semi-lethal issue for the organism concerned.

    So, even though there is considerable redundancy (as one would expect with an intelligently designed system) ... that redundancy is itself highly specified (and specificity is the hallmark of intelligent action/design).
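    The single-substitution point above can be checked mechanically. A small sketch (my addition) using the codon sets listed in the post:

    ```python
    # Codons for Leucine and Valine (DNA alphabet, as given in the post).
    leucine = {"CTT", "CTC", "CTA", "CTG", "TTA", "TTG"}
    valine = {"GTT", "GTC", "GTA", "GTG"}

    # Changing only the first base C -> G turns each C-initial Leucine
    # codon into a Valine codon.
    flipped = {"G" + codon[1:] for codon in leucine if codon[0] == "C"}
    print(sorted(flipped))    # ['GTA', 'GTC', 'GTG', 'GTT']
    print(flipped == valine)  # True
    ```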
    oldrnwisr wrote: »
    OK, time to wrap this up. The TLDR is this. Specified complexity is a useful tool to determine whether a particular sequence of coin tosses, bingo balls or votes is rigged or random. However, if the basic process isn't governed by chance then specified complexity is meaningless. In a biological context, protein synthesis doesn't happen by chance. It is governed by messenger RNA, a compound which is formed through simple chemistry. Protein synthesis isn't about chance, it's about the basic rules of physical chemistry, and that's why Dembski, and the creationists who cite him, are talking through their arses.
    RNA may, debatably, be formed by relatively simple chemical processes ... but the tightly specified, information-rich nucleotide sequences observed in RNA are independent of any deterministic chemical processes and contain functional specified information, which is the hallmark of intelligent action ... and not just any random or chemically determined sequence. Your description of RNA is like claiming that, because an optically versatile plastic just might be formed spontaneously, this explains how the tightly specified, high-quality functional information on a Windows 10 CD could also arise spontaneously.:)

    You are confusing the possible source of the medium with the possible source of the complex specified functional information it contains.


    I would also point out that, unlike me, you haven't actually addressed any of my points in my last post ... and instead you have just posted statements addressing issues of your own choosing ... with no quotes from any points that I have previously made.

    If you wish to invalidate my points, you need to address them ... by quoting what I have said and pointing out where and why you believe that I am in error.


  • Registered Users Posts: 8,573 ✭✭✭spacecoyote


    J C wrote: »
    So, even though there is considerable redundancy (as one would expect with an intelligently designed system) ... that redundancy is itself highly specified (and specificity is the hallmark of intelligent action/design).

    I'm not going to attempt to comment on the remainder of the post...will leave that to the in-house experts, but the above statement feels incongruous to me.

    Why, if you are designing something "intelligently" would you have an abundance of redundancy? Surely something that is designed would be devoid of redundancy.

    Like if I write a macro, I could add in a whole pile of extra code that adds no value to the design, but isn't adversely affecting it. But that doesn't strike me as an "intelligent" method of designing it. So, in my design, I'd strip out all of the redundant elements.


  • Registered Users Posts: 9,788 ✭✭✭MrPudding


    J C wrote: »
    I don't need your pity or your respect ... and repeating that you disrespect me and my views does nothing for the debate ... other than prove that you cannot take an objective view of anything I might say.
    I have been reading your nonsense for over a decade. Any objectivity I may have had in relation to anything you say on this subject died years ago when I realised that no matter what evidence was presented to you, you would be incapable of changing your mind, and you simply repeat the same delusional nonsense.
    J C wrote: »
    On the other hand, as a Christian, I love you, even though you clearly express emnity towards me at every opportunity.
    Is this lack of empathy and continuous expression of emnity towards me, due to your personality or your Atheist worldview?
    I neither want nor need your love, so I guess we are even. I don't have a problem with my personality, I am a very empathetic person, nor is there any issue with my worldview. Both allow me to have a wide circle of friends and acquaintances with varied beliefs. The problem is you JC, not me. Perhaps I am guilty of a lack of patience, but you have been spouting your delusional nonsense for years, and it is getting pretty irritating. In addition to that, I do believe you get special treatment on this forum, which I believe you are totally undeserving of.

    Whilst you do have some use as a tool to show the idiocy of certain beliefs, and countering your delusional nonsense is an excellent opportunity to show others exactly why pretty much everything you say is nonsense, and, hopefully, save them from making the same life mistakes you have made, I personally think that at this stage your utility is not worth the hassle of having to put up with your delusional nonsense.

    J C wrote: »
    Patience... anyway enough about me and you ... I'll now crack on with responding to oldrnwisr's post.
    Yeah, I'm sure it will be great. :rolleyes:
    Don't forget to reference anything you steal from another source. I wouldn't want you being accused of plagiarism, again.

    MrP


  • Registered Users Posts: 14,615 ✭✭✭✭J C


    I'm not going to attempt to comment on the remainder of the post...will leave that to the in-house experts, but the above statement feels incongruous to me.

    Why, if you are designing something "intelligently" would you have an abundance of redundancy? Surely something that is designed would be devoid of redundancy.

    Like if I write a macro, I could add in a whole pile of extra code that adds no value to the design, but isn't adversely affecting it. But that doesn't strike me as an "intelligent" method of designing it. So, in my design, I'd strip out all of the redundant elements.
    Redundancy is a critical part of design, to avoid the consequences of critical components failing ... e.g. including a spare wheel in a car, providing life jackets on a plane and lifeboats on a ship ... backing up data in case a server should fail, etc.
    In living systems, there are various auto-correction and back-up processes within the genome and cellular infrastructure ... and on a more mundane level ... we have two eyes, and should one be blinded, the other one could still provide us with sight.


  • Registered Users Posts: 14,615 ✭✭✭✭J C


    MrPudding wrote: »
    I have been reading your nonsense for over a decade. Any objectivity I may have had in relation to anything you say on this subject died years ago when I realised that no matter what evidence was presented to you, you would be incapable of changing your mind, and you simply repeat the same delusional nonsense.
    ... or so you say (repeatedly) ... without ever explaining why you believe what I say to be either deluded or nonsense.
    Because of this, it would seem that you are actually the one 'with your fingers stuck in your ears' refusing to even listen to what I say, in case the evidence I provide, might cause you severe cognitive dissonance with maintaining your Atheistic worldview.
    MrPudding wrote: »
    I neither want nor need your love, so I guess we are even.
    Please consider it a free gift to you then.:)
    Trading your disrespect of me for my love of you ... yes this is some kind of 'even' all right.
    MrPudding wrote: »
    I don't have a problem with my personality, I am a very empathetic person, nor is there any issue with my worldview. Both allow me to have a wide circle of friends and acquaintances with varied beliefs. The problem is you JC, not me. Perhaps I am guilty of a lack of patience, but you have been spouting your delusional nonsense for years, and it is getting pretty irritating. In addition to that, I do believe to get special treatment on this forum, which I believe you are totally undeserving of.
    I actually get away with nothing on this forum (nor do I want to) ... and I debate fairly, courteously and strictly in line with your charter at all times.
    MrPudding wrote: »
    Whilst you do have some use as a tool to show the idiocy of certain beliefs, and countering your delusional nonsense is an excellent opportunity to show others exactly why pretty much everything you say is nonsense, and, hopefully, save them from making the same life mistakes you have made, I personally think that at this stage your utility is not worth the hassle of having to put up with your delusional nonsense.
    Once again, you make these accusations and ad hominems about me ... but you fail to back them up with one scintilla of evidence.
    ... and I would have thought that the self-styled 'free-thinkers' on the A & A wouldn't be so afraid of my ideas, that they would deem it necessary to only have me here while I serve some ulterior motive for them. You do your atheist colleagues a great disservice with this claim, although, now that I think about it, a somewhat similar point has been made by at least one other poster, here before.

    Imagine the Christianity forum stating that they will only tolerate atheist comments whilst they serve the promotion of Christianity ... and if they ever seriously threaten the faith of those on the forum, they will be banned !!!:eek:
    ... now that I think about it ... some folks over there, have been making similar noises ... but I'd have thought that the guys on the A & A are made of sterner stuff !!!
    J C wrote: »
    ... anyway enough about me and you ... I'll now crack on with responding to oldrnwisr's post.
    MrPudding wrote: »
    Yeah, I'm sure it will be great. :rolleyes:
    Don't forget to reference anything you steal from another source. I wouldn't want you being accused of plagiarism, again.
    MrP
    ... accusations ... and more unfounded accusations !!:eek:

    ... but my faith says that I gotta love you Mr P.:)

    ... and all this enmity from you, is because I'm providing the evidence for Intelligent Design ... and pointing out the lack of evidence and supporting logic for spontaneous evolution ... it's not like I'm threatening your religious beliefs ... or anything ... or am I ?? !!!:eek:


  • Technology & Internet Moderators Posts: 28,804 Mod ✭✭✭✭oscarBravo


    J C wrote: »
    Redundancy is a critical part of design, to avoid the consequences of critical components failing ...

    [...]

    ... we have two eyes, and should one be blinded, the other one could still provide us with sight.

    One would imagine a truly intelligent designer would have concentrated on spare hearts, rather than eyes.


  • Registered Users Posts: 14,615 ✭✭✭✭J C


    oscarBravo wrote: »
    One would imagine a truly intelligent designer would have concentrated on spare hearts, rather than eyes.
    As two hearts would grossly interfere with each other's operation, and if one was suffering from cholesterol plaques, the other would be equally affected ... and if the blood was tending to clot this could produce two heart attacks, instead of one ... thereby adding to the pain ... without any gain.:)


  • Registered Users Posts: 25,580 ✭✭✭✭Timberrrrrrrr


    J C wrote: »
    As two hearts would grossly interfere with each other's operation, and if one was suffering from cholesterol plaques, the other would be equally affected ... and if the blood was tending to clot this could produce two heart attacks, instead of one ... thereby adding to the pain ... without any gain.:)

    A bit hypocritical when you said
    Redundancy is a critical part of design, to avoid the consequences of critical components failing


  • Registered Users Posts: 14,615 ✭✭✭✭J C


    A bit hypocritical when you said
    Sometimes when an essential component fails there is no practical back-up ... and it makes more sense to have the redundancy focussed on preventing failure, rather than coping with it.
    In the case of an engine, having oil and coolant levels well beyond ongoing short-term needs is how redundancy is built into its design to prevent engine failure ... and with the heart, the organic ability to counteract bad diet and circulation issues, with cholesterol regulating mechanisms as well as organic automatic repair mechanisms, should a minor heart attack occur, are the built-in redundancy factors.
    ... this is a much better design strategy than having two hearts, possibly working against each other.:)


  • Registered Users Posts: 148 ✭✭Mick_1970


    J C wrote: »
    Sometimes when an essential component fails there is no practical back-up ... and it makes more sense to have the redundancy focussed on preventing failure, rather than coping with it.
    In the case of an engine, having oil and coolant levels well beyond ongoing short-term needs is how redundancy is built into its design to prevent engine failure ... and with the heart, the organic ability to counteract bad diet and circulation issues, with cholesterol regulating mechanisms as well as organic automatic repair mechanisms, should a minor heart attack occur, are the built-in redundancy factors.
    ... this is a much better design strategy than having two hearts, possibly working against each other.:)

    Ah, now it all makes sense, your designer has thought of everything in his perfect design.
    Shame he wasn't intelligent enough to provide a separate channel for us to breathe through, a third eye to see behind us, stronger legs/back to support our upright stance amongst a host of other blatantly obvious improvements.

    Even a trainee universe designer would have considered this, while he was screwing around with an appendix.


  • Registered Users Posts: 8,450 ✭✭✭AllForIt


    I don't know why God would allow eye diseases like Glaucoma and then design fail-safe mechanisms just to counteract the failure in his design in the first place. Why not just design the eye to be perfect in the first place? If God is responsible for all this then it is true that he certainly does work in mysterious ways.


  • Moderators, Society & Culture Moderators Posts: 24,417 Mod ✭✭✭✭robindch


    MrPudding wrote: »
    Perhaps I am guilty of a lack of patience, but you have been spouting your delusional nonsense for years, and it is getting pretty irritating.
    If it's annoying you, then feel free to put JC on ignore. Left-click on JC's name in the panel to the left of the post and select 'Add JC to Ignore List'. Your life will immediately improve!

    MrPudding wrote: »
    In addition to that, I do believe you get special treatment on this forum, which I believe you are totally undeserving of.
    As per previous posts on this topic, JC is permitted to post in A+A not on account of anything which he/she brings to the party, but for the quality of some of the responses which other forum posters produce. So long as JC comports himself/herself within the general rules - soapboxing aside - he/she will be permitted to post in the forum generally. Any excessively tedious posting will see him/her restricted again to the creationism thread, and any more unbecoming posts after that will see him/her carded or banned from the forum.


  • Moderators, Society & Culture Moderators Posts: 24,417 Mod ✭✭✭✭robindch


    J C wrote: »
    ... this is a much better design strategy than having two hearts, possibly working against each other.:)
    Rubbish.

    There are plenty of organisms out there with more than one heart - octopuses and squid have three hearts. And if memory serves, earthworms have more than three.


  • Registered Users Posts: 8,573 ✭✭✭spacecoyote


    J C wrote: »
    Redundancy is a critical part of design, to avoid the consequences of critical components failing ... e.g. including a spare wheel in a car, providing life jackets on a plane and lifeboats on a ship ... backing up data in case a server should fail, etc.
    In living systems, there are various auto-correction and back-up processes within the genome and cellular infrastructure ... and on a more mundane level ... we have two eyes, and should one be blinded, the other one could still provide us with sight.

    Is that applying a more human definition of engineering though, in terms of redundancy? I get that if a fallible human is designing a piece of tech, like a car, then fail-safes make sense.

    But if you're a divine, infallible, creator, who engineered the human form, why design something imperfectly?

    Is the idea of redundancy not more in line with an evolutionary model?


  • Registered Users Posts: 1,594 ✭✭✭oldrnwisr


    J C wrote: »
    No problem ... and apologies, for the same reasons, for my delay in responding to you.

    No problem.

    J C wrote: »
    The probability of getting one specific 1000 toss sequence of a normal head and tail coin is the inverse of the combinatorial space of such a toss sequence and it isn't 21000 ... it is actually 9.33 x 10^-302 which is so far beyond the inverse of the Universal Probability Bound (1.00 x 10^-150) as to be an impossibility to achieve.
    For those interested, the combinatorial space of such a toss sequence is 1.07 x 10^301.

    Yes, I know it's not 21000. But I thought even you would be able to recognise a typo when you saw one. If we're having a conversation about a coin toss which is a 1 in 2 chance, and we're talking about 1000 such tosses then the logical inference is that the chance for the sequence is 2^1000. I originally tried to put the exponents in superscript but it wouldn't appear correctly so I had to go back and use a caret instead. Guess I missed one. However, since 2^1000 = 1.07x10^301, I'm not sure what this minor display of pedantry adds to your argument.
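    For anyone who wants to check the arithmetic both posters agree on, here is a quick Python sketch of the numbers (the numbers only, not anyone's argument):

    ```python
    from math import log10

    # Combinatorial space of 1000 fair coin tosses: 2^1000
    space = 2 ** 1000

    # Express it as m x 10^e (math.log10 accepts arbitrarily large ints)
    exponent = int(log10(space))        # 301
    mantissa = space / 10 ** exponent   # ~1.07

    print(f"2^1000 ~ {mantissa:.2f} x 10^{exponent}")
    # The probability of one specific sequence is the inverse:
    print(f"P ~ {10 / mantissa:.2f} x 10^-{exponent + 1}")
    ```

    which reproduces both figures quoted in the thread: 1.07 x 10^301 for the combinatorial space and 9.33 x 10^-302 for the probability of a specific sequence.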

    J C wrote: »
    The efficiency with which one can express a probability, although important in the communication/transmission/storage requirements for such data (which Shannon Information Theory is primarily concerned about) has no effect on the actual probability of the event occurring ... and when an event that is beyond the Universal Probability Bound (UPB) occurs, like 1000 heads being achieved in 1000 tosses of a coin, this is indicative of either a non-random i.e. deterministic system in operation e.g. a one headed coin is being used or the appliance of intelligent action i.e. somebody deliberately ensures that heads are produced every time, by turning over any tails.

    OK, either you didn't read the point that I (and Dembski) made or you failed to understand it.
    To clarify, physical complexity (i.e. basic probability of an event) doesn't actually inform us very much. The odds of tossing 1000 heads is 1 in 2^1000. But then the probability of any 1000 toss sequence is the same. Therefore physical complexity is a poor metric for assessing design. Dembski's complex specified information idea depends on matching the probability of an event to a given search space. Therefore, calculating the algorithmic specified complexity (i.e. the efficiency of description/transmission of the physical complexity) is a necessary step.
    In the context of the coin toss sequence, the probabilities (or physical complexities) of the following two sequences are both the same:

    HHHHHHHHHH

    HTHHTTHTHT

    So, the probability of the sequence doesn't tell us anything about how deliberate the sequence is. However looking at the sequence most people would recognise the first sequence as deliberate and the second as random. So what we need is a way to quantify this degree of intent. This measure is algorithmic specified complexity which measures the efficiency of description/transmission of the sequence. It is a necessary step in Dembski's explanatory filter.
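    Dembski's actual metric is more involved, but the intuition — that an orderly sequence has a short description while a random one does not — can be illustrated with a general-purpose compressor standing in for "description length" (a rough sketch only, not the real algorithmic specified complexity calculation):

    ```python
    import random
    import zlib

    random.seed(42)

    ordered = "H" * 1000
    shuffled = "".join(random.choice("HT") for _ in range(1000))

    # Compressed size as a crude proxy for description length:
    # "1000 heads" is a very short program; a random sequence is not.
    len_ordered = len(zlib.compress(ordered.encode()))
    len_shuffled = len(zlib.compress(shuffled.encode()))

    print(len_ordered, len_shuffled)
    ```

    Both sequences have identical probability (2^-1000), but the all-heads string compresses to a handful of bytes while the random one stays near its entropy limit — which is the distinction the post is drawing.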

    J C wrote: »
    Once again, I have to say that the efficiency with which one can express a probability, although important in the communication/transmission/storage requirements for such data has no effect on the actual probability of the event occurring.

    Yes, we heard you the first time.

    J C wrote: »
    It's not just 'highly improbable' ... its impossible. The probability against a specific 1000 coin toss sequence being produced with 1 million people performing 10,000,000 coin tosses each is 9.33 x 10^-289 ... which is so far beyond the UPB as to be an impossibilty, if the coins are fair and the tosses genuine i.e. a truly random system.

    Well, yes in the coin tossing sequence we're talking about relatively small numbers, just 10^6 actors and 10^7 attempts. However, in the amino acid sequence the numbers get much bigger. So in the case of amino acids, even if we took chance into account, we're talking about the number of actors being 10^50 and the number of attempts being 10^30. So even if we were talking about modern protein sequences, we are talking about a situation which is only slightly improbable. If we are talking about shorter sequences of the kind found in abiogenesis experiments then the odds against fall dramatically (i.e. to 1 in 10^40 or less).

    J C wrote: »
    Again the issue is what the ultimate source is of the set of instructions, by which a cell's cytoplasm assembles amino acids in a particular sequence. Again deterministic or random natural/spontaneous processes are incapable of producing these instructions ... and the only known system for doing so is intelligent action - so the best available explanation for how these tightly specified instructions came about originally, was through the appliance of intelligence.

    OK, the set of instructions as you call it is the mRNA. Also I don't know why you're being so vague by using the term cytoplasm.

    [Diagram: animal cell with numbered organelles (Animal_Cell.svg)]

    The cytoplasm, for those who didn't study biology is everything inside a cell membrane, excluding the nucleus (1,2 on the diagram above). Protein synthesis occurs in the ribosome (3 on the diagram, the tiny dots you can see in the image). The ribosome acts like an assembly line, stitching together amino acids to create a protein. The instructions come from the messenger RNA. As I've shown in my previous post, the development of RNA from abiotic chemical compounds can be demonstrated theoretically and experimentally. There's no need or place for intelligence in the mechanism.

    J C wrote: »
    These are highly speculative ideas ... but, for the sake of argument, I'll accept that they could occur spontaneously ... but this doesn't explain the tightly specified nature of the information also in functional RNA. Your explanation is something like trying to explain the spontaneous production of the specified functional information on an Abba CD by theorising that plastic could potentially be spontaneously produced in a tar pit somewhere ... without explaining how it could be moulded into an optically active disk with a highly specified functional recording of an Abba concert on it.

    Speculative? Hardly. Like I said there are only a few basic assumptions in my process and each of them are supported by the data. The hydrothermal activity and implications for chemosynthesis are well discussed in the literature. You can read a very nice summary article here.
    As for the other assumptions regarding the chemical composition of the early Earth, you are more than welcome to refute any or all of the papers where these assumptions come from:


    J C wrote: »
    Like I have said in my previous post #2634, I think that you may be confusing redundancy, with critical amino acid sequences. Redundancy is a good thing and an example of further levels of intelligent design in organisms ... as such redundancy systems are themselves intelligently designed (i.e. have complex specificity) and integrate perfectly with the systems they are designed to replace, in the event of failure/mutation of the primary system.
    What Dembski was talking about are critical amino acid sequences within proteins and other biomolecules where any change in the sequence is observed to make the protein dysfunctional.
    Equally, redundancy in codons for amino acids per se doesn't affect the fact that specificity is still vital, for example, take the codons for Leucine and Valine.
    The codons for Leucine are CTT, CTC, CTA, CTG, TTA, TTG
    while the codons for Valine are GTT, GTC, GTA, GTG
    As can be seen, just one change in the first nucleobase changes the codon from one for Leucine to one for Valine ... which, if it were in a critical aa sequence, could destroy the functionality of the protein, thereby possibly leading to a lethal or semi-lethal issue for the organism concerned.

    So, even though there is considerable redundancy (as one would expect with an intelligently designed system) ... that redundancy is itself highly specified (and specificity is the hallmark of intelligent action/design).
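    The codon tables quoted above can be put in code to show both points in this exchange at once — redundancy (several codons, one amino acid) alongside single-base sensitivity. A minimal sketch covering just the two amino acids named in the post:

    ```python
    # DNA codons for the two amino acids discussed above.
    LEUCINE = {"CTT", "CTC", "CTA", "CTG", "TTA", "TTG"}
    VALINE = {"GTT", "GTC", "GTA", "GTG"}

    def amino_acid(codon):
        """Map a codon to one of the two amino acids, or 'other'."""
        if codon in LEUCINE:
            return "Leucine"
        if codon in VALINE:
            return "Valine"
        return "other"

    # Redundancy: a third-base change often makes no difference...
    assert amino_acid("CTT") == amino_acid("CTC") == "Leucine"

    # ...but a single first-base change (C -> G) switches the amino acid.
    assert amino_acid("CTT") == "Leucine"
    assert amino_acid("GTT") == "Valine"
    ```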

    OK, I think this is as good a point as any to clear up some obvious misconceptions you have about redundancy and specificity.

    Firstly, redundancy. The first thing to clarify is that when we're talking about redundancy here we're not talking about functional redundancy (i.e. backup mechanisms like having two eyes as you mention in your response to spacecoyote). We're talking about structural redundancy (i.e. that a given protein can do the same job even if you make massive changes to its amino acid sequence). I'm not sure you've quite grasped the nature of protein structural redundancy and the implications for the idea that genetic information is composed of "tightly specified functional information" to borrow a phrase of yours. You see, the structure of a protein is related to its amino acid sequence, but only loosely so and the function of a protein is related to its structure, but again, only loosely so. Let's take one of our previous examples, cytochrome c.
    Cytochrome C is a protein found in many species which is about 100 amino acids long and is the kind of sequence that many creationists talk about. There are several points of interest about cytochrome C:

    1. Its length varies between 103 and 112 amino acids (Most higher order organisms have 104aa versions).
    2. In over 30 species with 104aa versions only 34 amino acids are conserved (i.e. do not change from species to species).
    3. The general structure of the protein follows a CXXCH (cysteine-any-any-cysteine-histidine) pattern. However, the variation in the overall sequence is so great that biologists have had to develop 4 different classes of fold structure to group the different variants.
    So, what does all this mean? Well, to put this information in context, let's look at a claim previously made by JC on multiple occasions:

    J C wrote: »
    Please remember that what we observe in living organisms is the equivalent of completely ordered lists all over the place ... and even one or two 'disorders' in a critical list will completely destroy its functionality (and ability to reproduce at all) ... thereby destroying the ability of NS to make selections that work towards functionality.
    In a 100 aa critical sequence if 98 aa are in the 'correct' sequence for a particular functionality, this biomolecule will be just as non-functional as a situation where none of the aa's are in the correct sequence for functionality ... so nature cannot gradually work up to produce functional novel biomolecules.


    When we compare JC's claim with hard biological evidence we see the flaw in JC's argument. Contrary to his claim of "completely ordered lists" we find that there is, in fact, a massive degree of flexibility in the ordering of these lists. Secondly, even though there may be critical sequences (and that's not a given) these sequences constitute a small fraction of the overall sequence of the molecule/organism. Therefore, the idea of tightly specified information which is destroyed by mutation is, on the whole, ridiculous.

    You see, JC, while you're so busy accusing me of being confused you have conflated two entirely different creationist arguments about specificity, one dealing with abiogenesis and one dealing with evolution.
    The abiogenesis specificity argument, first outlined by Henry Morris in "Scientific Creationism" (pgs. 59-69 if you're interested) and also used by people like Dembski and Stephen Meyer (whose version of the argument is quoted below) goes like this:

    "First you need the right bonds between the amino acids. Second, amino acids come in right-handed and left-handed versions, and you've got to get only left-handed ones. Third, the amino acids must line up in a specified sequence like letters in a sentence. Run the odds of those things falling into place on their own and you find that the probabilities of forming a rather short functional protein at random would be one chance in a hundred thousand trillion trillion trillion trillion trillion trillion trillion trillion trillion. That's a 10 with 125 zeroes after it!"

    There are several mistakes of fact in the argument.

    • You don't only have to get left-handed amino acids. There are bacterial species which use right-handed amino acids.
    • The necessity of left-handed amino acids is presented as a problem. It isn't. The overwhelming majority of amino acids, even those formed abiotically, are left-handed. This is due to a circular polarisation of UV light in the early solar system.
    • The amino acids don't have to line up in a specified sequence. As we've seen above, the amino acids don't even have to be the same, for the most part, nor do there have to be the same number of them. This is due to the function of a protein being related to its structure and the flexibility of said structure compared to its aa sequence.
    • One minor point but one hundred thousand trillion trillion trillion trillion trillion trillion trillion trillion trillion is actually 10 with 113 zeroes after it not 125 zeroes. Even on his own ridiculous probability estimate he manages to be off by a trillion. Maybe he should proofread his arguments in future.
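    The zero-count in that last bullet is easy to verify: one hundred thousand is 10^5 and each "trillion" contributes 10^12, and there are nine of them:

    ```python
    # 100,000 x (10^12)^9 = 10^(5 + 9*12) = 10^113
    value = 10 ** 5 * (10 ** 12) ** 9

    zero_count = len(str(value)) - 1  # digits after the leading 1
    print(zero_count)
    ```

    which gives 113 zeroes, not the 125 claimed in the quote.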

    The second argument deals with mutation of a pre-existing organism or sequence. This argument is outlined by Henry Morris, again in "Scientific Creationism" (pgs. 55-57). JC has also outlined this claim before in several places:
    J C wrote: »
    ... and the observed scientific fact is that mutagenesis degrades genetic information ... and the argument that such degradation is a plausible mechanism to provide the increase in genetic information objectively required to 'evolve' from pondkind to mankind is never going to win ... as it is arguing that an observed impossibility occurred.
    J C wrote: »
    As for longevity being linked to 'mutational load' ... this is a logical inference from our current experience, whereby deleterious mutations (and they are almost all deleterious) shorten the lives of their hosts.
    J C wrote: »
    I'm not married to a cousin and I don't believe in the power of mutagenesis to do anything except degrade and destroy genetic information ... apart from that, I thank you for your comparison of myself to Darwin!! :D :)

    The invocation of specificity when it comes to evolution has been the argument that the genetic information in organisms is so "tightly specified" that it is very fragile. Even one or two mutations will destroy this specified information and since most mutations (according to JC) are deleterious, this is what we find to be the case.

    Except that it's not what we find. At all.

    The real answer is that most mutations do, well, nothing. Most mutations are neutral. At the moment, each human is born with about 175 mutations compared to their parents' genomes. Of these only 3 result in any change in functionality. Of mutations which do result in functional change (1.7%), nine out of 10 will be deleterious and 1 will be beneficial. This isn't a human specific result, of course; the same result has been found in bacteria.
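    As a quick check on the figures just quoted (about 175 new mutations per person, roughly 3 with any functional effect, and a 9:1 deleterious-to-beneficial split among those):

    ```python
    mutations_per_person = 175
    functional = 3

    # Fraction of mutations with any functional effect
    functional_fraction = functional / mutations_per_person
    print(f"{functional_fraction:.1%}")  # matches the ~1.7% quoted

    # Of the functional ones, ~9 in 10 deleterious, ~1 in 10 beneficial
    beneficial = functional * 0.1
    deleterious = functional * 0.9
    print(beneficial, deleterious)
    ```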
    The key implication of complex functional specified information is that you can't go around messing with it. If you change even one or two nucleotides then you'll break everything. Something will either stop working or not work as well as it should. Except that we find time and again situations where a mutation produces something even better. Like a resistance to AIDS or a resistance to heart disease or increased bone strength or resistance to malaria or the acquisition of a new ability to digest a new food source. To quote Steve Rogers, I could do this all day but I think you get my point.
    If creationists were right about specificity then we wouldn't see just 3 functional mutations out of 175 and we wouldn't see beneficial mutations. If the Christian creationist narrative were correct, then we would only see degradation from an original state of perfection. But we don't. What we see is a case of regression to the mean. Thanks to the Red Queen effect, there are times when a powerful beneficial mutation will arise but then will get reset as it spreads through the population. There have been numerous examples of this throughout history.

    For example, the sickle-cell trait (mentioned above) is a mutation which alters the hemoglobin beta gene. The heterozygous (i.e. one copy of the mutation) form of the trait results in a high degree of resistance against malaria. However, people with a homozygous form (i.e. two copies of the mutation) suffer from sickle-cell disease. This is the Red Queen effect, a series of advantages and resets or setbacks and recoveries. It is named for the Red Queen in Through the Looking-Glass, who never gets anywhere because no matter how fast she moves the scenery moves right along with her. For those interested there is a terrific pop-sci book on the subject by Matt Ridley.

    J C wrote: »
    I would also point out that, unlike me, you haven't actually addressed any of my points in my last post ... and instead you have just posted statements addressing issues of your own choosing ... with no quotes from any points that I have previously made.

    If you wish to invalidate my points, you need to address them ... by quoting what I have said and pointing out where and why you believe that I am in error.

    You're right I didn't. Instead I directed my efforts at the core of your arguments rather than chasing you down more rabbit holes. I short-circuited the need to deal with your spurious points and instead refuted the central premise of your argument.


  • Registered Users Posts: 7,643 ✭✭✭Doctor Jimbob


    J C wrote: »
    As two hearts would grossly interfere with each other's operation, and if one was suffering from cholesterol plaques, the other would be equally affected ... and if the blood was tending to clot this could produce two heart attacks, instead of one ... thereby adding to the pain ... without any gain.:)

    Not if they were intelligently designed


  • Registered Users Posts: 14,615 ✭✭✭✭J C


    robindch wrote: »
    Rubbish.

    There are plenty of organisms out there with more than one heart - octopuses and squid have three hearts. And if memory serves, earthworms have more than three.
    None of these organisms have the type of circulatory system that one-hearted mammals have.
    ... you're comparing a Ferrari with a wheelbarrow, Robin !!!:)


  • Registered Users Posts: 14,615 ✭✭✭✭J C


    Mick_1970 wrote: »
    Ah, now it all makes sense, your designer has thought of everything in his perfect design.
    Shame he wasn't intelligent enough to provide a separate channel for us to breathe through,
    Efficiency and elegance of design.
    Mick_1970 wrote: »
    a third eye to see behind us,
    We see enough with the two eyes we already have ... and we have a neck and body that can turn in a split second giving the benefit of binocular vision in any direction we desire.
    Mick_1970 wrote: »
    stronger legs/back to support our upright stance amongst a host of other blatantly obvious improvements.
    A healthy back and legs are more than adequate for bipedalism.
    Mick_1970 wrote: »
    Even a trainee universe designer would have considered this, while he was screwing around with an appendix.
    The appendix quietly does its work protecting infants from gastroenteritis ... and in most adults, it never gives any bother.


  • Registered Users Posts: 14,615 ✭✭✭✭J C


    AllForIt wrote: »
    I don't know why God would allow eye diseases like Glaucoma and then design fail-safe mechanisms just to counteract the failure in his design in the first place. Why not just design the eye to be perfect in the first place? If God is responsible for all this then it is true that he certainly does work in mysterious ways.
    Human Beings were designed perfect ... and we remain 99.999999% perfect still ... but due to the Fall, imperfections crept in ... and that 0.000001% of imperfection ... will sicken us ... and it will eventually kill us all.


  • Technology & Internet Moderators Posts: 28,804 Mod ✭✭✭✭oscarBravo


    J C wrote: »
    Efficiency and elegance of design.

    Like the efficient and elegant design of the giraffe's laryngeal nerve?


  • Moderators, Society & Culture Moderators Posts: 24,417 Mod ✭✭✭✭robindch


    J C wrote: »
    None of these organisms have the type of circulatory system that one-hearted mammals have.
    Yes, because they have more than one heart.

    Your original claim was that multiple hearts were a bad idea as they could be "possibly working against each other". I've shown that evolution reckons they're just fine to the extent that many organisms have many hearts.

    Your original point has therefore been refuted.


  • Moderators, Society & Culture Moderators Posts: 24,417 Mod ✭✭✭✭robindch


    oscarBravo wrote: »
    Like the efficient and elegant design of the giraffe's laryngeal nerve?
    Or, in a human, the silly design of the vertebral vein which passes through the vertebrae, and the cervical artery which passes close by, which means that sharp movements of the neck can cause either a tear, possibly resulting in stroke, or sometimes a rupture causing a relatively instant death.

    Intelligent design? My ass!

    http://www.webmd.com/stroke/news/20140807/could-chiropractic-manipulation-of-your-neck-trigger-a-stroke#1



  • Registered Users Posts: 14,615 ✭✭✭✭J C


    oldrnwisr wrote: »
    Yes, I know it's not 21000. But I thought even you would be able to recognise a typo when you saw one. If we're having a conversation about a coin toss which is a 1 in 2 chance, and we're talking about 1000 such tosses then the logical inference is that the chance for the sequence is 2^1000. I originally tried to put the exponents in superscript but it wouldn't appear correctly so I had to go back and use a caret instead. Guess I missed one. However, since 2^1000 = 1.07x10^301, I'm not sure what this minor display of pedantry adds to your argument.
    It's vastly beyond the Universal Probability Bound (UPB) ... whereas 21000 most definitely is not.
    I see now that it was a typo ... and I apologise for not seeing this.
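    For anyone who wants to sanity-check the magnitudes being traded here, a minimal Python sketch (the coin-toss figure comes from the posts above; the 1-in-10^150 figure is Dembski's usual statement of the UPB):

    ```python
    from math import log10

    # One specific 1000-toss coin sequence has probability 1 in 2^1000.
    sequences = 2 ** 1000
    print(f"2^1000 is roughly 10^{log10(sequences):.0f}")  # roughly 10^301

    # Dembski's Universal Probability Bound is usually quoted as 1 in 10^150,
    # so 2^1000 is far beyond it -- while the literal number 21000 is not.
    print(log10(sequences) > 150, log10(21000) > 150)  # True False
    ```

    This confirms both halves of the exchange: 1.07x10^301 dwarfs the bound, and the typo'd 21000 falls hopelessly short of it.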

    oldrnwisr wrote: »
    OK, either you didn't read the point that I (and Dembski) made or you failed to understand it.
    To clarify, physical complexity (i.e. basic probability of an event) doesn't actually inform us very much. The odds of tossing 1000 heads is 1 in 2^1000. But then the probability of any 1000 toss sequence is the same. Therefore physical complexity is a poor metric for assessing design. Dembski's complex specified information idea depends on matching the probability of an event to a given search space. Therefore, calculating the algorithmic specified complexity (i.e. the efficiency of description/transmission of the physical complexity) is a necessary step.
    In the context of the coin toss sequence, the probabilities (or physical complexities) of the following two sequences are both the same:

    HHHHHHHHHH

    HTHHTTHTHT

    So, the probability of the sequence doesn't tell us anything about how deliberate the sequence is. However looking at the sequence most people would recognise the first sequence as deliberate and the second as random. So what we need is a way to quantify this degree of intent. This measure is algorithmic specified complexity which measures the efficiency of description/transmission of the sequence. It is a necessary step in Dembski's explanatory filter.
    I think that he is wrong on this ... because functionality is observed to be related to a specific sequence that is neither produced deterministically nor randomly.
    To illustrate, a word in the English Language is neither a repeat of letters (HHHHHHHHHH) nor a pattern of letters (THTHTHTHTH) ... it is a specific sequence of letters (HAT AND CAT)
    ... and there are similar specific sequences of base pair 'letters' within the language of DNA.
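    The ordered/patterned/random distinction being argued over can be made concrete with compressed length, a common rough stand-in for algorithmic complexity (this is an illustrative proxy only, not Dembski's actual algorithmic specified complexity measure):

    ```python
    import random
    import zlib

    random.seed(1)
    ordered = "H" * 1000                                            # HHHH... (repetitive)
    patterned = "HT" * 500                                          # HTHT... (deterministic)
    scattered = "".join(random.choice("HT") for _ in range(1000))   # random-looking

    # Kolmogorov complexity is uncomputable, but compressed length is a
    # common rough proxy: a sequence with a short description compresses well.
    for name, seq in (("ordered", ordered), ("patterned", patterned), ("scattered", scattered)):
        print(name, len(zlib.compress(seq.encode())))
    ```

    The ordered and patterned sequences compress to a handful of bytes while the random-looking one barely compresses at all, even though all three are equally improbable as exact 1000-toss outcomes.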

    oldrnwisr wrote: »
    Yes, we heard you the first time.
    OK ... so are you agreeing with me on this, then?
    oldrnwisr wrote: »
    Well, yes in the coin tossing sequence we're talking about relatively small numbers just 10^6 actors and 10^7 attempts. However, in the amino acid sequence the numbers get much bigger. So in the case of amino acids, even if we took chance into account, we're talking about the number of actors being 10^50 and the number of attempts being 10^30. So even if we were talking about modern protein sequences, we are talking about a situation which is only slightly improbable. If we are talking about shorter sequences of the kind found in abiogenesis experiments then the probability falls dramatically (i.e. 10^40 or less).
    ... but we are talking about Human Beings ... and we're effectively stuck back somewhere around short-chain biochemicals, using random processes, before we reach the UPB.
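    Taking the figures quoted in the post at face value (10^50 actors, 10^30 attempts each, both assumptions from the discussion rather than measured values), a quick sketch shows how sequence length drives the result both posters are pointing at:

    ```python
    from math import log10

    ACTORS = 1e50       # interacting molecules (figure quoted in the post)
    ATTEMPTS = 1e30     # trials per actor (figure quoted in the post)
    log_trials = log10(ACTORS) + log10(ATTEMPTS)   # 80

    # Chance of assembling one exact chain of `length` residues from the
    # 20 standard amino acids is 20^-length; expected hits = trials x chance.
    for length in (10, 30, 60, 100):
        log_hits = log_trials - length * log10(20)
        print(f"length {length:>3}: expected hits ~ 10^{log_hits:.0f}")
    ```

    Short chains of the kind seen in abiogenesis experiments come out as overwhelmingly likely, while one exact 100-residue modern sequence does not, which is precisely where the two sides of this exchange part company.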

    oldrnwisr wrote: »
    OK, the set of instructions as you call it is the mRNA.
    ... and the 'set of instructions' were your words, that I borrowed to make my point that deterministic or random natural/spontaneous processes are incapable of producing these instructions ... and the only known system for doing so is intelligent action - so the best available explanation for how these tightly specified instructions came about originally, was through the application of intelligence.

    oldrnwisr wrote: »
    Also I don't know why you're being so vague by using the term cytoplasm.

    300px-Animal_Cell.svg.png

    The cytoplasm, for those who didn't study biology is everything inside a cell membrane, excluding the nucleus (1,2 on the diagram above). Protein synthesis occurs in the ribosome (3 on the diagram, the tiny dots you can see in the image). The ribosome acts like an assembly line, stitching together amino acids to create a protein. The instructions come from the messenger RNA. As I've shown in my previous post, the development of RNA from abiotic chemical compounds can be demonstrated theoretically and experimentally. There's no need or place for intelligence in the mechanism.
    Yes, RNA could be produced artificially, using very significant inputs of intelligent design ... but it would be very difficult if not impossible to produce spontaneously ... and the instructions carried on it can only be produced by intelligence.

    oldrnwisr wrote: »
    Speculative? Hardly. Like I said there are only a few basic assumptions in my process and each of them are supported by the data. The hydrothermal activity and implications for chemosynthesis are well discussed in the literature. You can read a very nice summary article here.
    As for the other assumptions regarding the chemical composition of the early Earth, you are more than welcome to refute any or all of the papers where these assumptions come from:
    ... all still very speculative ... but they aren't the main issue ... even granting, for the sake of argument, that they could be produced ... there is no theory, no matter how speculative, on how the vast quantities of high quality tightly specified information was infused onto the DNA and, in turn, RNA molecules ... and the information in the Human Genome, if printed out on standard sheets of paper, would result in a stack of paper with a height greater than the Statue of Liberty.

    http://bio4.us/biotrends/human_genome_height.html
    The Windows 10 programme on a CD pales in comparison with the Human Genome.
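    The paper-stack comparison is easy to check with back-of-the-envelope arithmetic (the characters-per-page and sheet-thickness figures below are assumptions, not from the linked page):

    ```python
    GENOME_BASES = 3.2e9      # approximate haploid human genome, one letter per base
    CHARS_PER_PAGE = 3000     # ~60 lines x 50 characters -- an assumption
    SHEET_MM = 0.1            # typical 80 gsm sheet thickness -- an assumption

    pages = GENOME_BASES / CHARS_PER_PAGE
    stack_m = pages * SHEET_MM / 1000
    print(f"~{pages:,.0f} pages, stack ~{stack_m:.0f} m")
    print(stack_m > 93)       # Statue of Liberty, ground to torch, is ~93 m
    ```

    Around a million pages and a stack on the order of 100 m, so under these assumptions the claim does clear the 93 m of the Statue of Liberty, pedestal included.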

    oldrnwisr wrote: »
    OK, I think this is as good a point as any to clear up some obvious misconceptions you have about redundancy and specificity.

    Firstly, redundancy. The first thing to clarify is that when we're talking about redundancy here we're not talking about functional redundancy (i.e. backup mechanisms like having two eyes as you mention in your response to spacecoyote). We're talking about structural redundancy (i.e. that a given protein can do the same job even if you make massive changes to its amino acid sequence). I'm not sure you've quite grasped the nature of protein structural redundancy and the implications for the idea that genetic information is composed of "tightly specified functional information" to borrow a phrase of yours. You see, the structure of a protein is related to its amino acid sequence, but only loosely so and the function of a protein is related to its structure, but again, only loosely so. Let's take one of our previous examples, cytochrome c.
    Cytochrome C is a protein found in many species which is about 100 amino acids long and is the kind of sequence that many creationists talk about. There are several points of interest about cytochrome C:

    1. Its length varies between 103 and 112 amino acids (most higher order organisms have 104aa versions).
    2. In over 30 species with 104aa versions only 34 amino acids are conserved (i.e. do not change from species to species).
    3. The general structure of the protein follows a CXXCH (cysteine-any-any-cysteine-histidine) pattern. However, the variation in the overall sequence is so great that biologists have had to develop 4 different classes of fold structure to group the different variants.
    So, what does all this mean? Well, to put this information in context, let's look at a claim previously made by JC on multiple occasions:

    When we compare JC's claim with hard biological evidence we see the flaw in JC's argument. Contrary to his claim of "completely ordered lists" we find that there is, in fact, a massive degree of flexibility in the ordering of these lists. Secondly, even though there may be critical sequences (and that's not a given) these sequences constitute a small fraction of the overall sequence of the molecule/organism. Therefore, the idea of tightly specified information which is destroyed by mutation is, on the whole, ridiculous.
    It isn't at all ridiculous ... all specified functional information degrades as random changes are made to it ... and random changes are never observed to improve it ... because the useless combinatorial space is almost infinitely greater than the useful combinatorial space.
    ... and that is why mutagenesis will rapidly kill you by degrading even a tiny number of critical DNA sequences.
    oldrnwisr wrote: »
    You see, JC, while you're so busy accusing me of being confused you have conflated two entirely different creationist arguments about specificity, one dealing with abiogenesis and one dealing with evolution.
    The abiogenesis specificity argument, first outlined by Henry Morris in "Scientific Creationism" (pgs. 59-69 if you're interested) and also used by people like Dembski and Stephen Meyer (whose version of the argument is quoted below) goes like this:

    "First you need the right bonds between the amino acids. Second, amino acids come in right-handed and left-handed versions, and you've got to get only left-handed ones. Third, the amino acids must line up in a specified sequence like letters in a sentence. Run the odds of those things falling into place on their own and you find that the probabilities of forming a rather short functional protein at random would be one chance in a hundred thousand trillion trillion trillion trillion trillion trillion trillion trillion trillion. That's a 10 with 125 zeroes after it!"

    There are several mistakes of fact in the argument.

    • You don't only have to get left-handed amino acids. There are bacterial species which use right-handed amino acids.
    • The necessity of left-handed amino acids is presented as a problem. It isn't. The overwhelming majority of amino acids, even those formed abiotically, are left-handed. This is due to a circular polarisation of UV light in the early solar system.
    • The amino acids, don't have to line up in a specified sequence. As we've seen above, the amino acids don't even have to be the same, for the most part, nor do there have to be the same number of them. This is due to the function of a protein being related to its structure and the flexibility of said structure compared to its aa sequence.
    ... and yet mutagenesis, which alters the genetic sequence ... and therefore the aa sequence ... causes disease and death.
    Yes, there is a degree of 'plasticity' built into the sequences that produce proteins ... to cope with environmental assaults ... but these are limited ... and when mutagenesis occurs the results can be devastating.
    oldrnwisr wrote: »
    The second argument deals with mutation of a pre-existing organism or sequence. This argument is outlined by Henry Morris, again in "Scientific Creationism" (pgs. 55-57). JC has also outlined this claim before in several places:

    The invocation of specificity when it comes to evolution has been the argument that the genetic information in organisms is so "tightly specified" that it is very fragile. Even one or two mutations will destroy this specified information and since most mutations (according to JC) are deleterious, this is what we find to be the case.

    Except that it's not what we find. At all.

    The real answer is that most mutations do, well, nothing. Most mutations are neutral. At the moment, each human is born with about 175 mutations compared to their parents' genomes. Of these only 3 result in any change in functionality. Of mutations which do result in functional change (1.7%), nine out of 10 will be deleterious and 1 will be beneficial. This isn't a human specific result, of course,
    ... 9 out of 10 deleterious and 1 out of 10 beneficial ... some estimates put the ratio at thousands to one against beneficial ... but if, for the sake of argument, I accept this 10 to 1 figure ... it implies that you would have runaway deleterious effects the further along the mutagenic curve you go ... which would certainly kill you ... and the one in 10 'beneficial' mutations wouldn't save you.
    Equally, we are only talking about a tiny number of mutations ... before you die ... as few as 3 driver gene mutations are required to cause lung and liver cancer.
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4291633/
    ... and that is why mutagenesis is an agent of degradation and death ... and is not any kind of plausible candidate for improvement and life.
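    The per-generation figures being argued over reduce to very simple arithmetic (all numbers below are the ones quoted in the posts, taken at face value):

    ```python
    # Figures quoted in the post, taken at face value:
    PER_GENERATION = 175          # new mutations per person per generation
    FUNCTIONAL = 3                # mutations with any functional effect at all

    neutral = PER_GENERATION - FUNCTIONAL
    deleterious = FUNCTIONAL * 0.9    # 9 out of 10 functional changes
    beneficial = FUNCTIONAL * 0.1     # 1 out of 10 functional changes

    print(f"functional fraction: {FUNCTIONAL / PER_GENERATION:.1%}")
    print(neutral, round(deleterious, 1), round(beneficial, 1))
    ```

    So on these numbers roughly 172 of 175 mutations are neutral, the functional fraction is the 1.7% quoted above, and the deleterious-to-beneficial split is fractions of a mutation per generation rather than a runaway cascade.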

    oldrnwisr wrote: »
    the same result has been found in bacteria.
    The key implication of complex functional specified information is that you can't go around messing with it. If you change even one or two nucleotides then you'll break everything. Something will either stop working or not work as well as it should. Except that we find time and again situations where a mutation produces something even better. Like a resistance to AIDS or a resistance to heart disease or increased bone strength or resistance to malaria or the acquisition of a new ability to digest a new food source. To quote Steve Rogers, I could do this all day but I think you get my point.
    If creationists were right about specificity then we wouldn't see just 3 functional mutations out of 175 and we wouldn't see beneficial mutations. If the Christian creationist narrative were correct, then we would only see degradation from an original state of perfection. But we don't. What we see is a case of regression to the mean.
    What we see with mutagenesis, is rapid degeneration and death ... and this would be even more rapid, were it not for the furious work of auto-repair mechanisms within living cells ... that are themselves tightly specified ... and therefore placed there originally by intelligence.
    oldrnwisr wrote: »
    Thanks to the Red Queen effect, there are times when a powerful beneficial mutation will arise but then will get reset as it spreads through the population. There have been numerous examples of this throughout history. For example, the sickle-cell trait (mentioned above) is a mutation which alters the hemoglobin beta gene. The heterozygous (i.e. one copy of the mutation) form of the trait results in a high degree of resistance against malaria. However, people with a homozygous form (i.e. two copies of the mutation) suffer from sickle-cell disease. This is the Red Queen effect, a series of advantages and resets or setbacks and recoveries. It is named for the Red Queen in Through the Looking-Glass who never gets anywhere because no matter how fast she moves the scenery moves right along with her. For those interested there is a terrific pop-sci book on the subject by Matt Ridley.
    ... yes, the Red Queen is a children's fable ... just like evolution, alright.:)
    ... in the case of Sickle Cell Anaemia ... something that would kill you in its homozygous manifestation makes you sick in its heterozygous manifestation and protects you from Malaria ... which sounds like the 'cure' is only marginally better than the disease.:eek:
    It certainly doesn't go anywhere in explaining how almost perfect Human Beings came to be Human Beings.
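    The sickle-cell situation being debated is the textbook overdominance (heterozygote advantage) model from population genetics; a minimal sketch, with illustrative selection coefficients rather than measured ones:

    ```python
    # Standard overdominance model -- a textbook sketch, not from the thread.
    # Fitnesses: AA = 1 - s (malaria mortality), AS = 1 (protected),
    #            SS = 1 - t (sickle-cell disease).
    s, t = 0.15, 0.85     # assumed selection coefficients, for illustration only

    # Classic result: the sickle allele settles at a stable equilibrium
    # frequency q* = s / (s + t) rather than sweeping or vanishing.
    q_star = s / (s + t)
    print(f"sickle allele settles near {q_star:.0%} of the gene pool")
    ```

    This is why the allele persists at a modest frequency where malaria is endemic: selection against homozygotes balances the heterozygote's protection, which is the "reset" behaviour the Red Queen point is describing.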
    oldrnwisr wrote: »
    You're right I didn't. Instead I directed my efforts at the core of your arguments rather than chasing you down more rabbit holes. I short-circuited the need to deal with your spurious points and instead refuted the central premise of your argument.
    You actually posted standard (evolutionist) textbook stuff that was tangential to the points that I had made.
    I'd like to thank you for quoting and addressing some of my points in this post.


  • Registered Users Posts: 14,615 ✭✭✭✭J C


    oscarBravo wrote: »
    Like the efficient and elegant design of the giraffe's laryngeal nerve?
    It works quite well actually ... elegance can be extravagant ... and the giraffe's laryngeal nerve is nothing, if not extravagant.:)


  • Registered Users Posts: 14,615 ✭✭✭✭J C


    robindch wrote: »
    Or, in a human, the silly design of the vertebral vein which passes through the vertebrae, and the cervical artery which passes close by, which means that sharp movements of the neck can cause either a tear, possibly resulting in stroke, or sometimes a rupture causing a relatively instant death.

    Intelligent design? My ass!

    http://www.webmd.com/stroke/news/20140807/could-chiropractic-manipulation-of-your-neck-trigger-a-stroke#1

    You remind me of a guy who has a Ferrari ... who is complaining about the fact that it has mirrors that could cut and kill him ... if they were to break in some kind of freak accident.
    ... go enjoy your Ferrari-like body Robin, ... and stop worrying about things that will never happen.:)
    robindch wrote: »
    Intelligent design? My ass!
    ... it was intelligently designed too ... and is quite a multi-tasker !!:D


  • Technology & Internet Moderators Posts: 28,804 Mod ✭✭✭✭oscarBravo


    J C wrote: »
    It works quite well actually ... elegance can be extravagant ... and the giraffe's laryngeal nerve is nothing, if not extravagant.:)
    It works, sure. If it was designed, it was badly designed. Calling bad design "extravagance" is quite a reach.


  • Registered Users Posts: 25,236 ✭✭✭✭King Mob


    J C wrote: »
    You remind me of a guy who has a Ferrari ... who is complaining about the fact that it has mirrors that could cut and kill him ... if they were to break in some kind of freak accident.
    ... go enjoy your Ferrari-like body Robin, ... and stop worrying about things that will never happen.:)
    Could you please explain how this flaw came about?
    Did God design it in on purpose? Or was he not able to account for it? Or did he not know it could happen?

    I don't think not worrying about it makes it not exist.
    And you should worry about it since it shows that the human body is not "perfect" as you claim. Nor is it explainable by your bull**** hand waving about "the fall".

    So why does this flaw exist?


  • Registered Users Posts: 14,615 ✭✭✭✭J C


    oscarBravo wrote: »
    It works, sure. If it was designed, it was badly designed. Calling bad design "extravagance" is quite a reach.
    Why do you think it was badly designed? ... it works perfectly well ... it's a bit extravagant, alright ... but extravagance isn't a design fault.

