
Muppet newbie C declaration question

  • 28-11-2003 12:46pm
    #1
    Registered Users Posts: 2,320 ✭✭✭


    How do you declare boolean values in C? We haven't covered it in our notes and it'd be useful for an advanced bubble sort.
    void sort_advbub (int array[])
    {
        int n, index, temp, changes;

        compare = 0;
        swaps = 0;

        do
        {
            n = SIZE;
            changes = 0;
            n--;
            compare++;

            for (index = 0; index < n; index++)
            {
                if (array[index] > array[index+1])
                {
                    temp = array[index];
                    array[index] = array[index+1];
                    array[index+1] = temp;
                    swaps++;
                    changes = 1;
                }
            }
        } while (changes != 0);
    }
    

    where compare and swaps are global integers used to count the number of passes through the array and how many times elements are swapped. I'd rather use a boolean value to control the loop than the current integer flag. (PS: if the code looks funky that's because I started with PASCAL!)

    cheers in advance

    daz


Comments

  • Registered Users Posts: 19,396 ✭✭✭✭Karoma



    #define TRUE 1
    #define FALSE 0

    typedef int boolean;

    int main(void)
    {
        boolean c;

        /* then */
        c = FALSE;
        /* or */
        c = TRUE;

        /* blah blah blah */

        return 0;
    }


    ?


  • Registered Users Posts: 79 ✭✭tendofan


    I'm not fond of using

    #define TRUE 1
    #define FALSE 0

    or in fact, #defines in general.

    I prefer

    typedef enum
    {
        FALSE = 0,
        TRUE = 1
    } boolean;

    or

    const unsigned char TRUE = 0x01;
    const unsigned char FALSE = 0x00;
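
    For example, a minimal sketch of the enum version in use (the flag name done is just for illustration, not from the thread):

    #include <stdio.h>

    typedef enum
    {
        FALSE = 0,
        TRUE = 1
    } boolean;

    int main(void)
    {
        boolean done = FALSE;   /* loop-control flag */

        while (!done)
        {
            /* ... one pass of the sort would go here ... */
            done = TRUE;        /* nothing changed on this pass, so stop */
        }

        printf("done\n");
        return 0;
    }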

    Tendofan.


  • Closed Accounts Posts: 9,314 ✭✭✭Talliesin


    #include <stdbool.h>
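
    For example, a minimal C99 sketch of the loop control with bool (SIZE and the global counters are assumed to be as in your original; the counters are omitted here for brevity):

    #include <stdbool.h>

    void sort_advbub (int array[])
    {
        int n, index, temp;
        bool changed;   /* true while the last pass swapped something */

        do
        {
            n = SIZE - 1;
            changed = false;

            for (index = 0; index < n; index++)
            {
                if (array[index] > array[index+1])
                {
                    temp = array[index];
                    array[index] = array[index+1];
                    array[index+1] = temp;
                    changed = true;
                }
            }
        } while (changed);
    }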


  • Registered Users Posts: 19,396 ✭✭✭✭Karoma


    oh yeah! take the easy way out... typical :rolleyes:


    :D


  • Registered Users Posts: 79 ✭✭tendofan


    Which is nice if you're using something other than VS.NET 2003 (whose C compiler doesn't support C99's <stdbool.h>).

    Tendofan.


  • Registered Users Posts: 2,320 ✭✭✭Q_Ball


    I've sorted the boolean issue but is there any reason why I'm getting 9 compares and 45 swaps in a worst case of 10...1?


  • Registered Users Posts: 2,281 ✭✭✭DeadBankClerk


    FYI:

    #include <stdbool.h>

    essentially just gives you:

    #define bool  _Bool
    #define true  1
    #define false 0

    _Bool is a standard C99 type.


  • Closed Accounts Posts: 9,314 ✭✭✭Talliesin


    Originally posted by Q_Ball
    I've sorted the boolean issue but is there any reason why I'm getting 9 compares and 45 swaps in a worst case of 10...1?

    Because it's a bubble sort?

    Bubble sort is still a poor algorithm (arguably the canonical poor algorithm - see <http://catb.org/~esr/jargon/html/B/bubble-sort.html>) which operates in quadratic time ("O(n²)" in big-O notation); in the worst case it will require n-1 passes (which is what your compare counter is counting) and n * (n-1) / 2 swaps.

    Your "changes" test will improve best-case performance to the point of being in linear time, but it won't affect the worst case at all.


  • Registered Users Posts: 2,320 ✭✭✭Q_Ball


    So that'll make it 9 passes and (10*9)/2 swaps => 45, which is right! Ha ha, cool. That quadratic O time thing isn't being explained to us. Thanks for the ref site and thanks to everyone who helped :)


  • Closed Accounts Posts: 9,314 ✭✭✭Talliesin


    Originally posted by Q_Ball
    That quadratic O time thing isn't being explained to us

    Warning: There are probably a few mathematical imprecisions in what I'm going to say, but the following should be good enough in practical use.

    It's a way to talk about the efficiency of an algorithm or operation.

    Using ^ to indicate that the next character is superscript (so n^c means "n to the power of c"), examples are:

    O(1) "constant"
    O(log(n)) "logarithmic"
    O((log(n))^c) "polylogarithmic"
    O(n) "linear"
    O(n log(n)) "linearithmic"
    O(n^2) "quadratic"
    O(n^c) "polynomial"
    O(c^n) "exponential"
    O(n!) "factorial"

    An example of an operation that operates in constant time, or O(1) is examining a given item in an array. While in practice it may be that executing temp = array[23] could be faster than executing temp = array[152] for a variety of reasons, for the most part these factors aren't worth worrying about (and are often unpredictable). For all intents and purposes temp = array[23] will take as long as temp = array[152].

    An example of an operation that operates in logarithmic time is a binary search on a sorted array. While this takes longer to do on a large array than on a small array, the time doesn't grow as fast as the array (you need at most 3 passes on an array of 8 elements, at most 4 on an array of 16 and so on).
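
    A minimal sketch of a binary search on a sorted int array (illustrative only, not from the thread):

    /* Returns the index of target in a[0..len-1], or -1 if it isn't there.
       Each step halves the range that's left, hence O(log n) comparisons. */
    int binary_search (const int a[], int len, int target)
    {
        int low = 0, high = len - 1;

        while (low <= high)
        {
            int mid = low + (high - low) / 2;

            if (a[mid] == target)
                return mid;
            else if (a[mid] < target)
                low = mid + 1;
            else
                high = mid - 1;
        }
        return -1;   /* not found */
    }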

    An example of an operation that operates in linear time is examining a given item in a list: finding the 26th item takes roughly 26 times as long as finding the first item.
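
    A minimal sketch of the linked-list case (hypothetical node type, not from the thread):

    struct node
    {
        int value;
        struct node *next;
    };

    /* Walking to the k-th node follows k links, so the cost grows linearly with k. */
    struct node *nth (struct node *head, int k)
    {
        while (head != NULL && k-- > 0)
            head = head->next;
        return head;   /* NULL if the list is shorter than k */
    }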

    An example of an operation in linearithmic time is a merge sort.

    As we've seen bubble sort is an example of an operation in quadratic time.

    An amusing example of an operation which operates in factorial time in the average case is bogosort. (Note that in real life, with real pseudorandom generators, it is often possible to feed a bogosort an input where never finishing isn't just the theoretical worst case but is actually guaranteed.)

    It can also be worth considering cases where there are occasional heavy penalties on top of the normal cost. For example if you have a vector implemented as an array with some "growing room" then adding an element to the end will be a constant time operation if there is growing room left, but a very heavy linear time operation if more memory has to be obtained. This is called "amortised constant time". I've seen it marked as O(1)+, but I don't know if that's a common convention.
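
    A minimal sketch of the "growing room" idea (the struct and function names are just for illustration):

    #include <stdlib.h>

    typedef struct
    {
        int    *data;
        size_t  size;       /* elements in use */
        size_t  capacity;   /* room allocated */
    } int_vector;

    /* Amortised O(1): usually just a store, occasionally an O(n) regrow. */
    int int_vector_push (int_vector *v, int value)
    {
        if (v->size == v->capacity)
        {
            size_t new_cap = v->capacity ? v->capacity * 2 : 8;
            int *p = realloc(v->data, new_cap * sizeof *p);
            if (p == NULL)
                return -1;   /* out of memory, element not added */
            v->data = p;
            v->capacity = new_cap;
        }
        v->data[v->size++] = value;
        return 0;
    }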

    Anyway, all of these things can be used when we're deciding which algorithms to use, and what sort of data structures. For example in deciding whether to use a list or a vector we would consider whether we will often be inserting an element after a given element (O(1) for lists, O(n)+ for vectors) or whether we will often be trying to access the element a given offset into the structure (O(1) for vectors, O(n) for lists).

    This is particularly useful in cases where we don't necessarily know all that could be known about the code we call into. For example, the C++ library will be implemented differently in different cases, but there are minimum guarantees in the standard, such as that binary searches on a random-access structure will be at least as good as O(log n) in the worst case (of course, if an implementor finds a way to do better, they are allowed to).

    As for bubble sort: some people think it shouldn't be taught at all. It is useful, though, in that the code is simple enough that you can see what happens. It's also a good starting point for experimentation; improving the best-case performance as you did is a good example. You could also try replacing the swap with an xor-swap and seeing if it makes any difference.
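
    A minimal sketch of the xor-swap version of the inner comparison (illustrative only, using the changes flag from your posted code; note that xor-swapping a value with itself zeroes it, which is safe here only because index and index+1 are always different elements):

    if (array[index] > array[index+1])
    {
        /* swap without a temporary, using three XORs */
        array[index]   ^= array[index+1];
        array[index+1] ^= array[index];
        array[index]   ^= array[index+1];
        swaps++;
        changes = 1;
    }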


  • Registered Users Posts: 1,726 ✭✭✭gerryk


    In some of the examples above, checks on the return values of library and system calls may fail: false is defined as 0, but true is NON-ZERO, not necessarily 1. If you must use #defines like this, do something like the following (a sketch of the pitfall itself follows the defines)...

    #define FALSE 0
    #define TRUE !FALSE
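
    For example, a minimal sketch of the pitfall (isalpha only promises a non-zero result for "true", not exactly 1):

    #include <ctype.h>
    #include <stdio.h>

    #define FALSE 0
    #define TRUE  !FALSE

    int main(void)
    {
        int c = 'x';

        /* Risky: may fail even for a letter, because isalpha can return any non-zero value. */
        if (isalpha(c) == TRUE)
            printf("maybe\n");

        /* Safe: test the truth of the result directly. */
        if (isalpha(c))
            printf("always prints for a letter\n");

        return 0;
    }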

