
Big O Notation

Options
  • 03-11-2008 11:56am
    #1
    Registered Users Posts: 517 ✭✭✭


    Hi,

    Does anyone have a good grasp of big O notation with regard to assessing algorithm cost and running time?

    I am doing this at college at the moment, but I can't get my head around it.

    I know the definition:

    f(n) is O(g(n)) if there exist constants c and n0 such that
    0 <= f(n) <= c*g(n) for all n >= n0

    I can see that for a certain value of n from n0 onwards this will hold true. But what does it tell me?
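    The definition is easy to check numerically for a concrete pair of functions. Here is a minimal Python sketch with a made-up f and g of my own choosing (3n^2 + 5n and n^2 are just illustrative, not from any course): with witnesses c = 4 and n0 = 5, the inequality holds for every n >= n0, and is allowed to fail below n0.

    ```python
    # Hypothetical example functions, chosen purely for illustration.
    def f(n):
        return 3 * n * n + 5 * n   # some running-time function

    def g(n):
        return n * n               # candidate growth rate

    c, n0 = 4, 5                   # witnesses: 3n^2 + 5n <= 4n^2 whenever n >= 5

    # The definition says the inequality must hold for *every* n >= n0.
    assert all(0 <= f(n) <= c * g(n) for n in range(n0, 10_000))

    # It may fail below n0 -- that is exactly what n0 is for.
    assert f(4) > c * g(4)
    ```

    What it tells you: once n is large enough, f never grows faster than a constant multiple of g, so g captures f's growth rate.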

    Also, is it true that Big Omega is the "best case" for an algorithm and Big Theta is "somewhere in between" best case (Omega) and worst case (Big O)?

    Any insight you can pass on would be much appreciated.

    Many thanks.


Comments

  • Closed Accounts Posts: 82 ✭✭cyberbob


    f is in the "order of" g.

    It's *sorta* like saying a certain distance is on the order of inches or miles or lightyears... except you're talking about the order of a function of n.

    Baby example:

    say you have a grid n wide and n tall.
    define f(n) as the time taken to populate the grid
    define g(n) = n * n

    Hence in this case f(n) = c*g(n), c being the time taken to populate each cell.
    i.e. the trivial case of the definition, with n0 = 1 and that same c as the constant.

    In studying algorithms you take n as the length or size of the problem and g(n) as the complexity of the problem. E.g. populating your grid plus colouring in the bottom row would take time c1 * n*n + c2 * n, which is O(n*n), since the n*n term dominates the n term for large n (the constants don't matter).
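    The grid example above can be sketched by counting the basic operations directly (a hypothetical `work` function of my own, not from the thread): filling the grid costs n*n writes and colouring the bottom row costs n more, and the ratio of the total to n*n tends to 1, which is why the n*n term is the one that matters.

    ```python
    def work(n):
        """Populate an n-by-n grid, then colour its bottom row; count operations."""
        ops = 0
        grid = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                grid[i][j] = 1        # populate: n*n cell writes
                ops += 1
        for j in range(n):
            grid[n - 1][j] = 2        # colour bottom row: n more writes
            ops += 1
        return ops

    # Total is n*n + n, and the n*n term dominates as n grows:
    assert work(10) == 110                      # 100 + 10
    assert work(1000) == 1_001_000              # 1,000,000 + 1,000
    assert work(1000) / 1000**2 < 1.01          # ratio to n^2 approaches 1
    ```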

    Dunno if this helps...


  • Registered Users Posts: 517 ✭✭✭lisbon_lions


    Thanks Bob,

    So what we are saying is that we can forget about constants and lower-order terms in a function, as these are minor in the overall context?

    Let's say c1 + 2n + n*n would just be O(n*n), as the n*n term is dominant as n increases to bigger values?

    Like for instance, take f(n) = 90 + n + n*n against g(n) = n*n. With c = 2, the inequality 90 + n + n*n <= 2*n*n holds from n = 10 onwards. So in this example n0 would be 10, and from that point onward it would always hold true?
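    That n0 = 10 claim can be verified directly. A quick Python check, assuming the witnesses c = 2 and n0 = 10 from the example above: the inequality 90 + n + n*n <= 2*n*n rearranges to n*n - n - 90 >= 0, which first holds at n = 10, and fails at n = 9.

    ```python
    f = lambda n: 90 + n + n * n   # the example function
    g = lambda n: n * n            # claimed growth rate
    c, n0 = 2, 10                  # witnesses to check

    # Holds for every n >= n0 (spot-checked over a large range)...
    assert all(f(n) <= c * g(n) for n in range(n0, 100_000))

    # ...and fails just below n0, so n0 = 10 is tight for this c.
    assert f(9) > c * g(9)
    ```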


  • Closed Accounts Posts: 82 ✭✭cyberbob


    In my experience of studying complexity (which is a while back now), you're never really concerned with relatively small n, so for brevity you'd just talk about your example being O(n squared).

    I suppose if you take away that relaxation, what you're saying sounds right... I'd never have given it that much thought.

