
Method confusion

  • 01-08-2012 5:00pm
    #1
    Registered Users Posts: 1,268 ✭✭✭


    So I ran an experiment, but my approach to analysing it has been criticised. I am now trying to re-analyse the data, and I am confused as to which methods to use.

    I have 3 groups (a, b, c), where a is the control.
    I asked each participant in each group to complete a test 3 times.
    The first and last tests are completed under the same conditions for all groups, but the environment for the second test changes for b and c (b = distracting environment, c = very distracting environment).

    After each test I take 3 performance-based measures from the application (ratio data) and 7 measures from a subjective questionnaire rated from 0-20 (technically ordinal data, although some treat it as interval).

    I am trying to determine 1) the effect of the environment on performance and on the subjective ratings, and 2) the relationship between the performance and subjective ratings given the environment.

    Can anyone suggest a good method to analyse this data?
    I have the data in SPSS v20, but I'm having difficulty analysing it, and also with comparing parametric and non-parametric data simultaneously...
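
    As a sketch of one common starting point for a design like this (three groups between subjects, three test occasions within subjects): put each measure in long format and fit a mixed-design ANOVA. The column names, group sizes, toy scores, and the use of Python's pingouin package (rather than SPSS) below are illustrative assumptions, not anything from the thread.

```python
# Sketch of a long-format layout and a mixed-design ANOVA for ONE measure.
# Column names, group sizes, and scores are made up for illustration.
import numpy as np
import pandas as pd
import pingouin as pg  # third-party: pip install pingouin

rng = np.random.default_rng(0)
rows = []
for group in ["a", "b", "c"]:              # a = control, b/c = distracting environments
    for subj in range(10):                 # 10 participants per group (assumed)
        for test in [1, 2, 3]:             # the three test occasions
            rows.append({"subject": f"{group}{subj}",
                         "group": group,
                         "test": test,
                         "score": rng.normal(50 + 2 * test, 5)})
df = pd.DataFrame(rows)

# Mixed-design ANOVA: group (between-subjects) x test occasion (within-subjects).
# The group x test interaction is the term that asks whether the environment
# changed scores on the second test relative to the control group.
aov = pg.mixed_anova(data=df, dv="score", within="test",
                     between="group", subject="subject")
print(aov)
```

    In SPSS v20 the broadly equivalent route would be Analyze > General Linear Model > Repeated Measures, with the three test occasions as the within-subjects factor and group as a between-subjects factor.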


Comments

  • Registered Users Posts: 1,268 ✭✭✭deegs


    No one?

    My current thought is a two-way mixed ANOVA, treating the subjective data as parametric, but separately doing a non-parametric method to verify any significance found?

    The original fault identified to me was that I recorded the user data from the first and second tests and subtracted them to determine the difference. (I did this with the second and third, and the first and third, also.)

    I then completed my analysis on the differences. My logic for this was that comparing the cognitive or intellectual similarities between users was pointless (one person scored 50% and the other scored 75%), and I thought comparing individual performance improvements (both improved by 4%) was a clever way to go.

    Apparently not; I was told I should not alter the data before the analysis is done.

    Any thoughts on that?
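
    For reference, here is a minimal sketch of the difference-score approach being described; the file name and the group/test1/test2/test3 column names are assumptions, not from the thread. For two time points, comparing the groups on the test1-to-test2 change scores tests essentially the same contrast as the group-by-time interaction in a mixed ANOVA on the raw scores, which is one reason the interaction framing is often preferred to pre-differencing the data.

```python
# Sketch of the difference-score (gain-score) approach.
# File and column names are assumptions; wide has one row per participant.
import pandas as pd
from scipy import stats

wide = pd.read_csv("scores_wide.csv")               # hypothetical wide-format file
wide["gain_1_2"] = wide["test2"] - wide["test1"]    # change under the manipulated environment
wide["gain_1_3"] = wide["test3"] - wide["test1"]    # first-to-last change

# One-way comparison of the test1 -> test2 change across the three groups.
gains_by_group = [g["gain_1_2"].to_numpy() for _, g in wide.groupby("group")]
print(stats.f_oneway(*gains_by_group))
```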


  • Registered Users Posts: 13,104 ✭✭✭✭djpbarry


    deegs wrote: »
    No one?
    I doubt many people understand the question. For example, I have absolutely no idea what is meant by ratio, ordinal or interval data.

    I'm not trying to be smart here - just trying to offer an explanation as to why you've not received any responses.

    Different scientists in different disciplines speak different languages!


  • Registered Users Posts: 1,268 ✭✭✭deegs


    Thanks, I'm trying to figure it all out myself.

    Regarding data types...
    Probably easier just to link than have me try to explain...
    http://www.graphpad.com/support/faqid/1089/
    http://changingminds.org/explanations/research/measurement/types_data.htm

    I guess the bottom line is that I designed the experiment with a method in mind, but I've been told that method is not appropriate... so now I am trying to determine what method to use, and the experiment seems really complex and confusing when doing this post hoc analysis and method identification... if ya catch my drift....

    For example, I've eliminated one of the groups, and several participants from another group, in the hope of running a simpler method (two groups, so an easier ANOVA or a simple t-test). My effect size and power will suffer, but as this will lead on to further experiments I'm not too worried, as long as I can at least analyse the data...

    Thanks anyway... if it doesn't make sense it might be best to delete the thread altogether. :(


  • Closed Accounts Posts: 11,001 ✭✭✭✭opinion guy


    deegs wrote: »
    My current thought is a two-way mixed ANOVA, treating the subjective data as parametric, but separately doing a non-parametric method to verify any significance found?

    OK, I don't fully understand everything you have said.
    But with regard to the above sentence:

    Why are you doing both parametric and non-parametric testing?

    I think you might be confused here. You shouldn't be doing both but only one or the other.

    You certainly can't use a non-parametric test to confirm a parametric one. It doesn't work like that.


  • Registered Users Posts: 1,268 ✭✭✭deegs


    Thanks opinion guy :)
    I took 2 measurements from the subjects.
    One was performance, which is ratio data, so I can use parametric tests; but the other was a Likert-style questionnaire and therefore ordinal data, so I cannot do a parametric test.
    So... how can you analyse both sets of data when one is parametric and the other non-parametric?
    Most people will just treat the questionnaire data as interval and use parametric tests, but that is not correct.
    One paper I read in particular performed both parametric and non-parametric tests on the questionnaire data, as I suggested above....

    Head is wrecked with this experiment... my next ones are gonna be so much simpler!


  • Closed Accounts Posts: 11,001 ✭✭✭✭opinion guy


    deegs wrote: »
    So... how can you analyse both sets of data when one is parametric and the other non-parametric?

    Hey, well, OK. Let's back up a bit.

    Am I right in thinking that both of these variables are separate outcome measures? i.e. you have made your intervention (i.e. the distracting environment), and these are two different ways of measuring the effect of that intervention?

    If that is the case, then it wouldn't really be right to combine both variables (or sets of variables) in the same analysis.
    Therefore you could do two separate analyses - one for the parametric data and one for the non-parametric data - and thus avoid your parametric vs non-parametric conundrum. You could then present your results in the manner 'for outcome measure A, x was associated with y after parametric analysis using method 1; meanwhile for outcome measure B, after non-parametric analysis... blah blah blah'.

    Does that help?
    It's hard to get a grasp on what you are trying to do from the info you've given us, so my answers may not be 100% correct, but I'm doing my best here ;) It could be worth talking it over in person with a statistician if you can get your hands on one - that way you can have the dataset in front of you, making it easier to answer these kinds of questions.
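
    A minimal sketch of the two-track approach described above, assuming a long-format export with hypothetical column names (test, group, performance, questionnaire); subsetting to the second test is just one illustrative choice. It runs a parametric one-way comparison on the ratio-scaled performance measure and a Kruskal-Wallis test on the ordinal questionnaire score.

```python
# Sketch of analysing the two outcome families separately.
# File and column names are assumptions, not from the thread.
import pandas as pd
from scipy import stats

df = pd.read_csv("results_long.csv")     # hypothetical long-format export
test2 = df[df["test"] == 2]              # the test taken in the manipulated environment

# Parametric track: ratio-scaled performance measure compared across the three groups.
perf = [g["performance"].to_numpy() for _, g in test2.groupby("group")]
print("performance:", stats.f_oneway(*perf))

# Non-parametric track: ordinal questionnaire score compared across the three groups.
quest = [g["questionnaire"].to_numpy() for _, g in test2.groupby("group")]
print("questionnaire:", stats.kruskal(*quest))
```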


  • Registered Users Posts: 1,268 ✭✭✭deegs


    Therefore you could do two separate analyses - one for the parametric data and one for the non-parametric data - and thus avoid your parametric vs non-parametric conundrum.
    Yep, at the most basic level that's exactly what I did; in fact I did that just this morning :)
    I would like to see if there is a correlation, so I would have preferred to analyse them together, but I'm not sure that is possible. :)

    Thanks!

    Yes, I think I need help :D
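
    On the correlation question above: a rank-based correlation such as Spearman's rho uses only the ordering of the values, so it can relate the ratio performance scores to the ordinal questionnaire ratings without treating the latter as interval. A minimal sketch, with the same hypothetical file and column names as before:

```python
# Sketch of relating performance to the subjective ratings with a rank-based
# correlation. File and column names are assumptions, not from the thread.
import pandas as pd
from scipy import stats

df = pd.read_csv("results_long.csv")     # hypothetical long-format export
rho, p = stats.spearmanr(df["performance"], df["questionnaire"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

# Note: with three tests per participant the rows are not independent,
# so this is only a first look; a per-test or per-participant version is safer.
```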

