Help me solve test program
18-06-2012 2:24pm

Hi all,
I have been given a task to do for a job. It requires me to add code to an existing test program. I have read the instructions and amended the program as far as I can, but am now stuck. I was wondering if any of you could give me some pointers.
I have pasted the exercise below. The sample program code is as follows. Everything was already provided EXCEPT for AccessControlCoordinatorTest, which I wrote. Apart from my unit test, can anyone suggest what the test is actually asking?
Any suggestions welcome.
Walrus
AccessControlDecisionMaker

```java
public interface AccessControlDecisionMaker {
    public boolean performAccessCheck(String book);
}
```
AccessControlCoordinator

```java
public class AccessControlCoordinator {
    private AccessControlDecisionMaker accessControlDecisionMaker1;
    private AccessControlDecisionMaker accessControlDecisionMaker2;

    public AccessControlCoordinator(final AccessControlDecisionMaker accessControlDecisionMaker1,
            final AccessControlDecisionMaker accessControlDecisionMaker2) {
        this.accessControlDecisionMaker1 = accessControlDecisionMaker1;
        this.accessControlDecisionMaker2 = accessControlDecisionMaker2;
    }

    public synchronized boolean performAccessCheckForBook(final String book) {
        if (this.accessControlDecisionMaker1.performAccessCheck("book")) {
            return true;
        }
        if (this.accessControlDecisionMaker2.performAccessCheck(book)
                && !this.accessControlDecisionMaker1.performAccessCheck("book")) {
            return false;
        }
        if (!this.accessControlDecisionMaker1.performAccessCheck(book)
                && !this.accessControlDecisionMaker2.performAccessCheck(book)) {
            return true;
        }
        return true;
    }
}
```
AccessControlCoordinatorTest

```java
public class AccessControlCoordinatorTest extends TestCase {
    private AccessControlDecisionMaker decisionMaker1;
    private AccessControlDecisionMaker decisionMaker2;
    AccessControlCoordinator accessControlCoordinator;

    @Before
    public void setUp() {
        decisionMaker1 = EasyMock.createMock(AccessControlDecisionMaker.class);
        decisionMaker2 = EasyMock.createMock(AccessControlDecisionMaker.class);
    }

    @Test
    public void testPerformAccessCheckForBook() throws Exception {
        boolean accessAllowed = false;
        String book = "book";
        EasyMock.expect(decisionMaker1.performAccessCheck(book)).andReturn(false);
        EasyMock.expect(decisionMaker2.performAccessCheck(book)).andReturn(false);
        EasyMock.replay(decisionMaker1);
        EasyMock.replay(decisionMaker2);
        accessControlCoordinator = new AccessControlCoordinator(decisionMaker1, decisionMaker2);
        accessAllowed = accessControlCoordinator.performAccessCheckForBook(book);
        assertFalse(accessAllowed);
    }
}
```
Comments
Here is the test question:
This exercise should take no longer than one hour.
Feel free to change the supplied code in any way you choose; you may introduce third-party libraries if you see fit.
You should produce easy-to-read, functionally correct, tested Java code.
It is important that you provide unit tests for the access control coordinator, but the implementation
of the access control decision makers is not really important for the purposes of this test.
The access control decision makers are threadsafe, and this will run in a multi-threaded environment.
A book is simply represented as a String.
Story
To produce a program that coordinates access control to a book in a library.
The program will grant access to a book if at least one of the two supplied access control decision makers returns a true value.
Acceptance Criteria
If the first access control provider grants access to the book then access is granted.
If the second access control provider grants access to the book then access is granted.
If both access control providers grant access then access to the book is granted.
If neither access control provider grants access then access is not granted.
If a null value is supplied to the access control coordinator then a checked exception should be thrown.
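Read together, the acceptance criteria boil down to "throw a checked exception on null, otherwise grant if either decision maker grants". As a rough sketch of what a compliant coordinator might look like (the exception name NullBookException is my own invention; the exercise doesn't name one):

```java
// Hypothetical checked exception; the exercise doesn't specify a name.
class NullBookException extends Exception {
    NullBookException(String message) { super(message); }
}

interface AccessControlDecisionMaker {
    boolean performAccessCheck(String book);
}

class AccessControlCoordinator {
    private final AccessControlDecisionMaker first;
    private final AccessControlDecisionMaker second;

    AccessControlCoordinator(AccessControlDecisionMaker first,
                             AccessControlDecisionMaker second) {
        this.first = first;
        this.second = second;
    }

    // Grants access iff at least one decision maker grants it;
    // a null book is rejected up front with the checked exception.
    synchronized boolean performAccessCheckForBook(String book)
            throws NullBookException {
        if (book == null) {
            throw new NullBookException("book must not be null");
        }
        return first.performAccessCheck(book) || second.performAccessCheck(book);
    }
}
```

With the method reduced to a single || expression, each acceptance criterion maps onto one obvious test case.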
How did you come to author AccessControlCoordinatorTest if you don't know what it's doing?
I'm not trying to be nasty, BTW.
Is this for an interview, or an exam, or something?
performAccessCheckForBook looks pretty broken, to me.
Why is it checking against the hard-coded string "book", instead of the 'book' string that is passed in? That looks really dodgy.
The logic of the second if statement in performAccessCheckForBook doesn't seem to make sense. It seems to violate "If the second access control provider grants access to the book then access is granted."
That code just looks really badly written - is this a real software issue, or some sort of test to see whether you can identify badly written code?
The logic of the second if statement in performAccessCheckForBook doesn't seem to make sense.
The bit I find unclear in the requirements is:
"A book is simply represented as a String.
Story
To produce a program that coordinates access control to a book in a library."
That being the case, the circumstances to test would be that "the book" to check access on is literally "book" ...
AND
effectively that either decisionmaker1 (dm1) or dm2 is true. Though they do specify the four possible combinations of dm1 & dm2, so I would test these 4.
And finally that the null book is captured via an exception. The fact that the code does not produce such an exception just points to its failure, but it is a requirement so we should test for it.
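Those five cases (the four dm1/dm2 combinations plus the null book) don't strictly need EasyMock; two fixed stubs cover them. A sketch along those lines (class and stub names are mine, and the coordinator here is a minimal reimplementation of the story's rule purely so the snippet compiles on its own):

```java
interface AccessControlDecisionMaker {
    boolean performAccessCheck(String book);
}

// Minimal coordinator implementing the story's rule, included only
// so this sketch is self-contained.
class AccessControlCoordinator {
    private final AccessControlDecisionMaker dm1;
    private final AccessControlDecisionMaker dm2;

    AccessControlCoordinator(AccessControlDecisionMaker dm1, AccessControlDecisionMaker dm2) {
        this.dm1 = dm1;
        this.dm2 = dm2;
    }

    boolean performAccessCheckForBook(String book) {
        return dm1.performAccessCheck(book) || dm2.performAccessCheck(book);
    }
}

class FourCombinationCheck {
    public static void main(String[] args) {
        // Fixed stubs in place of EasyMock mocks.
        AccessControlDecisionMaker allow = book -> true;
        AccessControlDecisionMaker deny = book -> false;

        boolean[][] combos = { {true, true}, {true, false}, {false, true}, {false, false} };
        for (boolean[] c : combos) {
            AccessControlCoordinator coordinator = new AccessControlCoordinator(
                    c[0] ? allow : deny, c[1] ? allow : deny);
            boolean granted = coordinator.performAccessCheckForBook("book");
            // Spec: access is granted iff at least one decision maker grants it.
            if (granted != (c[0] || c[1])) {
                throw new AssertionError("combination " + c[0] + "/" + c[1] + " failed");
            }
        }
        System.out.println("all four dm1/dm2 combinations behave as specified");
    }
}
```

The null-book case would be a fifth test once the coordinator actually throws the checked exception the criteria ask for; the supplied code doesn't, so that test would (correctly) fail against it.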
The original code supplied as part of the question does look like a mess, but I would think that this is a diversion. The test should be written first anyway, so it should not really be concerned with, or affected by, the implementation - only that a specific result is returned under specific circumstances.
Really?
Disclosure: I'm becoming wary of 'test-driven development'. It seems to me that people are taking it way too far; it's become the new dogma.
Would you really advocate that, on coming across code like this, the first thing to do would be to start writing tests?
The complexity of the testing code is going to be almost equal to, or even greater than, the complexity of the actual code that it is testing. How does starting with tests help, in this situation?
Clearly, to me, the first thing to do, on coming across code like that, is try and figure out what the hell is going on: why is this code in a production system? How did it get here? Does it currently work, even though it diverges from spec? Is the spec correct? Who wrote this code, and what were they thinking?
If, after all that, it is determined that the spec is actually describing how the code should be, then, after looking carefully to see what effects the changes will make, the code should then be cleaned up and brought into line with the spec.
I would probably not write tests, at all, for that particular function. I would spend my time reviewing it, to make sure it's correct, and documenting it. If the function later increases in complexity, then I'd write tests for it.
We don't know if this is a made-up question, like for an interview, or a real situation. But if it's a real situation, that's what I'd do.
If it was an interview, I'd be using this to find out what the interviewers actually would do in a real situation, and what they were trying to get at, to let me learn something about the people I might be working with.

Quoting the earlier post:
"The bit I find unclear in the requirements is 'A book is simply represented as a String.' So is this literally for one single instance of a book? The fact that the book cannot be associated with an AccessControlCoordinator makes me believe that it must be so, and that would explain the use of the "book" literal in the provided code.
That being the case, the circumstances to test would be that 'the book' to check access on is literally "book" ...
AND
effectively that either decisionmaker1 (dm1) or dm2 is true. Though they do specify the four possible combinations of dm1 & dm2, so I would test these 4.
And finally that the null book is captured via an exception. The fact that the code does not produce such an exception just points to its failure, but it is a requirement so we should test for it."
This doesn't make any sense to me.
It's very hard to understand a method that accepts a reference to a string, called 'book', as a parameter, and then makes queries based on a hard-coded literal "book". What's the point of the reference being passed in, otherwise?
When I see that, I almost certainly think 'bug' or 'wtf is going on here?'. Here be dragons, etc.
The AccessControlDecisionMaker object names are horrendous in terms of readability. firstAccessDecider and secondAccessDecider, or something equivalent, would be more readable, since the differentiating part of the name comes first.
The code is badly written and wouldn't work as per the use case given, as others have already mentioned. A single if-else statement is all that is needed: if either access decider grants access then return true, otherwise return false.
Null-checking the String argument passed should be done at the start of the method.
The current mess of ifs makes little sense, especially due to the string literal "book", and the default value returned is true. Defaulting to deny access is always the safer option.
Testing in this case should check each of the acceptance criteria, which is easily done through mocking, which you are doing. Testing this in a production environment would be very important, since it is an access controller, which you would expect to be a relatively important/sensitive class.
Is this an interview question?
There is so much wrong with it - Actually, I'd go so far as to say it's offensive - that I wouldn't know where to start, but

```java
if (x) return true;
if (y && !x) return true;
if (!y && !x) return false;
return true;
```

is about as bad as it gets. (hint: return x || y;)
moycullen14 wrote: »
"Is this an interview question?
There is so much wrong with it - Actually, I'd go so far as to say it's offensive - that I wouldn't know where to start, but

```java
if (x) return true;
if (y && !x) return true;
if (!y && !x) return false;
return true;
```

is about as bad as it gets."
That's not what is currently written there.
I think that, as it stands, it says:

```java
if (x) return true;
else if (y) return false;
else return true;
```

moycullen14 wrote: »
"(hint: return x || y;)"
That is, of course, what should be there.
But there's something funny going on, if the code has diverged so far from the spec. If it's a test, then it's a weird test, and an explanation is in order. If it's production code, then treat with caution; something strange has happened.
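Writing out the truth table makes the divergence concrete. This sketch reduces the supplied if-chain to two booleans (setting the hard-coded "book" quirk aside, so x stands for decision maker 1's answer and y for decision maker 2's) and compares it against the spec's x || y:

```java
class TruthTable {
    // The supplied method's branching, with x = decision maker 1's
    // answer and y = decision maker 2's answer.
    static boolean supplied(boolean x, boolean y) {
        if (x) {
            return true;
        }
        if (y && !x) {
            return false;
        }
        if (!x && !y) {
            return true;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println("x     y     supplied  spec(x||y)");
        for (boolean x : new boolean[] { true, false }) {
            for (boolean y : new boolean[] { true, false }) {
                System.out.printf("%-5b %-5b %-9b %b%n", x, y, supplied(x, y), x || y);
            }
        }
    }
}
```

The table disagrees with the spec in both x-false rows: grant-by-second-only comes back false, and deny-by-both comes back true, which is exactly the "if x true; else if y false; else true" reading above.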
Quoting the earlier post:
"Really? Would you really advocate that, on coming across code like this, the first thing to do would be to start writing tests? The complexity of the testing code is going to be almost equal to, or even greater than, the complexity of the actual code that it is testing. How does starting with tests help, in this situation? Clearly, to me, the first thing to do, on coming across code like that, is try and figure out what the hell is going on: why is this code in a production system? How did it get here? Does it currently work, even though it diverges from spec? Is the spec correct? Who wrote this code, and what were they thinking? If, after all that, it is determined that the spec is actually describing how the code should be, then, after looking carefully to see what effects the changes will make, the code should then be cleaned up and brought into line with the spec.
I would probably not write tests, at all, for that particular function. I would spend my time reviewing it, to make sure it's correct, and documenting it. If the function later increases in complexity, then I'd write tests for it.
We don't know if this is a made-up question, like for an interview, or a real situation. But if it's a real situation, that's what I'd do."

IamtheWalrus did say: "Here is the test question:"

Quoting further:
"If it was an interview, I'd be using this to find out what the interviewers actually would do in a real situation, and what they were trying to get at, to let me learn something about the people I might be working with.
This doesn't make any sense to me.
It's very hard to understand a method that accepts a reference to a string, called 'book', as a parameter, and then makes queries based on a hard-coded literal "book". What's the point of the reference being passed in, otherwise?
When I see that, I almost certainly think 'bug' or 'wtf is going on here?'. Here be dragons, etc."
I don't think people are understanding; from my reading, this is not about the coding style (or lack of it), it's a question about unit testing. In that case we don't care how it's coded, just whether we can show that all the requirements set out are fulfilled or not. Which they clearly are not, since one requirement is for a checked exception to be thrown when a null book is passed, and there are clearly no exceptions thrown.
Quoting the earlier post:
"I don't think people are understanding; from my reading, this is not about the coding style (or lack of it), it's a question about unit testing. In that case we don't care how it's coded, just whether we can show that all the requirements set out are fulfilled or not. Which they clearly are not, since one requirement is for a checked exception to be thrown when a null book is passed, and there are clearly no exceptions thrown."
Problems and challenges like this should be approached holistically.
Challenge every assumption, ask all the questions, get to the heart of the problem. I'd always try for the most correct fix to the most root cause, until it either becomes clear, or is explained why only the localised or symptomatic fix is appropriate.
If someone tells me to write tests for a piece of code like that, in pretty much any scenario, I'm going to comment on the inherent problems in the code, and try and get them fixed, as a higher priority.
If it's made crystal clear that isn't a desirable approach, then fine, I'd focus on doing the given task, narrowly defined; but not until that time.
That's how I'd approach that problem, with the information as provided in this post, even in a setting as abstract as an interview.
Asking for clarification a lot, and trying to understand the 'why' of a project, is almost always a necessary part of working on a project, and I'd be surprised if someone would hold these questions against me.
But it's not a project, it is a test question. If you don't answer it you don't pass the test.
But it's not a project, it is a test question.
To my mind, it's not completely clear what this is.
The OP says "Help me solve test problem", but also says "I have been given a task to do for a job."
A 'test problem' might be a problem set as a test, but it might also be a problem to do with testing.
The context of the problem is ambiguous; I've asked for clarification, and perhaps the OP will come back about it.
But, because there's another point I'd like to make, let's just assume it's an interview question, or a question coming up as part of an interview process, or some such.

Quoting the earlier post:
"If you don't answer it you don't pass the test."
There is sort of a philosophical issue here.
Let's say this came up, in a written exam I was sitting, exactly as presented here. This is probably the most constrained situation I can think of (the most constrained realistic situation; we're not talking Swordfish, here).
Before writing the 'tests' required, in an exam, I would write a few sentences explaining my issues with the code, saying I'd need more information on the root cause of these issues, and saying that really it's the code that needs fixing first, etc - the things I mentioned in this thread.
It's a philosophical thing, partly about me as a person, but also strongly about me as a developer. If someone gives me a task to do, I'm going to think about whether the task makes sense, whether it could be solved a better way, whether the question makes sense, and whether there's anything the person asking the question might have got wrong.
I think this is a very positive attitude. I would be surprised if someone, interviewing a developer, was unhappy with an attitude of constructively thinking about the root cause of the problem. (Note, this is not the same attitude as you get in developers who obstinately refuse to solve a problem because they think the solution is suboptimal.)
Anyway, in a nutshell, in any sort of a job interview, if someone was to penalise me and take a hardline "if you don't answer it, you fail the test", having just provided the information provided in this thread, and refusing to give additional clarification, then it's they that have failed; and that's good to know, without losing extra time.
Quoting the earlier post:
"The OP says 'Help me solve test problem', but also says 'I have been given a task to do for a job.' ... Before writing the 'tests' required, in an exam, I would write a few sentences explaining my issues with the code ... It's a philosophical thing, partly about me as a person, but also strongly about me as a developer ... I would be surprised if someone, interviewing a developer, in any sort of a job interview, was to penalise me ..."
If this was really a real-world task for someone then this is not the case ... but I cannot believe this is real. But you are correct in that neither of us really knows the answer to that question.
Indeed, and I see developer and tester as different roles. Admittedly in small companies they are most often shared, but on larger projects not necessarily so.
But frankly, looking at the OP's question, if that came up in an exam, I'd say the problem is the exam. If it came up in an interview and was representative of what the prospective job would involve doing, I'd just thank the interviewer for their time, turn down the job and walk out (which sounds daft in a recession, but to be honest, that's sometimes just the best option to take - your name is tied to the work you do, so taking on a job you think can't be done is just going to cause problems when seeking the next role for you, as well as making this role a pita while you're in it. And yes, I've had to do that a few times over the last few years - I suspect we all have).
On larger projects, almost definitely not shared. Hell, we've got five testing teams in here looking at the output from one development team (they look for different things; functionality, performance, etc, etc).
Quoting the earlier post:
"But frankly, looking at the OP's question, if that came up in an exam, I'd say the problem is the exam. If it came up in an interview and was representative of what the prospective job would involve doing, I'd just thank the interviewer for their time, turn down the job and walk out (which sounds daft in a recession, but to be honest, that's sometimes just the best option to take - your name is tied to the work you do, so taking on a job you think can't be done is just going to cause problems when seeking the next role for you, as well as making this role a pita while you're in it. And yes, I've had to do that a few times over the last few years - I suspect we all have)."
If it's an interview question, I would wholeheartedly agree. As an aside, I hate these 'suggest some improvements to this section of code' questions in an interview - especially when there is no interaction.
Code - even crap like the above - doesn't exist in isolation. No code should be changed except for very good, specific reasons or goals. This trend where refactoring is seen as an absolute 'good thing' - irrespective of the environment the code works in - is very dangerous.
There should ALWAYS be a reason to rewrite/criticise code. Maybe there are bugs, it needs extension or reuse, it's unmaintainable, performance is poor, etc - all valid reasons. Without knowing that context, it's impossible to say what should be changed. Sorry for the thread swerve.
Talk of refactoring reminds me that the first time I came across the concept of writing unit tested code was in Martin Fowler's (et al) "Refactoring". Was that really over 10 years ago!? I had been doing it (i.e. writing tests before code) in a rudimentary manner for years before, but that was the first I can recall seeing it formalised, and I never did it for existing code (prior to refactoring/bug fixing, as Fowler proposes). I don't know about others but I find writing tests for existing code to be much more complex.
Quoting the earlier post:
"Disclosure: I'm becoming wary of 'test-driven development'. It seems to me that people are taking it way too far; it's become the new dogma.
Would you really advocate that, on coming across code like this, the first thing to do would be to start writing tests?
The complexity of the testing code is going to be almost equal to, or even greater than, the complexity of the actual code that it is testing. How does starting with tests help, in this situation?
Clearly, to me, the first thing to do, on coming across code like that, is try and figure out what the hell is going on: why is this code in a production system? How did it get here? Does it currently work, even though it diverges from spec? Is the spec correct? Who wrote this code, and what were they thinking?
If, after all that, it is determined that the spec is actually describing how the code should be, then, after looking carefully to see what effects the changes will make, the code should then be cleaned up and brought into line with the spec.
I would probably not write tests, at all, for that particular function. I would spend my time reviewing it, to make sure it's correct, and documenting it. If the function later increases in complexity, then I'd write tests for it."
Definitely I'd agree that you need to clarify if the spec is correct if the code seems heavily different from it. But regarding writing tests first as being too dogmatic to TDD, I don't think that it is.
Once you've clarified that the acceptance criteria document exactly how the method should behave, you write one test for each & when they all pass, you can be happy that the method is functioning correctly. One principle of TDD is that you should only write the minimum amount of code needed to make a failing test pass.
Starting from the spec rather than the code means you're less likely to have a confirmation bias to your tests too. Additionally, it means that once you've them all passing & you move on to further refactoring (if you want to bring the code in line with standards, best practice), you know instantly if you break anything by doing this.
You're saying you'd document it, but the idea of TDD is that the test itself is documentation. If someone refactors the method later on, they have to refactor the test also, or it won't pass. People often ignore docs & they become stale.
Quoting the earlier post:
"Definitely I'd agree that you need to clarify if the spec is correct if the code seems heavily different from it. But regarding writing tests first as being too dogmatic to TDD, I don't think that it is."

(And yes, I've read your namesake's tome on the subject, and it's a lovely approach... just not one you can use in every situation. TINSB and all that).
TDD by its nature is not supposed to be tacked onto a codebase, especially not one in maintenance mode (or in wide-scale use for a long time) that runs to hundreds of thousands or millions of lines of code.
(You could tack it on, I suppose, but as you said it would be monstrous and hard/impossible to justify.)
You would start a new project using TDD, but as you said it is one option of many, and is often (like OOP etc) put forward by some people as the savior of every single potential project, no matter what.
Quoting the earlier post:
"Definitely I'd agree that you need to clarify if the spec is correct if the code seems heavily different from it. But regarding writing tests first as being too dogmatic to TDD, I don't think that it is.
Once you've clarified that the acceptance criteria document exactly how the method should behave, you write one test for each & when they all pass, you can be happy that the method is functioning correctly. One principle of TDD is that you should only write the minimum amount of code needed to make a failing test pass.
Starting from the spec rather than the code means you're less likely to have a confirmation bias to your tests too. Additionally, it means that once you've them all passing & you move on to further refactoring (if you want to bring the code in line with standards, best practice), you know instantly if you break anything by doing this.
You're saying you'd document it, but the idea of TDD is that the test itself is documentation. If someone refactors the method later on, they have to refactor the test also, or it won't pass. People often ignore docs & they become stale."
As Sparks clarified, my issue here would be whether applying TDD is appropriate in a situation like this, or whether it's adding tests for the sake of it.
The first issue is whether it makes sense to start trying to apply it retroactively to an existing codebase. Sparks covered this well.
There is a second, separate issue here, though, which I'd like to just mention again. Hopefully the issues won't get muddled together.
I actually don't think I would write unit tests for that method, with that spec.
Tests have a cost. They take time to write, and, more importantly, they take effort to keep in sync with a codebase. When we need to change the functionality, we then need to change the tests. This increases the time it takes to refactor code. That can increase inertia. Tests are not free.
There is, of course, a large upside to using tests. The upside is that they can actually decrease inertia, if they enable changes and refactors to be made with more confidence. This can happen when a complex codebase can confidently be verified to still work after a code change, or a refactor, because its behaviour is verified by simpler tests.
But, you only get this upside, when you have a test that is more self evidently correct than the codebase is. If it is just as easy to verify that the codebase is correct, as that the test is correct, then the advantage of tests is minimal.
This method should read something like:

```java
public synchronized boolean isBookAccessAllowed(final String book) {
    if (book == null) {
        // throw the checked exception
    }
    if (this.accessControlDecisionMaker1.isBookAccessAllowed(book)
            || this.accessControlDecisionMaker2.isBookAccessAllowed(book)) {
        return true; // access granted
    }
    return false;
}
```

(excuse my rusty Java style).
Code like that is simple to the point where it adds almost nothing to write the unit tests for it.
If the code was more complex, like if there were layers of caching, or complex error handling, then I would add tests.

Quoting the earlier post:
"You're saying you'd document it, but the idea of TDD is that the test itself is documentation. If someone refactors the method later on, they have to refactor the test also, or it won't pass. People often ignore docs & they become stale."
That's a bad solution to a bad problem. There should be the level of docs that people are willing to maintain. Making the coder change the specification in the code, and then immediately in the tests, buys you almost nothing in a simple situation like this.
It's the wrong solution to the issue of docs going stale.
Quoting the earlier post:
"And we're not talking a few days or weeks of effort here; we're talking man-decades of effort to do it by the book, for something which produces no tangible, saleable output this quarter (go on, tell me about the long-term benefits... and then tell me how you convince sales, marketing and accounting to ignore quarterly figures for a year or two in pursuit of those benefits. I could use the giggle)."
Sure, I'm not saying that I'd tack it on to a project that's in production & is already a few million lines of code! Just that in the example given, we're talking about a fairly accessible method that could be tested without too much recourse to mocking out the entire system.

Quoting the earlier post:
"If it is just as easy to verify that the codebase is correct, as that the test is correct, then the advantage of tests is minimal."
I would say that the real advantage of having even simple tests is that it automates this checking for you. If it's a simple method, the tests are quick to write, but they provide the safety net against regression issues later on.
Error checking at compile time arguably only catches errors that are easy to spot, but just because they are silly/simple errors doesn't mean they wouldn't have broken your programme.

Quoting the earlier post:
"Tests have a cost. They take time to write, and, more importantly, they take effort to keep in sync with a codebase. When we need to change the functionality, we then need to change the tests. This increases the time it takes to refactor code. That can increase inertia. Tests are not free."
Tests are more expensive than documentation to write initially, but I'd argue that they're not more expensive to update. & when they do fall out of sync with the codebase, you're guaranteed they're updated before the developer leaves the chunk of work. That's not the case with documentation.
I'd say it's a realistic solution to a bad problem & I'm not advocating no documentation at all, but a small amount of core docs & a big test suite. People get sloppy when they are in a rush, lack concentration, whatever.
In the same way that I can roll back an SVN commit if the code doesn't meet house style/fails compilation, etc., automated suites can give that level of monitoring too.
Interesting thread.
Wondering what was the correct and final answer for this !
Cheers,
Jon