
Next steps for software team?

  • 15-01-2012 11:07pm
    #1
    Closed Accounts Posts: 638 ✭✭✭theTinker


    Hey guys

    I'm trying to improve our software development area in IT.
    It's currently not bad, but I'm running out of improvements to make based on my previous experience. We have continuous build integration, automated tests, and automated reports being sent out with code coverage and style checks.

    As you can see, it's far from a bad setup. However, that's defeatist thinking and will lead to stagnation.

    Has anyone any suggestions, or can anyone point me to resources or communities where I could see what other great IT departments are doing, best practices, and maybe some new and innovative ideas coming down the software development pipeline?

    Thanks


Comments

  • Closed Accounts Posts: 8,015 ✭✭✭CreepingDeath


    How about code reviews and performance profiling?
    Better documentation, wiki pages for new developers to get up-to-speed.
    Simplify installation with an automated installer.
    When checking items into your source code repository, prefix the commit comment with the bug tracking ID or enhancement ID to track code against changes, e.g.

    [BUG-0123] Creation of new user fails under DB2 database
    [CR-1022] Addition of new anti-gravity simulator

    Some source code repositories will let you add a check on commit comments, so you can force developers to write their comments in the form "[id] comment".
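    For example, a pre-commit hook could hand the log message to a small check along these lines (an illustrative sketch, not from the original post; the regex just encodes the "[BUG-nnnn]"/"[CR-nnnn]" convention above):

    import java.util.regex.Pattern;

    // Minimal sketch of a commit-message check that a pre-commit hook could call.
    // The hook passes the log message as the first argument and rejects the
    // commit when this exits with a non-zero status.
    public class CommitMessageCheck {

        // Accepts messages such as "[BUG-0123] Creation of new user fails under DB2"
        private static final Pattern FORMAT =
                Pattern.compile("^\\[(BUG|CR)-\\d+\\]\\s+.+");

        public static void main(String[] args) {
            String message = args.length > 0 ? args[0] : "";
            if (!FORMAT.matcher(message).lookingAt()) {
                System.err.println("Rejected: comment must be in the form \"[BUG-0123] description\"");
                System.exit(1);
            }
        }
    }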


  • Registered Users Posts: 7,157 ✭✭✭srsly78


    Yeah stuff like Confluence and JIRA have svn plugins that can link to the actual revision in question.

    And definitely make sure svn forces people to comment :) Nothing worse than people committing without leaving a note.


  • Moderators, Society & Culture Moderators Posts: 9,689 Mod ✭✭✭✭stevenmu


    R&D could be worth looking at. It's very easy to fall into a pattern of solving all new problems with the same old solutions. It's worth being familiar and up to date with new technologies, products and techniques that could make projects far more successful. Of course, the other side of that coin is that there's no point using something new just for the sake of it, but often using something new can bring very real benefits.


  • Closed Accounts Posts: 638 ✭✭✭theTinker


    Thanks for the replies and ideas.

    "When checking items into your source code repository, prefix the commit comment with the bug tracking ID or enhancement ID to track code against changes, e.g."
    Definitely a good idea; we don't really do this at the moment. It was done before, but now that I think of it, I believe it's disappeared into the abyss of forgotten practice.

    We already have JIRA, which is overused in my place for EVERYTHING, but we do what we must to appease the auditors and managers.

    Code reviews are starting too. They are done informally from time to time, but starting from tomorrow, just by coincidence, they are being made formal: before merging features to the main trunk.

    Wikis abound and are well kept. They are a bit verbose but quite handy for new developers.

    I'm looking at some new technologies such as Jenkins/Hudson to replace CruiseControl, as I've found a large amount of time is spent fixing environment issues and configurations, often down to difficult or cumbersome CruiseControl XML config files and bugs in its design.

    I'd love some input on the following issue.
    After fixing up tests due to data issues (trying our best to make them less brittle), I HATE having to merge to 15 different branches to update the tests. Does anyone know of any tool support that could update unit tests in feature branches more easily than 5 separate merges?

    I think after an upgrade to Jenkins there are plugins to publicly blame whoever breaks a build. I'm starting to notice some new contractors etc. checking in changes without associated tests.

    DAO testing is a huge slowdown on our build, but I'm not sure how to test the DAO layer without making connections and queries... Perhaps just running these once a day in the morning is the best choice. I'd love to speed them up though...


  • Closed Accounts Posts: 638 ✭✭✭theTinker


    stevenmu wrote: »
    R&D could be worth looking at. It's very easy to fall into a pattern of solving all new problems with the same old solutions. It's worth being familiar and up to date with new technologies, products and techniques that could make projects far more successful. Of course, the other side of that coin is that there's no point using something new just for the sake of it, but often using something new can bring very real benefits.

    The new technologies I'm looking at are Jenkins and Maven. It's something I really need to think about, as Hudson (the side Oracle kept when the community forked off to Jenkins) seems to be more Maven-oriented in their plans for an enterprisey, portable solution. However, the dev community stayed with Jenkins and that's where all the plugins are coming from!
    I'm clueless on Maven, so I need to learn that too before drawing any conclusions.

    We've got automated UAT testing happening; however, I wish I could see a way to take it to the next level... It's fine at the moment, but lacks that wow factor. Tends to be brittle as well.

    I'm leaning towards an upgrade in technologies, then using the new tool support to help make sticking to standards easier.

    Test coverage is less than 50% at the moment, so I think a big push on that would be good.

    I'd love some way to get more synergy between other teams' systems; frankly, we are pretty black-box to each other at the moment. Any suggestions towards this goal?

    Much appreciate the help, guys. Hitting one's own limits of experience and knowledge is an interesting wall!


  • Closed Accounts Posts: 638 ✭✭✭theTinker


    How about code reviews and performance profiling?

    We don't do this at all, other than running some manual tests and saying "that's alright, I guess".

    Thanks for the good idea.

    We've multiple enterprisey systems supporting a few different technologies and making call-outs to many engines of various types.
    I do find our web apps run a bit slow; they have a bit of an aged feel about them, if you get me.

    Have you any suggestions regarding technologies for profiling web apps and web service call-outs? This could be interesting alright. I suspect I'd know where our biggest slowdowns are, but it'd be much easier to make a business case for improving them with some performance metrics and history to back me up.


  • Registered Users Posts: 2,494 ✭✭✭kayos


    theTinker wrote: »
    I'd love some input on the following issue.
    After fixing up tests due to data issues (trying our best to make them less brittle), I HATE having to merge to 15 different branches to update the tests. Does anyone know of any tool support that could update unit tests in feature branches more easily than 5 separate merges?

    First off, your unit tests should have no such thing as data issues! If you're hitting external components such as files/databases and objects other than the unit under test, then it's an integration test, and to reduce data issues these tests should set up and tear down their own data. If you want to write pure unit tests, look into some sort of mocking framework, e.g. Rhino Mocks would be one for .NET. Of course this comes with some issues for legacy code where DI/IoC is not in place and mocking can become a pain.
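    As a rough illustration (an illustrative sketch, not kayos's own code, assuming a Java stack like the OP's with JUnit 4 and Mockito on the classpath; UserDao and UserService are invented names), mocking the DAO keeps the test in memory and free of data issues:

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    // Hypothetical types, purely for illustration.
    interface UserDao {
        String findNameById(long id);
    }

    class UserService {
        private final UserDao dao;
        UserService(UserDao dao) { this.dao = dao; }
        String greet(long id) { return "Hello, " + dao.findNameById(id); }
    }

    public class UserServiceTest {

        @Test
        public void greetsUserWithoutTouchingTheDatabase() {
            // The DAO is mocked, so no connection or query is ever made.
            UserDao dao = mock(UserDao.class);
            when(dao.findNameById(42L)).thenReturn("Alice");

            UserService service = new UserService(dao);

            assertEquals("Hello, Alice", service.greet(42L));
        }
    }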

    Why so many Branches?
    If I was working across 5 branches of the same code base then, Houston, we have a problem. The only time that many branches would be acceptable in my eyes is where the software is a product and the different versions are branched. In that case, a test being broken in the Product Version 5 branch does not mean it's broken in the Product Version 3 branch. If you have 5 branches of the same code and version then I'd have to ask why. Yes, there are always exceptions to the rule, but it seems like a mess to me. So why do you run multiple branches?

    If you have 5 devs working across 5 branches in isolation then you're going to run into problems when it comes to merging them all back together. It also removes the benefit of CI, because if I'm on a private branch I am only ever integrating into my branch. CI is for the main branch, to ensure my changes integrate and play nice with everyone else's work.
    theTinker wrote: »
    DAO testing is a huge slowdown on our build, but I'm not sure how to test the DAO layer without making connections and queries... Perhaps just running these once a day in the morning is the best choice. I'd love to speed them up though...

    CI builds should be frequent and quick. If your data tests are slowing things down it would be acceptable to run these only nightly (run them at night when the systems are quiet). Again, depending on the tests you want to run, mocking could be of use.
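    One common way to split the slow ones out (an illustrative sketch, not from the original post, assuming JUnit 4.8+; the per-commit CI job would exclude the category and the nightly job include it) is to tag them with a marker category:

    import org.junit.Test;
    import org.junit.experimental.categories.Category;

    // Marker interface used purely to tag slow, database-hitting tests.
    interface SlowDaoTests { }

    public class CustomerDaoIT {

        @Test
        @Category(SlowDaoTests.class)
        public void loadsCustomerFromRealDatabase() {
            // Real connection and queries go here. Because of the category,
            // the per-commit CI run can skip this test and only the nightly
            // build (configured to include SlowDaoTests) executes it.
        }
    }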

    Ok onto other things....

    TDD/BDD
    I should not need to explain why these are good! If you do not do it then read up and see why you should :).

    Gated Check-ins
    I've been wanting to mess with gated check-ins for a while, but sadly I'm not working with TFS any more and won't get to try it. If TFS is your source repository, do look into it. But seeing as you are using or looking at Hudson/Jenkins, I doubt you're running TFS (no need for Hudson/Jenkins when TFS has build management built in).

    Code Reviews
    For code reviews you could use something like Review Board. Good point: it can be set up to automatically send out review requests on a check-in. Bad point: it's a bit late to review when you have already checked in. You could do your reviews inline with your development work, i.e. code in pairs.

    Tooling
    Are you using the best possible tools for the platforms you use? To me, using anything but TFS, MSBuild, MSTest, FxCop etc. for .NET work is a major pain in the ass and means lost/wasted time. Or to put it another way: we could all write code in Notepad and then do a command-line compile, or we can use an IDE and let it do the hard work and remove the chance of user errors :).

    Testing
    Automate, automate and automate. If the system under test is highly UI-driven, following patterns such as MVC/MVVM can help you automate testing without the need for recorded UI tests. This means the tests are more robust and less likely to fail because a button moved 3 pixels to the left...
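    For instance (an illustrative sketch, not kayos's own code; LoginPresenter and LoginView are invented for the illustration), with an MVP-style split the presenter logic can be exercised against a fake view, with no UI automation at all:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Hypothetical MVP-style types, invented for this illustration.
    interface LoginView {
        void showError(String message);
    }

    class LoginPresenter {
        private final LoginView view;
        LoginPresenter(LoginView view) { this.view = view; }

        void onLoginClicked(String username, String password) {
            if (username.isEmpty() || password.isEmpty()) {
                view.showError("Username and password are required");
            }
            // ...otherwise delegate to an authentication service.
        }
    }

    public class LoginPresenterTest {

        @Test
        public void emptyCredentialsProduceAnError() {
            // A hand-rolled fake view records what the presenter asked it to
            // show, so the test never drives a real screen.
            final String[] shown = new String[1];
            LoginView fakeView = new LoginView() {
                public void showError(String message) { shown[0] = message; }
            };

            new LoginPresenter(fakeView).onLoginClicked("", "");

            assertEquals("Username and password are required", shown[0]);
        }
    }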

    Word of warning: do not for one second think that high code coverage means better software. In fact, the tests need to be reviewed more than the code, as it's easy to write tests that cover 100% of the code and test 0% of it! Your reports will tell you that everything has been tested and is working, but that's far from true.


  • Registered Users Posts: 11,262 ✭✭✭✭jester77


    theTinker wrote: »
    I'd love some input on the following issue.
    After fixing up tests due to data issues (trying our best to make them less brittle), I HATE having to merge to 15 different branches to update the tests. Does anyone know of any tool support that could update unit tests in feature branches more easily than 5 separate merges?

    Gitflow would help here


  • Moderators, Society & Culture Moderators Posts: 9,689 Mod ✭✭✭✭stevenmu


    theTinker wrote: »
    The new technologies I'm looking at are Jenkins and Maven. It's something I really need to think about, as Hudson (the side Oracle kept when the community forked off to Jenkins) seems to be more Maven-oriented in their plans for an enterprisey, portable solution. However, the dev community stayed with Jenkins and that's where all the plugins are coming from!
    I'm clueless on Maven, so I need to learn that too before drawing any conclusions.
    I've heard some good things about Maven, though I don't really know anything about it myself.

    Since you seem quite process/test focused: I read a good article by John Carmack on static code analysis recently that could be something worth looking at: http://altdevblogaday.com/2011/12/24/static-code-analysis/

    Although when I mentioned researching new technologies and techniques, it was as much from the point of view of what the developers are using to actually write code. There may be new platforms or frameworks worth being aware of, or new methodologies, practices or patterns within your existing platforms and frameworks that could help your development a lot.

    Another thing I'd suggest: rather than trying to apply every improvement you can possibly find, target changes more specifically at your unique setup. You will see greater returns by identifying your organisation's unique trouble spots and then introducing solutions for those, as opposed to a blanket introduction of every best practice you can find. You can potentially identify trouble spots with periodic process review meetings with developers, where you can find out where they're spending lots of time that they don't need to, what issues they keep bumping into, and get their ideas about what would make their jobs easier and more productive.


  • Closed Accounts Posts: 638 ✭✭✭theTinker


    Hey Kayos, thanks for the feedback.

    We've implemented mocking on 99% of our tests at the moment, so the DAO tests I speak of are actually 20 tests out of 3,500. However, that 1% takes 65% of our testing time each build. I think I may set them to run just once a day, but I'd love to avoid that as it's a technique prone to grow and infect.

    Our code coverage needs to improve drastically too, but it's a work in progress. Difficult to write thousands of tests for an already existing system, lol.
    kayos wrote: »
    First off, your unit tests should have no such thing as data issues! If you're hitting external components such as files/databases and objects other than the unit under test, then it's an integration test, and to reduce data issues these tests should set up and tear down their own data. If you want to write pure unit tests, look into some sort of mocking framework, e.g. Rhino Mocks would be one for .NET. Of course this comes with some issues for legacy code where DI/IoC is not in place and mocking can become a pain.

    We've so many requirements in the pipeline that we frequently do need 4+ branches. I think we've 3 active requirements at the moment and 1 branch being used for a testing infrastructure upgrade. We've a good system of branching and merging frequently: every new requirement gets a new branch, and every requirement is merged back to our trunk branch as it's completed. We keep the requirements short, 2-3 sprints (6-9 weeks) in duration. The branches rarely have any problem reintegrating, as requirements don't overlap too often in a bad way.
    kayos wrote: »
    Why so many Branches?
    If I was working across 5 branches of the same code base then, Houston, we have a problem. The only time that many branches would be acceptable in my eyes is where the software is a product and the different versions are branched. In that case, a test being broken in the Product Version 5 branch does not mean it's broken in the Product Version 3 branch. If you have 5 branches of the same code and version then I'd have to ask why. Yes, there are always exceptions to the rule, but it seems like a mess to me. So why do you run multiple branches?
    I think you've a good point below; I will have to ponder it further. As our requirements do come across completed, they are usually tested pretty well before merging to trunk, retested after that, then merged to preprod and retested again. So they usually work well, but they do tend to hit the trunk branch all or nothing. E.g. 600 files were merged today.
    kayos wrote: »
    If you have 5 devs working across 5 branches in isolation then you're going to run into problems when it comes to merging them all back together. It also removes the benefit of CI, because if I'm on a private branch I am only ever integrating into my branch. CI is for the main branch, to ensure my changes integrate and play nice with everyone else's work.

    I'm inclined to agree, reluctantly. I guess I like full CI builds and testing constantly. However, resources are becoming a problem and we sure are using A LOT of them.
    kayos wrote: »
    CI builds should be frequent and quick. If your data tests are slowing things down it would be acceptable to run these only nightly (run them at night when the systems are quiet). Again, depending on the tests you want to run, mocking could be of use.


    We are definitely lacking here. Not in a dismal way, but certainly from a unit test point of view: we are not doing TDD. We have good acceptance tests written before coding, most of them, or in parallel by an assigned BA, but I do think there is a large canyon between the BA and the developer when it comes to unifying an approach. I shall definitely look into this. Thanks for driving it back in; simple things get forgotten so quickly! I may look into a more conducive toolset for aligning the BA and developer work.
    kayos wrote: »
    TDD/BDD
    I should not need to explain why these are good! If you do not do it then read up and see why you should :).

    Well... I might be a noob at what I do, but that is a fantastic idea. I'm sick to death of people checking in small modifications and triggering a build, tests and acceptance runs automatically on the CI server, only to find a stupid error in their code which brought down the whole environment.
    Gated check-ins are a GREAT idea. I'm gonna read up on these tomorrow and see if there is anything out there we can use. Can't use TFS I'm afraid; it will need to be open and free! lol.
    kayos wrote: »
    Gated Check-ins
    I've been wanting to mess with gated check-ins for a while, but sadly I'm not working with TFS any more and won't get to try it. If TFS is your source repository, do look into it. But seeing as you are using or looking at Hudson/Jenkins, I doubt you're running TFS (no need for Hudson/Jenkins when TFS has build management built in).

    Done. New, but done. In fact, I did the first one today. I was quite surprised how many things I'd class as defective in the code I saw. It was a big requirement with only small defects, but some could be potentially harmful.
    I'll look into the tool mentioned; I only used Word to keep track of the problems. Thanks.
    kayos wrote: »
    Code Reviews
    For code reviews you could use something like Review Board. Good point: it can be set up to automatically send out review requests on a check-in. Bad point: it's a bit late to review when you have already checked in. You could do your reviews inline with your development work, i.e. code in pairs.

    Yeah, our UI testing is decent enough. It's based on a structured language for designing the tests, like "page contains xyz", "enter blah into BlahField", "press submit". No pixels etc., so they pass grand. I find they are quite resource-hungry, but I don't know if this is our tool or our tests. FitNesse is the name of the tool, paired with Selenium; it's a powerful combo, but definitely starting to hit its mechanical and design limitations. Verbose and badly structured is not the word!
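    One thing that can cut down that verbosity and brittleness (an illustrative sketch, not from the thread; it assumes Selenium WebDriver and an invented login page with element ids "username", "password" and "submit") is the Page Object pattern, which keeps the structural detail in one class so the tests read like the intent:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // Hypothetical page object: all locators live here, so a renamed field
    // only needs fixing in one place rather than in every test.
    public class LoginPage {

        private final WebDriver driver;

        public LoginPage(WebDriver driver) {
            this.driver = driver;
        }

        public LoginPage enterUsername(String username) {
            driver.findElement(By.id("username")).sendKeys(username);
            return this;
        }

        public LoginPage enterPassword(String password) {
            driver.findElement(By.id("password")).sendKeys(password);
            return this;
        }

        public void submit() {
            driver.findElement(By.id("submit")).click();
        }
    }

    A test then reads something like new LoginPage(driver).enterUsername("bob").enterPassword("secret").submit(), and only the page object needs to change when the markup does.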
    kayos wrote: »
    Testing
    Automate, automate and automate. If the system under test is highly UI-driven, following patterns such as MVC/MVVM can help you automate testing without the need for recorded UI tests. This means the tests are more robust and less likely to fail because a button moved 3 pixels to the left...

    Word of warning: do not for one second think that high code coverage means better software. In fact, the tests need to be reviewed more than the code, as it's easy to write tests that cover 100% of the code and test 0% of it! Your reports will tell you that everything has been tested and is working, but that's far from true.


    Thanks for all the suggestions. I'll begin learning about the ones I can use this week. It's good to have some fresh insight. Much appreciated.


  • Closed Accounts Posts: 638 ✭✭✭theTinker


    jester77 wrote: »
    Gitflow would help here

    Thanks for that. Interesting concepts.

    I happen to think we use it already, by accident.

    We have trunk for production, always a copy of production.
    We have a development branch for everything.
    Each new requirement branches off development, does the feature work and tests, then is remerged back to development and retested. Then those changes (by commit/revision numbers) are moved to trunk for release.

    Seems very similar. It's a good flow. However, when someone breaks something in development... hmm... now that I think about it, the build breaks usually come from the development branch, not the feature branches.

    Perhaps this is a flaw in our usage of the development branch for 'everything' that's not a feature requirement...

    Any thoughts? I'll look into gitflow more anyway; maybe it can help us redesign our flow.


  • Closed Accounts Posts: 638 ✭✭✭theTinker


    stevenmu wrote: »
    I've heard some good things about Maven, though I don't really know anything about it myself.

    Since you seem quite process/test focused: I read a good article by John Carmack on static code analysis recently that could be something worth looking at: http://altdevblogaday.com/2011/12/24/static-code-analysis/

    Although when I mentioned researching new technologies and techniques, it was as much from the point of view of what the developers are using to actually write code. There may be new platforms or frameworks worth being aware of, or new methodologies, practices or patterns within your existing platforms and frameworks that could help your development a lot.

    Another thing I'd suggest: rather than trying to apply every improvement you can possibly find, target changes more specifically at your unique setup. You will see greater returns by identifying your organisation's unique trouble spots and then introducing solutions for those, as opposed to a blanket introduction of every best practice you can find. You can potentially identify trouble spots with periodic process review meetings with developers, where you can find out where they're spending lots of time that they don't need to, what issues they keep bumping into, and get their ideas about what would make their jobs easier and more productive.

    Yeah, many good points. I'm trying hard not to implement new technologies or tools without having very valid business cases for them.
    I have no desire to go through upgrade work and learning new tech standards simply to update a process we already do, without any business improvement.
    I want something that really improves our productivity.
    I got the idea earlier to track everyone's work (by their timesheets) and see whether I can find any abnormalities in it, e.g. are they spending a lot of time managing environments etc. and not actively producing new code.


  • Registered Users Posts: 2,023 ✭✭✭Colonel Panic


    Man, I wish I worked on your team! I don't have anything to add, but I've enjoyed reading the thread. Will see what I can do in my own place methinks.


  • Closed Accounts Posts: 638 ✭✭✭theTinker


    Man, I wish I worked on your team! I don't have anything to add, but I've enjoyed reading the thread. Will see what I can do in my own place methinks.

    Frankly, it's the quality of a very few in here that have brought such good standards; before here, I wouldn't even have used CI. Great to learn from them all.

    Thanks for all the helpful responses. They gave me some really good ideas for new build infrastructure and test structure design. Very high level, but I'll be creating a report of my analysis and proposals over the near term. I'll be happy to stick up my findings and perhaps others can use them.


  • Registered Users Posts: 1,785 ✭✭✭Farls


    First off fantastic thread.

    Can I ask what methodology you are using? I'm guessing it's Agile? Do you have a QA department? and is it .NET or Java or ?

    Having tried and tested most CI servers, including a home-made one, I would go with Jenkins: a fantastic tool with a nice plugin for JIRA. One team here created their own 'blame' plugin that informs the team when the build broke and who broke it.

    50% unit test coverage appears low; we are at 87%, and over 100% in some modules, having started at 65% months ago. We push the bar up every once in a while, and if the coverage falls below our threshold then the build fails.

    If a unit test requires access to something external that you can't mock, then it's not a unit test and should be rewritten or moved elsewhere.

    I have no idea why you have so many branches. We have 1 branch per product and merge once per sprint (2-week sprints), or if we need to release a patch.

    Run full integration tests nightly; run smoke tests (a small batch of integration tests to ensure the product works) every 30 minutes or every hour.
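    As a rough illustration of what such a smoke test might look like (an illustrative sketch, not from the original post, assuming a deployed test environment at a made-up URL), it only checks that the application is up and answering at all:

    import static org.junit.Assert.assertEquals;

    import java.net.HttpURLConnection;
    import java.net.URL;

    import org.junit.Test;

    public class SmokeTest {

        // Hypothetical test-environment URL, purely for illustration.
        private static final String APP_URL = "http://test-env.example.com/app/login";

        @Test
        public void applicationRespondsWithHttp200() throws Exception {
            HttpURLConnection connection =
                    (HttpURLConnection) new URL(APP_URL).openConnection();
            connection.setConnectTimeout(5000);
            connection.setReadTimeout(5000);

            // The smoke test only proves the deployment is alive; deeper
            // checks belong in the nightly integration suite.
            assertEquals(200, connection.getResponseCode());
        }
    }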

    Create solution tests/project tests that check namespaces, folder structure, references, copyright etc...the list is almost endless here.

    Personally I don't like gated check-ins; they're like a nappy that tries to catch simple mistakes we shouldn't be making, and I think they slow things down.

    Code reviews I find waste a lot of time as well, and should only happen on new developers' code. Refactoring is the best code review of all: if you can't understand what you're refactoring then somebody needs their ass kicked.

    Patterns & practices meetings... this is like R&D: every few weeks/months, have a meeting where some devs can introduce new patterns/practices to the codebase. It's also an opportunity to reiterate current ones and, where needed, show the bad code and then how it should have been done, without naming names :-)

    I have a lot of other things I would add to this but for the time being I think there is a lot to go on here.

    Kayos, you sound like the kind of guy we're currently looking for ;) Either that, or you're sitting at a desk not too far from me here already!!


  • Closed Accounts Posts: 638 ✭✭✭theTinker


    We are using Agile Scrum at the moment, with most of the implementation from the book entitled Scrum from the Trenches; "most" as in anything we didn't find too cumbersome or pedantic. It's worked well for us, about 4+ years now. We don't have a QA department per se, as it's not just a product company. We have BA QAs per team for UAT testing, requirement construction, sign-off etc.
    Farls wrote: »
    First off fantastic thread.

    Can I ask what methodology you are using? I'm guessing it's Agile? Do you have a QA department? and is it .NET or Java or ?

    Yeah, I've looked into the blame plugin; the wiki seems to be outdated for it, but their release seems current enough. I like it. I like public responsibility for things; it tends to get a lot less crap swept under the rug.
    Farls wrote: »
    Having tried and tested most CI servers, including a home-made one, I would go with Jenkins: a fantastic tool with a nice plugin for JIRA. One team here created their own 'blame' plugin that informs the team when the build broke and who broke it.

    It's beyond laughable at such a low level. However, the only comfort I have is that it's our unit test coverage that is 40-50%; as we have lots of web apps, I sleep a little better knowing we also have A LOT of FitNesse and Selenium testing going on which doesn't make it into those metrics.
    It needs to be increased badly. It's happening, but with such a large code base and so many new requirements coming in, it takes time. We (the devs) have been putting the refactoring of tests into active requirement time estimates whenever the requirement hits that package/module area.
    Farls wrote: »
    50% unit test coverage appears low; we are at 87%, and over 100% in some modules, having started at 65% months ago. We push the bar up every once in a while, and if the coverage falls below our threshold then the build fails.


    Yeah, I need to revise these parts quite a lot. As a whole there aren't a lot of options that I see happening. They are DAO layers and cannot be mocked out any further; they make calls to external systems such as DBs and even a few remote web services. I am loath to move them to a slow-tests group and run them once a day, but we may have no choice.
    Farls wrote: »
    If a unit test requires access to something external that you can't mock, then it's not a unit test and should be rewritten or moved elsewhere.

    We usually have 2-4 requirements on the go. Sometimes there are business delays or marketing delays which put things on a back burner for months, so we allow a branch for sleeping features; 2-3 are active. And there is usually one development branch for merging everything together: this is our root branch. We have another branch for preprod, which we move revisions to from the root branch.
    So: feature branches -> 1 root branch -> preprod branch. Stuff that is ready but not being released, or development environment stuff like test fixtures for FitNesse etc., is left in root.
    Farls wrote: »
    I have no idea why you have so many branches. We have 1 branch per product and merge once per sprint (2-week sprints), or if we need to release a patch.
    We do the integration stuff nightly, along with the automated UAT tests, as they take hours. We don't do smoke tests; I'll think about that one. We usually just run unit tests on commit.
    Farls wrote: »
    Run full integration tests nightly; run smoke tests (a small batch of integration tests to ensure the product works) every 30 minutes or every hour.

    This is a new one; I've never seen something with that purpose. It would not turn up much very often, I would think, but that is a good thing! However, when it did, it would be very beneficial I'm sure. Any further explanation of how you run this type of stuff in your own place? It's interesting.
    Farls wrote: »
    Create solution tests/project tests that check namespaces, folder structure, references, copyright etc...the list is almost endless here.

    Then you will hate the report I'm making lol. I'm leaning towards a gated branch infrastructure: commits to the gated branch force a build, and if the build is successful, the commits are moved to the root development branch. I agree that it technically should not be needed (our unit tests are there to be run locally before commits to the repository), but I think people will always try to commit without proper testing. Little changes that they think "it'll never break anything, sure it's tiny!" lol :)
    Farls wrote: »
    Personally I don't like gated check-ins; they're like a nappy that tries to catch simple mistakes we shouldn't be making, and I think they slow things down.

    We've just started them and I found the first one immensely useful. I learnt a lot about a new requirement going in that I was not working on, just by reviewing another guy's code. I also listed about 20 queries which may turn up mistakes or enlighten me, and about 10-15 things I class as issues/problems/errors, most small or just sloppy. The other guy is a good programmer too. This was based on 700+ files changed.
    Farls wrote: »
    Code reviews I find waste a lot of time as well, and should only happen on new developers' code. Refactoring is the best code review of all: if you can't understand what you're refactoring then somebody needs their ass kicked.

    Great idea. I learnt half of my programming best practices from other people's mistakes, and from other people pointing out my mistakes... that, and a great book I believe was entitled Java Patterns Made Easy. I'll have to check that name... but... wow... that book just upgraded me by about 2 years' experience in a week.
    Farls wrote: »
    Patterns & practices meetings... this is like R&D: every few weeks/months, have a meeting where some devs can introduce new patterns/practices to the codebase. It's also an opportunity to reiterate current ones and, where needed, show the bad code and then how it should have been done, without naming names :-)

    Thanks. I enjoy my work a lot. I do need to learn to go home at 5pm though; my OH must be suspecting things at this stage lol.
    Haha, maybe we do... or maybe we will! It's a small industry!
    Farls wrote: »
    Kayos, you sound like the kind of guy we're currently looking for ;) Either that, or you're sitting at a desk not too far from me here already!!

    Thanks for the feedback. The ideas of solution tests and patterns & practices meetings are very interesting and sound like good additions.


  • Registered Users Posts: 2,494 ✭✭✭kayos


    Farls wrote: »
    Personally I don't like gated check-ins; they're like a nappy that tries to catch simple mistakes we shouldn't be making, and I think they slow things down.

    You are right, people shouldn't be making stupid mistakes, but after nearly 15 years in the industry I can assure you they do, myself included. The theory behind gated check-ins is that nothing ever hits the main stream until it has passed CI. So if you break a test, your coverage is too low, or there are too many nasty static analysis warnings etc., it's booted out and never "infects" the main code stream. Can you honestly say you have never pulled down the latest source from the repository and found it broken? It shouldn't happen, but it does. Not that gated check-ins will prevent this totally, but it's better than nothing.
    Farls wrote: »
    Kayos, you sound like the kind of guy we're currently looking for ;) Either that, or you're sitting at a desk not too far from me here already!!

    I'm a .NET guy stuck in a Linux C/C++ & Java world.
    theTinker wrote:
    We usually have 2-4 requirements on the go. Sometimes there are business delays or marketing delays which put things on a back burner for months, so we allow a branch for sleeping features; 2-3 are active. And there is usually one development branch for merging everything together: this is our root branch. We have another branch for preprod, which we move revisions to from the root branch.
    So: feature branches -> 1 root branch -> preprod branch. Stuff that is ready but not being released, or development environment stuff like test fixtures for FitNesse etc., is left in root.

    You're still not convincing me you need that many branches. It sounds more like this is how it's currently done and you are accepting that.

    First off it also sounds like your agile process is failing.

    If you have started on a user story and it then gets put on the back burner, your product owner has failed to set the correct priorities in the backlog. The sprint should be protected from changes like this.

    If you have started on a user story and it then turns out you don't have all the required information, it should never have been committed to in the first place. The team should have identified this in their prep work for the sprint and, in sprint planning, refused to place it in the sprint backlog due to the missing information.

    If you commit to doing something in a sprint, it should be done in the sprint and available in the product at the end. It should not be developed in a branch, never pulled into the main stream, and left to sit idle. If you come back to it in a couple of months, does it merge cleanly and without the need to rework some of the code?

    What benefit does having 3 devs work on branch 1 doing feature 1, while 3 other devs work on feature 2 in branch 2, give you?
    All you get is a delay in integration. All 6 should be working on a single branch. Then when I check in code, everyone gets the latest and it's integrated nice and early. If you work in multiple branches, no one is ever working on a full set of the latest code until the feature 1 branch gets merged to dev and then merged down to the feature 2 branch, and vice versa. That's a lot of unneeded time lag and possible trouble in the merges. Remember, no one should be checking in anything that is incomplete (that's what shelvesets are for, if you have a decent source control system), and they sure as hell should not be checking in broken code. So there is no reason in my mind for all those branches other than causing extra work.

    Tell me this: if you have 3 features in development, how many scrum teams are working on those features?

    Please note I completely understand that sometimes some features need to be done in isolation, but this should be the exception, not the rule. As I'm picking on this point so much, you can see that I view this excessive branching as poor practice.


  • Registered Users Posts: 1,785 ✭✭✭Farls


    The solution tests run on all check-ins; they're basically global unit tests that run against everything in your codebase. They are unit tests that check for the type of things I've mentioned already, like copyright information, namespaces, references, directory naming, empty methods, coded UI tests not linked to stories in TFS, project settings... like I said, the list is almost endless.
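    To make the idea concrete (an illustrative sketch, not Farls's actual suite, translated to a Java/JUnit setting with an assumed src/main/java layout and header text), a solution test for copyright headers could be as simple as walking the source tree:

    import static org.junit.Assert.assertEquals;

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    import org.junit.Test;

    public class CopyrightHeaderSolutionTest {

        // Assumed layout and header text, purely for illustration.
        private static final File SOURCE_ROOT = new File("src/main/java");
        private static final String HEADER = "/* Copyright";

        @Test
        public void everySourceFileStartsWithTheCopyrightHeader() throws IOException {
            List<String> offenders = new ArrayList<String>();
            collect(SOURCE_ROOT, offenders);
            assertEquals("Files missing the copyright header: " + offenders,
                    0, offenders.size());
        }

        // Recursively gathers .java files whose first line lacks the header.
        private void collect(File dir, List<String> offenders) throws IOException {
            File[] children = dir.listFiles();
            if (children == null) {
                return;
            }
            for (File child : children) {
                if (child.isDirectory()) {
                    collect(child, offenders);
                } else if (child.getName().endsWith(".java") && !startsWithHeader(child)) {
                    offenders.add(child.getPath());
                }
            }
        }

        private boolean startsWithHeader(File file) throws IOException {
            BufferedReader reader = new BufferedReader(new FileReader(file));
            try {
                String firstLine = reader.readLine();
                return firstLine != null && firstLine.startsWith(HEADER);
            } finally {
                reader.close();
            }
        }
    }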

    What these mean is that we no longer rely on people remembering to do X, Y and Z; we now ensure they remember, because the build breaks with their name on it!

    Patterns and practices meetings are fantastic also; they force developers to understand more about what they are doing and why. As I found out a few weeks ago, you don't really understand something until you have to explain it to somebody else! It also keeps the bar high and skills sharp, and the bad code/good code examples are very beneficial for learning and for joking about, which helps with team spirit and morale.

    Just running unit tests on a build gives confidence that things are green, but it by no means proves that anything works. I would advise getting a simple smoke test going ASAP.

    When I said our code coverage, I meant unit test coverage... some metrics:

    We have 550k lines of code on my current project, excluding whitespace, and we don't use comments (code is meant to be naturally readable; if you need a comment, your code is too complicated or you're creating an API). On that code base we have around 5k unit tests and 2k automated tests. I'm not sure what our automated coverage is, but it will be very high as almost everything is tested.

    I must take a look at that book you mentioned. As for the OH getting suspicious as to what you're up to, I've been in the same position for the past 2 years :-) "CI" might also stand for Continuous Improvement.


  • Registered Users Posts: 7,157 ✭✭✭srsly78


    Farls wrote: »
    we don't use comments

    [image: "not sure if serious" meme]


  • Registered Users Posts: 11,262 ✭✭✭✭jester77


    Farls wrote: »
    We have 550k lines of code on my current project, excluding whitespace, and we don't use comments (code is meant to be naturally readable; if you need a comment, your code is too complicated or you're creating an API). On that code base we have around 5k unit tests and 2k automated tests. I'm not sure what our automated coverage is, but it will be very high as almost everything is tested.

    :eek: For real??

    I really pity anyone joining that team, especially a junior dev who may not be too experienced.


  • Registered Users Posts: 1,785 ✭✭✭Farls


    jester77 wrote: »
    :eek: For real??

    I really pity anyone joining that team, especially a junior dev who may not be too experienced.

    The reasons comments are normally used are twofold: firstly, they are used for APIs, and secondly, to explain complicated code for maintainability.

    Firstly, we aren't creating an API, and secondly, we don't have complicated code that is hard to maintain. Our code reads like English and follows strict patterns and practices. This is something a junior developer would benefit from, and many have, instead of looking for code in a sea of green or, worse again, badly written code with little or no green!

    I've spent years working with commented code and the past year without comments, and I'd never go back. I have to admit it was a shock at first, but I realise now how badly I was writing my code before, i.e. badly named variables, method signatures and classes, huge methods that do multiple things (same with classes), no defined structure and simple laziness!


  • Registered Users Posts: 7,157 ✭✭✭srsly78


    Mickey mouse stuff doesn't need any commenting no. Doesn't need highly paid developers either.


  • Closed Accounts Posts: 638 ✭✭✭theTinker


    You're right, it is how it's currently done and I am accepting that, but that's because I like it this way and it's working out well for us. The branches are created and destroyed often, so there isn't much integration to be done, and changes for a feature are kept in isolation from the rest of the environments until they are ready.

    The product owner does fail to set the correct priorities sometimes. There's nothing I can do about that; it's a largely business-driven model which I'm not privy to. It's not an IT company releasing tech. These are solutions to market-driven forces which change rapidly, squabble with inter-rival politics, and sometimes need to be done for really odd business reasons. I don't agree with some of them, but I work in the team that makes the solutions; I'm not the guy that decides what gets made. Our budget and survival depend on releasing what they want.
    kayos wrote: »
    You're still not convincing me you need that many branches. It sounds more like this is how it's currently done and you are accepting that.

    First off it also sounds like your agile process is failing.

    If you have started on a user story and it then gets put on the back burner, your product owner has failed to set the correct priorities in the backlog. The sprint should be protected from changes like this.
    This isn't something that happens. We usually have good information given to us and it's pretty complete by the time we start. I don't know what gave this impression?
    kayos wrote: »
    If you have started on a user story and it then turns out you don't have all the required information, it should never have been committed to in the first place. The team should have identified this in their prep work for the sprint and, in sprint planning, refused to place it in the sprint backlog due to the missing information.
    No, of course it doesn't, but there's nothing we can do about that. It rarely happens, in fact only twice in 5 years if I remember correctly, and it's always just market-driven forces or political manoeuvring changing the required business solution that makes this happen. I wish they were just technical issues!
    kayos wrote: »
    If you commit to doing something in a sprint, it should be done in the sprint and available in the product at the end. It should not be developed in a branch, never pulled into the main stream, and left to sit idle. If you come back to it in a couple of months, does it merge cleanly and without the need to rework some of the code?

    I do see some of your concerns, but in the case you've highlighted of branch 1 doing feature 1 and branch 2 doing feature 2, it gives us the much-needed benefit of releasing feature 1 when it's due for release, before feature 2. Our requirements get released at different times.
    How would you suggest handling this situation with one branch? Like, how do you partly release your code base?

    For us, feature 1 would be released, and feature 2 would merge in feature 1's changes when they get merged back into the root development branch, then be released. How is it handled in your place?
    kayos wrote: »
    What benefit does having 3 devs work on branch 1 doing feature 1, while 3 other devs work on feature 2 in branch 2, give you?
    All you get is a delay in integration. All 6 should be working on a single branch. Then when I check in code, everyone gets the latest and it's integrated nice and early. If you work in multiple branches, no one is ever working on a full set of the latest code until the feature 1 branch gets merged to dev and then merged down to the feature 2 branch, and vice versa. That's a lot of unneeded time lag and possible trouble in the merges. Remember, no one should be checking in anything that is incomplete (that's what shelvesets are for, if you have a decent source control system), and they sure as hell should not be checking in broken code. So there is no reason in my mind for all those branches other than causing extra work.

    We're 1 team; we've about 5-6 developers atm, and about 3 features being done. Feature 1 is coming to an end and is being merged into root dev and then into preprod. Feature 2 is setting itself up and they are building tests for it before coding it. Feature 3 is midway.
    I'm on SWAT at the moment.
    What does your team do in this situation?
    kayos wrote: »
    Tell me this: if you have 3 features in development, how many scrum teams are working on those features?

    No worries, carry on picking. I'm very interested in improving things around here, and I certainly don't take finding fault with my work and setup personally. It just helps it get a bit better.
    kayos wrote: »
    Please note I completely understand that sometimes some features need to be done in isolation, but this should be the exception, not the rule. As I'm picking on this point so much, you can see that I view this excessive branching as poor practice.


  • Closed Accounts Posts: 638 ✭✭✭theTinker


    Wow, big difference. We've about 110k I THINK, pure Java source, not counting WS stubs, files, tests, etc.
    We've 3.5k tests and only hit about 40% coverage. All are automated.
    Farls wrote: »
    We have 550k lines of code on my current project, excluding whitespace, and we don't use comments (code is meant to be naturally readable; if you need a comment, your code is too complicated or you're creating an API). On that code base we have around 5k unit tests and 2k automated tests. I'm not sure what our automated coverage is, but it will be very high as almost everything is tested.


  • Registered Users Posts: 1,785 ✭✭✭Farls


    srsly78 wrote: »
    Mickey mouse stuff doesn't need any commenting no. Doesn't need highly paid developers either.

    :confused:


  • Registered Users Posts: 2,494 ✭✭✭kayos


    I jinxed myself..... new project = branch per feature ><


  • Closed Accounts Posts: 638 ✭✭✭theTinker


    kayos wrote: »
    I jinxed myself..... new project = branch per feature ><

    And our team is raging over it!

    Ironically, I was pushing your single-branch approach to my team today.

    Lots to consider first, but perhaps we will be swapping shoes!

