
Why C?


Comments

  • Registered Users Posts: 4,003 ✭✭✭rsynnott


    damnyanks wrote:
    I would never consider programming an art. It's a trade... at least learning how to do it. To do it correctly is a different story.

    Wuh? Any idiot can bang together code to do a given thing. The art is in writing code to do that thing well, in a reliable and maintainable fashion.


  • Closed Accounts Posts: 7,563 ✭✭✭leeroybrown


    C does seem to be the predominant first-year teaching language in the colleges. I think the main reason for this is historical: C has been taught like this for years, going back long before the other options people would think of (C#, Java, etc.) even existed, and universities in general tend to change slowly unless pushed. C is still a very useful language to have, and most of the languages taught in later years tend to have C-like syntax.

    I studied C in college and found it a very useful way to learn a lot about programming. That said, many people who had less of an aptitude for programming found it difficult.
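
    The "C-like syntax" point is easy to see in a small example. A function like the sketch below (made up for illustration, not from any course) would compile almost unchanged in Java or C#, because the braces, the for-loop, and the operators are all things those languages inherited from C:

```c
/* Sums 1..n. The body of this function is, character for character,
   almost valid Java and C# as well -- only the surrounding boilerplate
   (class declarations, etc.) would differ in those languages. */
int sum_to(int n)
{
    int total = 0;
    for (int i = 1; i <= n; i++)
        total += i;
    return total;
}
```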


  • Registered Users Posts: 21,264 ✭✭✭✭Hobbes


    kenmc wrote:
    I would argue that an OO language is not the best one to start off on, because if you become accustomed to OO methodology you would potentially find it quite difficult to go "back" a step to something which didn't handle OO eg C.

    And I disagree. OO is partly a language type, but it is also a programming methodology, something a programmer has to learn if they are to write maintainable code. C doesn't offer the maintainability of OO languages.
    There is no one language that can be called "better" than the others

    Again, you are incorrect. Certain languages were designed for teaching programming, not for production coding, for example PASCAL and LOGO.
    i.e. a master C programmer would be more than capable of churning out a more stable, secure program faster than a relative newbie in Java or C++ or whatever

    And a master Java/C++ programmer can be more capable of churning out a more stable, secure program faster than a relative newbie in C. I don't get your point.
    In my opinion programming is an art more so than a science,

    It can very much be both. UML, patterns, unit testing and documenting are all more to do with the science than the art form.

    The days of the solo programmer are more or less over. Because of this, it is better to learn how to write code that other people can be put onto, or that can have modules upgraded or changed 5-10 years down the line without destroying the underlying code. C doesn't offer that as easily from the start as an OO language would.
    (again related to who and what rather than what, hence why it's so difficult to say how long a module will take to write - depends on the writer)

    While that is only partly true, the writing of such a document isn't. We routinely use costing documents with UML diagrams to lay out the code, along with times for each part of the module. Generally a costing document (for us) has to be broken up into steps of five days or less; anything over five days has to be broken down into its subparts. Any professional coder who can't do this, or who starts writing the code first, is writing piss-poor code.
    Once someone has an understanding of programming and the thought process behind it, then they can start to worry about how to write secure code.

    Writing secure, maintainable code. As I said, C doesn't offer that as easily as other languages.
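
    For what it's worth, C can imitate OO-style encapsulation, but only by convention. The hypothetical "counter" module below (names invented for illustration) shows both the usual pattern and its weakness: nothing in the language stops a caller from reaching into the struct directly.

```c
#include <stdlib.h>

/* A struct plus a family of functions is how C code commonly imitates
   an OO class. The "private" field is private only by convention:
   the compiler will happily let any caller write c->value directly. */
typedef struct Counter {
    int value; /* "private" by convention, not enforced */
} Counter;

Counter *counter_new(void)             { return calloc(1, sizeof(Counter)); }
void     counter_increment(Counter *c) { c->value++; }
int      counter_get(const Counter *c) { return c->value; }
void     counter_free(Counter *c)      { free(c); }
```

    (The stricter variant is an opaque pointer, where the struct definition lives only in the .c file, but even that is a discipline the programmer must maintain rather than something the language checks the way an OO language does.)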


  • Registered Users Posts: 4,276 ✭✭✭damnyanks


    kenmc wrote:
    Interesting.... why a trade? you mean like a plumber/electrician/plasterer/bricklayer???
    If that's the sort of trade you're likening it to then I don't really understand the link - tradesmen don't do any of the design work involved with whatever they're building - they're just told to build a wall here, plaster it, put sockets here there and everywhere.....
    Whereas in programming/SW engineering it's all about design - least the stuff I do is....
    like the architect designs the house, the programmer/SW engineer designs the system....
    Still reckon it's an art to be honest......

    The design stage can be seen as an artistic act, but the programming is like a trade. How many developers around the world are just given a design document and told to build it?

    I have no problem with someone considering it an art, but a normal tradesman can think of their work as an art form as well. I think that is more along the lines of having pride in what you do. I love nothing more than having something I have created from scratch work correctly.

    You see the system, program, hello world ;) grow from nothing. It is a difference of opinions, I guess. Software is now at the stage where it is everywhere, where it affects people's lives and in some cases can be responsible for people being injured, if not worse (weapons systems, software crashes, yada yada yada :D)


  • Registered Users Posts: 4,276 ✭✭✭damnyanks


    rsynnott wrote:
    Wuh? Any idiot can bang together code to do a given thing. The art is in writing code to do that thing well, in a reliable and maintainable fashion.

    What happens when model-based development finally kicks in? They've been saying it for years, so lord knows when it will happen. But back in the day people probably scoffed at the idea of writing C to create assembly code.

    Like I somewhat stated above, software is now such a huge part of life that artistic expression in code will probably become a thing of the past. The IEEE and other bodies (SEI) are making a move to turn software engineering into a true engineering discipline, to eliminate errors in development (code re-use and general standards) and make it as rapid as possible.

    Ok... I do know that I am making little sense at the moment. I will edit the post later or reply. A book on software engineering is always an interesting read (at least for me :D). It's amazing to see the screw-ups that happened when people tried to create their visions.

    But its the 18th... I'm Irish and I live abroad so I went a bit too Patrick yesterday. After some sleep I will make it readable :)


  • Registered Users Posts: 7,276 ✭✭✭kenmc


    damnyanks wrote:
    Like I somewhat stated above, software is now such a huge part of life that artistic expression in code will probably become a thing of the past. The IEEE and other bodies (SEI) are making a move to turn software engineering into a true engineering discipline, to eliminate errors in development (code re-use and general standards) and make it as rapid as possible.

    Ah, see, that's where I think they're barking up the wrong tree. In civil engineering it's all quite easy to do something like this - e.g. to pour a foundation wall of x*y*z will take x*y*z cubic units of concrete and will require function(y*z) hours to harden. When it's done, it's done. This can be easily scaled to a whole building, as you're working with tangible entities - a brickie can lay X blocks an hour and hence can build a wall of a*b in a defined number of hours (assuming he has a supply of raw materials and doesn't take too long over lunch).

    In software, though, there is nothing tangible. How long will it take to write a tuner driver for the LSI L64733 chip? Well, that's just like asking how long a piece of string is. Sure, you can break it down into chunks and "day sized" work blocks, but there are SOOOOO many variables... how good is the chip spec? Has the programmer ever done a tuner driver before? What OS is it running on? What middleware will it use? Then there's all sorts of things like timing, signal degradation, etc.

    Estimation of the amount of work involved in a particular task is one of the biggest problems facing development teams, especially when trying to debug a problem that happens, say, once every 2 days. How can you estimate how long it will take to do this? This is part of where the "art" comes in. Different programmers can debug things in different/cleverer/faster ways than others. And so many things can seemingly affect timing issues, and yet the answer can be something completely different...

    Ok, probably too much info, but a recent example I had was an audio glitch appearing every so often in a system I was developing. It seemed to indicate a memory leak, especially as the glitch went away when I added some dummy variables into a structure to push it down the stack a bit. After a hell of a lot of head scratching, the problem turned out to be nothing at all to do with memory: the dynamic declaration of the extra variables simply slowed the processor *just* enough that an earlier write to the audio chip had time to be accepted by the chip before the next write went out, so the 2 writes were correctly handled and no glitch occurred.

    How the hell can you account for this sort of thing in "science", where everything has a defined behaviour? How can you account for an unknown bug in a chip whose documentation says that register X is not cleared on a soft reset, when it turns out that it really is?

    The above problem had already been examined by 2 other programmers without success. I got lucky; I tried something they didn't. Dunno how or why I decided to do so, but I did, and it turned out to be the right answer. I'm not saying that I was better than the other 2, I just thought up something different. This sort of bug happens all the time in real world systems, and in this case it was not likely to cause anyone to die, but it would seriously pi$$ someone off if they were watching tv and the audio started acting up at random intervals.

    Ok I think I've probably gone off on a tangent here so I'm sorry, but I just wanted to stress the points floating through my mind.

    As for "standards", I'm all for them - every project we work on has a coding standard which must be followed, e.g. stuff like you can't use strcpy, only strncpy, to (hopefully) prevent buffer overflows - but even then there's no guarantee that the programmer will not allow the buffer to get written with more data than it can store. As for estimation, well, I would dearly love for there to be a better way of estimating work, to stop the inevitable late nights and weekends that come with the job!!!
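
    The strcpy/strncpy rule mentioned above has a well-known wrinkle worth sketching: strncpy bounds the copy, but it does not null-terminate the destination when the source fills the buffer, so coding standards usually mandate a wrapper along these lines (a sketch; the function name here is made up):

```c
#include <string.h>

/* Bounded string copy that always null-terminates. strncpy alone
   leaves dst unterminated when strlen(src) >= dst_size - 1, which is
   exactly the case a truncating copy exists to handle. */
void safe_copy(char *dst, size_t dst_size, const char *src)
{
    strncpy(dst, src, dst_size - 1);
    dst[dst_size - 1] = '\0'; /* guarantee termination on truncation */
}
```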

    In my experience code reuse is generally not feasible. Different customers won't allow you to use their code for others, unsurprisingly. Going from one OS to another means that large changes are needed. Different middleware - again a change. Different processors/chips handle things differently; some don't have any floating point. Different amounts of memory/flash. I think there are just too many variables in the system, and the field is too new. We've been building structures for thousands of years; we've been programming for what - 40? We've a hell of a long way to go yet.

    Anyway, how this has gone from "Why C" to "Is programming an art or a science" eludes me, but I think it's all my fault. Now back to work for me. I've a problem to debug :) But before I go.....

    "Mostly, when you see programmers, they aren't doing anything. One of the attractive things about programmers is that you cannot tell whether or not they are working simply by looking at them. Very often they're sitting there seemingly drinking coffee and gossiping, or just staring into space. What the programmer is trying to do is get a handle on all the individual and unrelated ideas that are scampering around in his head." (Charles M Strauss)


  • Registered Users Posts: 21,264 ✭✭✭✭Hobbes


    kenmc wrote:
    In software though, there is nothing tangible. How long will it take to write a tuner driver for the LSI L64733 chip? Well thats just like asking how long a piece of string is. Sure you can break it down into chunks and "day sized" work blocks, but there are SOOOOO many variables.... how good is the chipspec? Has the programmer ever done a tuner driver before? What OS is it running on? What middleware will it use? Then there's all sorts of things like timing, signal degradition etc etc.

    They are all part of your costing document. A (proper) costing document doesn't fit within a set time but lists off all the knowns, possible unknowns and critical parts that need to be overcome. Other parts of what you listed off would be put into the functional spec.


    A bad costing document, from management's point of view, is where the developer is given X days to get all features in. A bad costing document, from the developer's point of view, is when they are incapable of breaking down the total time required to do something, or give silly totals that are not realistic.
    Estimation of the amount of work involved in a particular task is one of the biggest problems facing development teams. Especially when trying to debug a problem that happens eg once every 2 days. How can you estimate how long it will take to do this?

    You cost for a set time for fixing SPRs; if you have an SPR that is taking more hours, you either adjust the time (project or more developers) or pend the SPR for the next release while your developers work on other SPRs.
    This is part of where the "art" comes in. Different programmers can debug things in different/cleverer/faster ways than others.

    There is no art involved. If one developer finds a faster way, then they should document that system for other developers to follow. Any developer that doesn't do this is detrimental to the project.

    *snip* How the hell can you account for this sort of thing in "science" where everything has a defined behaviour.

    Unit testing.
    The above problem had already been examined by 2 other programmers without success. I got lucky. I tried something they didn't. Dunno how or why I decided to do so, but I did, and it turned out to be the right answer. I'm not saying that I was better than the other 2, I just thought up something different.

    Again you share this knowledge with the other developers. Document it and move on. You also write a unit test to check for this in the event it shows up again.

    Fixing *unknowns* can be an art form, but it doesn't mean that writing software is.
    As for estimation well I would dearly love for there to be a better way of estimating work, to stop the inevitable late nights and weekends that come with the job!!!

    If you are working silly hours it means that your project is poorly planned.
    I think there are just too many different variables in the system, and the field is too new.

    The field isn't that new at all. There are many kinds of patterns and processes for dealing with certain fields of computing. IEEE are a good resource on this.

    I also recommend reading Rapid Development.
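
    The unit-testing point above can be sketched in plain C with nothing more than assert; the clamp function here is just an invented stand-in, not anything from the thread:

```c
#include <assert.h>

/* Function under test: restrict x to the range [lo, hi]. */
int clamp(int x, int lo, int hi)
{
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}

/* A minimal unit test: the expected behaviour is pinned down once,
   so a later regression (or a reproduced flaky bug) fails loudly at
   build/test time instead of surfacing in the field. */
void test_clamp(void)
{
    assert(clamp(5, 0, 10) == 5);   /* in range: unchanged */
    assert(clamp(-3, 0, 10) == 0);  /* below range: clamped up */
    assert(clamp(42, 0, 10) == 10); /* above range: clamped down */
}
```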


  • Registered Users Posts: 2,082 ✭✭✭Tobias Greeshman


    To the OP: I think that the only reason to teach students C in an Electronics course would be to show how hardware is controlled by software. Since C can be used at a very low level and combined with inline assembly, it's a prime choice for such a course.
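
    A rough sketch of the kind of hardware-level control meant here: on a real board the register below would be a fixed memory-mapped address (something like a volatile pointer to 0x40021000, depending on the chip); a plain variable stands in so the snippet runs anywhere. The register name and bit position are invented for illustration.

```c
#include <stdint.h>

/* Stand-in for a memory-mapped status register. On real hardware this
   would be e.g.:
       #define STATUS_REG ((volatile uint8_t *)0x40021000)
   The volatile qualifier tells the compiler every access matters and
   must not be optimised away or reordered -- the essence of talking to
   hardware from C. */
static uint8_t fake_register;
static volatile uint8_t *const STATUS_REG = &fake_register;

void set_ready_flag(void)    { *STATUS_REG |= (1u << 3); }       /* set bit 3 */
int  ready_flag_is_set(void) { return (*STATUS_REG >> 3) & 1u; } /* read bit 3 */
```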


  • Closed Accounts Posts: 8,264 ✭✭✭RicardoSmith


    As in all walks of life, there's a lot of things that make someone good at what they do. Some of them would be good training, good work practice, and a talent (natural or learnt) for what they are doing. A programming language is a tool to do a job. You pick the right tool for the job.


  • Closed Accounts Posts: 25,848 ✭✭✭✭Zombrex


    I'm in 1st year Engineering and we've been doing some programming in C. Is there any reason to choose this language?

    Thanks

    The most reasonable answer is that most modern widely used languages (C++, Java, C#) are based on C syntax, and that C is a good example of a structured programming language, useful to learn before you get into more complex ideas such as OOP, AOP or COP.

    Plus C is a good middle ground between the structure of high level programs and the way they act when compiled down to machine code, so it is a good language if you are trying to grasp how programs do things rather than just what they do.
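
    That "middle ground" point can be made concrete: C lets you look at the raw bytes the machine actually stores for a value, something higher-level languages mostly hide. A small sketch (the function is made up for illustration; the byte order you see depends on the platform's endianness):

```c
#include <stdint.h>
#include <string.h>

/* Return byte i of a 32-bit value as the machine actually stores it.
   On a little-endian machine byte 0 of 0x01020304 is 0x04; on a
   big-endian machine it is 0x01. Seeing this is exactly the kind of
   "how programs do things" insight C gives that Java or C# hide. */
unsigned byte_at(uint32_t value, size_t i)
{
    unsigned char bytes[sizeof value];
    memcpy(bytes, &value, sizeof value); /* copy out the raw storage */
    return bytes[i];
}
```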


  • Registered Users Posts: 4,003 ✭✭✭rsynnott


    Of course, most systems programming is still done in C, and it's popular as well in the embedded field.

