
C# – Final conclusion (for now)

Disclaimer: not a well-thought-out post, just a little summary, with C# terms here and there.

I don’t want to write as much as I did in my past few posts about programming, so today’s C# post will be short. 🙂

My feelings towards C# and .NET are mixed. VS makes me want to play with it more. On the other hand, whenever I try out some neat trick I thought up a moment ago and it turns out not to work because the language or the framework doesn’t let me, I get very disappointed. Writing C# “the right way” requires a lot of typing. Implementing interfaces, even though VS has a feature to automatically add the required methods, feels like repeating the same thing a thousand times. .NET’s memory usage and speed make me think of users who would be very unhappy with programs written in it.

foreach would be usable if it had a built-in counter and didn’t demand implementing some interface (or at least a GetEnumerator method); as it is, it’s killing me, and it’s almost never the right choice (though elegant, for sure). Not knowing what’s going on behind the scenes is scary. I have an urge to avoid all the .NET classes written for specific purposes, as they tend to have terrible performance.

The aversion to pointers forces me to write everything with extra care. Does this need to be “boxed”? If I pass around lists, will they get recreated a lot? Anything that returns collections, enumerators and so on creates new objects; you throw them away and build your own just to return that, and this can go on for several levels. Pointers need care because they can cause really terrible things, but avoiding pointers gives you a whole lot of other problems. And it’s also very difficult to debug.

But then that feeling comes back that I must touch VS again…

Categories: Rant
  1. February 11, 2012 at 6:53 am

    Taking less memory is always better. By the way, thanks for the last update. I see a lot of fixes.

    • February 20, 2012 at 9:50 pm

      I wanted to reply for a long time but couldn’t get to it.

      I should probably write a new post, but I’ve written too much about C# already, so I’ll just comment here. I think C# is a nice language. .NET is screwed up in my opinion (I don’t want to offend anyone), but the language itself with a better library could be great. Garbage collection and the philosophy that memory is cheap could work, and enumerators and the like are probably the future. All I’m saying is that TODAY it still doesn’t look like a good alternative. But who knows what tomorrow brings? With so much improvement in computing capacity of every kind, it’s only a matter of time before a programmer won’t have to bother with memory allocation and pointers anymore. The development of such systems must happen today for that future to become reality, so I don’t want to stop anyone. Just give it 5–10 more years. I just wish Microsoft weren’t the one allowed to do it, because they always make things more complicated than necessary.

  2. AndyB
    August 29, 2012 at 8:55 pm

    Hi. You’re dreaming if you think MS can make simple things 🙂 They stopped being an engineering-driven company a decade ago, descending into infighting and politics instead, thanks to their internal management.

    But .NET suffers from the same problems as Java: massive memory usage and overly simple libraries that were knocked together quickly “because it’s such a simple language to write in” – hence no one stopped to think about what they were really doing.

    Memory is a bitch for performance, not just because you use a lot of it up, but because it costs a lot to get it from RAM into the CPU cache where it can be used. The more you need, the more time is spent shuffling it over the buses. Hence memory usage will always be expensive even if you have unlimited amounts of it (well, until someone comes up with a super way of directly attaching CPUs to all memory – like 100 GB of L1 CPU cache).

    I’m sure there are plenty of other issues too. I know my very first .NET app (a simple tool to perform some DB data conversion) was tremendously slow even though I spent a fair bit of time doing everything I could think of to make it faster – in the end I reverted to OLEDB C++ code and dropped a 13-hour run down to 13 minutes (give or take – it was that order of magnitude).

    My biggest problem with .NET, though, is how it encourages the dumbing down of programming – since MS told us the GC meant you never had to worry about memory or object lifetimes again, you knew the advice would be taken up by poor programmers who never tried to do better – and hence you end up with a lot of memory-leak detection tools for .NET apps, the SafeHandle class (for non-GCable data), and the StringBuilder class. They also told me exceptions were free, yet read Chris Brumme’s blog for the full details of how very, very slow they are. This kind of hype didn’t help anyone.

    The idea that hardware will get faster so the inefficiencies in .NET will become unnoticeable is a poor one. Today MS is going back to C++ because of the cost of running inefficient software in the cloud (why run 10 servers when an efficient program could give you the same throughput on 1 – think of the electricity usage!) or on mobile, with its limited battery.

    Visual Studio is nice though, it should serve as an incentive for Eclipse to get that bit better.

    • September 1, 2012 at 6:06 pm

      Hi, I do think that big companies like MS will always hinder progress. Any organization or person with that much power will, be it in business, politics or religion. But programming won’t always be what it is today, and C# might eventually help that happen.

      C# might use a lot of memory because it’s badly implemented, not because the language itself is bad – I don’t know. Speed is not critical for every program out there, and apart from the usual MS fanboys, many people like it because its concepts are easier than, say, C++’s, so it is very easy to learn. You can argue that this makes C# a dumbed-down language, and that’s probably correct; but by contrast, in C++ you can’t avoid rewriting the same thing a thousand times (or using some free libraries that force you to release your source as well), because it is not dumbed down enough. If .NET is C#’s weakness, then the lack of better standard libraries is C++’s. BUT I don’t think the languages themselves are at fault – their standard libraries are.

      I’m not saying hardware should get faster so we can better use the inefficient .NET and badly implemented(?) C#; it should get faster to better support languages with GC and other enhancements, or even new enhancements that are impossible today. And I hope that by then .NET and C# will be extinct and something better will have taken their place. It almost certainly won’t be MS that invents it, though…

      • AndyB
        September 1, 2012 at 8:31 pm

        C#’s not a bad language, but there are flaws in it. Firstly, it’s designed to use RAM – all GC-based systems are like that. You tell people that RAM is cheap, that they should use it and let the GC free it up without the programmer having to worry about it – that’s the point of a GC in many respects. Unfortunately, this has a side effect: people use a lot of RAM, objects are created and discarded much more often than in other systems, and temporaries are created almost by design.

        This filters into the libraries, and it’s why you’d say many of them are poor.

        Using lots of RAM is not a good thing. You may have lots of main system RAM, but transferring it into the CPU cache is not cheap, and the more you use, the slower you go, as you spend progressively more time on I/O. (It’s like threading: when it was first popularised on Windows, lots of people created threads for everything, and the system spent more time switching contexts than performing work.)

        I suppose you can see the proof of this in the StringBuilder class. Why would you want such a thing unless temporary object creation during string concatenation was slow?! Of course it’s slow – and it’s just as slow for every other form of object manipulation.

        The language itself is quite good – I think it got a little lost when they started adding the kitchen sink to it – but my problems with it are mainly down to the fundamentals of garbage collection as a memory model. For some truly great, detailed info, look at Chris Brumme’s blogs: http://blogs.msdn.com/b/cbrumme/archive/2003/10/01/51524.aspx is my favourite, as it addresses the performance problems of overusing exceptions – something that happens because people were told exceptions are free on .NET (thanks for the hype, MS) – but you might also want to check out his comments about the poor memory model they specced for the CLR.

    • September 2, 2012 at 6:10 pm

      The StringBuilder class is similar in function to the STL’s stringstream class. Even ordinary STL strings in C++ have to recreate their inner array whenever the string must grow. (Although I found stringstream to be pretty slow as well…)

      I agree with most of your points. Using lots of RAM is slow, and C# has flaws (so does C++), but I don’t think GC will remain a bad choice forever as technology advances. There is nothing wrong with trying to reduce the burden on programmers, because programming is difficult and even the most skilled programmers make bad mistakes. If C# doesn’t do it right, something else will.

  3. AndyB
    September 11, 2012 at 9:03 pm

    Yes, but don’t forget that stringstream is an optimised construct that exists because memory operations are slow – and that no such constructs exist for other types (e.g. objects), even though C# tends to create many, many objects. Hence… it’s slow. C++ doesn’t suffer the same problem – partly because many objects are created on the (very fast) stack, and partly because C++ coders don’t tend to create thousands of objects where one will do.

    I still don’t think the GC is the best system for memory management; RAII is still a much better way to manage object lifetimes. There’s nothing complicated about it either – I can’t see why C# couldn’t have had something that traditional as a memory manager. It’s not like the GC has solved circular references (hence WeakReference in C#), reference counting (e.g. the SafeHandle class), or memory leaks (hence all the .NET memory-checking tools). All I see C# has given us is a way of coding that encourages using dozens of classes.

    Encouraging KISS designs would be the most beneficial thing we could do for software today. (I’ve just been given a WCF web service to deal with; it’s almost as simple as a “hello world” web service, yet somehow the author managed to put 50 .cs files in the solution!)

    • September 12, 2012 at 6:41 pm

      I don’t see how stringstream – or anything in the STL – is optimized. The source code looks like a mess in both VS and GCC, and they use different implementations, because only the classes’ observable behavior is standardized. (I thought the C# storage classes would be optimized instead, since their source code isn’t given away and MS controls it all, but my tests proved otherwise.)

      I prefer RAII over GC as well. I’m not arguing with you at all – I just tried to raise some points about why a system like GC might appeal to many.

      Programming in C++, for example, is starting to get out of hand. Look at the C++11 standard: they added a lot of new things while removing nothing from the language. (The latter is said to be for compatibility, and yet when you try to compile something old with newer compilers, you get lots of errors – but that’s entirely unrelated to the current conversation.) The difficulty is becoming choosing the right tool for the job rather than doing the job itself. It is easy to see why people turn away from the language. (Before you point it out: C++ doesn’t force all its features on the programmer, so it can be used more simply, but it does look intimidating when you are not good at it.)

      To be fair, when I picked up C# and saw all its collection classes, I was a bit lost, and handling anything more complicated required 15 classes instead of one or two, as everything was forced into its own class. The KISS design you mentioned was not considered in C# at all. (MS is terrible when it comes to simplicity. Or do they just think in a bizarre way I will never get?) Being used to C++ myself, I don’t see why it is good to have everything in its own unit either. I even think OO is not the best technique; I like a hybrid where OO is not forced onto everything, and C++ is perfect for that.

      BUT I still can’t say that GC must be thrown out the window. Memory usage and speed will go the same way as disk space: when CDs became standard, games grew from a few megabytes to filling the whole CD, and now it’s the same with DVDs. Is it a waste? Yes, but who cares? I think the same will hold for memory size and CPU cycles sooner or later. (Some people say it already does, but they have money.)

