Is .NET the holy grail of software development in the modern age!! (Part 2)

I must first open this post by apologising for the long delay in my follow-up to part one. It was pointed out to me by my social networking kung fu practitioner (you know who you are, Chris 😉) that leaving my blog untouched for so long was signalling its death.  I don’t want to kill my blog just yet, so I will do what I can to resurrect it – those folk that know me will know that I have a lot to say!

Back to the follow-up then – is .NET the right technology stack to build a professional, packaged, off-the-shelf software application with?

As with most things in software, the answer is not clear cut.  There is no doubt that it’s possible to build a high-quality Windows desktop application using .NET, especially with the later versions of the .NET libraries – WPF being a great example.  So if one were to build a desktop application targeted at Windows, then IMHO there is not much to choose from that is better.  It is worth remembering in this context, though, that the .NET runtime versions, components and the myriad of service packs and OS hotfixes you are bombarded with must be part of your ongoing application support strategy.  Unlike natively compiled languages such as C++, where you can almost dictate and lock down specific library versions that remain by and large static, with .NET an update to a runtime component can change the execution of your application after release, causing new problems or changes in behaviour that did not exist when you ran your product through the QA cycle as part of releasing it.

For me though, the much bigger problem comes when client/server applications are developed.  Take a typical business application: some kind of presentation layer (desktop or browser), business logic which is typically server-side and accessible to the presentation layer via some sort of web services API, and a back-end SQL database sitting behind the business logic.  With .NET and Microsoft’s development environment you can build all three tiers in a simple, easy, point-and-click, highly integrated way inside a single IDE.  It’s easy, really easy, to create a UI, create some web services, connect to a database and create code objects to serialise/deserialise data to and from the database, all in a way that follows some idealistic design patterns, and to get an initial application out quickly.  What’s wrong with this, you may ask – isn’t that what every product sponsor wants: fast development, good standard design patterns and a product on time? Hell yes please, I would have them all day, but… I can’t tell you how many projects I have seen follow this route only to find that the initial product put out in the field is slow, unreliable, full of bugs and basically unsupportable – often to the total surprise of the team that created the product in the first place.

So what goes wrong?  Well, it’s first worth mentioning that there is absolutely nothing wrong with the .NET technology itself; the problem is with the expectation set by the “.NET dream” and the way in which .NET is used.  Shame on Microsoft for setting such a vivid dream of perfection, but shame on the developers and architects for believing it!   Just because you can create a web service with two clicks does not mean that is the “right” way to create your particular web service; it is just the “easy” way, and that’s the problem – two clicks and the project has a new web service, so progress looks fast.   Many application developers use the easy tools to create their skeleton but then spend ten times more developer hours trying to make the code created by the easy button fit their requirement.  The result is bloated code that is hard to understand and often ends up being a compromise on the original design intent; how many times have I seen software designs change just to fit the specific IDE, toolkit or individual developer’s idealism.  The other problem with such a highly integrated development environment is the illusion that code will be better because the framework takes care of evils like pointers, memory management and so on.  Microsoft have done a fantastic job of branding the “managed code” idea, but this is an ideal aimed at managers, who can now hire developers that are often less skilled than their lower-level counterparts, on the promise that the code will be less buggy because the managed environment takes care of it for them.  When I hear “managed code” I think RCBPP, or “Runtime Compensation for Bad Programming Practices” – which, by the way, is perfectly fine for applications that are not expected to be fast and scalable in a green-friendly way.  Bad programming practice should not be compensated for at runtime by burning CPU cycles every time the code runs; it should be ironed out at design time so the code can run many times faster for the same CPU cycles.
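
To make that last point concrete, here is a minimal sketch (a hypothetical log-formatting routine, not from any particular project) of the kind of design-time fix I mean – both versions work, but one quietly hands the runtime and the GC a pile of extra work for the same result:

    using System;
    using System.Text;

    static class LogFormatter
    {
        // Careless version: each "+=" allocates a brand new string and copies
        // everything built so far, so the runtime absorbs roughly O(n^2) work
        // and the GC cleans up the temporaries on the caller's behalf.
        public static string JoinSlow(string[] lines)
        {
            string result = "";
            foreach (string line in lines)
                result += line + Environment.NewLine;
            return result;
        }

        // Designed version: one growing buffer, one final string.
        public static string Join(string[] lines)
        {
            var builder = new StringBuilder();
            foreach (string line in lines)
                builder.AppendLine(line);
            return builder.ToString();
        }
    }

Nothing in the managed runtime stops you writing the first version – it compiles, it passes QA on small inputs, and it only shows its cost at runtime when the data grows.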

.NET developers, please take note – just because .NET has a garbage collection engine within its “managed code” environment does not mean you can absolve yourself of memory management and other responsible coding practices in your design. .NET code is just as susceptible to performance issues, memory fragmentation, race conditions and other serious runtime conditions that make applications perform badly or unreliably.
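
A small, hedged illustration of what I mean (a made-up ReportJob example, nothing more): the garbage collector will eventually reclaim memory, but it does nothing timely about file handles, sockets or database connections, so deterministic clean-up is still your job.

    using System;
    using System.IO;

    class ReportJob
    {
        // Leaky version: the underlying file handle stays open (and the data may
        // not even be flushed) until the GC finaliser eventually runs - under
        // load this exhausts handles long before a collection happens.
        static void AppendLineLeaky(string path, string line)
        {
            var writer = new StreamWriter(new FileStream(path, FileMode.Append));
            writer.WriteLine(line);
            // no Dispose/Close - clean-up happens at the GC's convenience
        }

        // Deterministic version: "using" releases the handle the moment we are done.
        static void AppendLine(string path, string line)
        {
            using (var writer = new StreamWriter(path, true))
            {
                writer.WriteLine(line);
            }
        }

        static void Main()
        {
            AppendLine("audit.log", "job started at " + DateTime.UtcNow);
        }
    }

The managed environment will not warn you about the first version; it will simply pay for it later, at runtime.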

Things get even worse at the database design stage, where the data model is defined. At this stage there is all too often an overabundance of triggers and stored procedures that go way beyond the data model and impose much of the business logic for the application.  A well-designed three-tier architecture should be loosely coupled at each layer; the integration point should be the web services API, not the database, so putting application logic in the database is madness, and actually demonstrates a poor overall application design with too much architectural input being taken from the DBA/Data Architect.  Nonetheless it happens, and much more than it ever should.  The database vendors love it, because most features that one would use triggers and SPs for are not standard; they are almost unique in behaviour to each vendor’s database system, thus often tying the application to that vendor’s database.  Developers please note – it’s only the database vendors that strongly advocate putting application logic into the data model; for the rest of us the data-level logic should be there to perform data-related tasks only.  If you need to call a stored procedure to perform some application logic, or the integrity of your application depends on the database performing some functions through database logic, then review your design – because it’s probably wrong.
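
As a minimal sketch of the alternative (assuming a hypothetical Orders table and a made-up credit-limit rule), the business rule lives in testable service-layer code and the database is asked to do nothing but store data:

    using System;
    using System.Data.SqlClient;

    class OrderService
    {
        const string ConnectionString = "Server=.;Database=Shop;Integrated Security=true";

        public void PlaceOrder(int customerId, decimal orderTotal, decimal creditLimit)
        {
            // The business rule is enforced here, in application code,
            // not in an AFTER INSERT trigger buried in the database.
            if (orderTotal > creditLimit)
                throw new InvalidOperationException("Order exceeds the customer's credit limit.");

            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand(
                "INSERT INTO Orders (CustomerId, Total) VALUES (@customerId, @total)", connection))
            {
                command.Parameters.AddWithValue("@customerId", customerId);
                command.Parameters.AddWithValue("@total", orderTotal);
                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }

Swap the database underneath and only the connection string and the SQL dialect need attention – the rule itself does not move.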

Another problem I have with .NET is the seemingly de facto belief amongst some of the .NET developers I have come across that a “Web Service” is something you create in the Visual Studio IDE and present as SOAP (Java folk, you are guilty of this too, I am afraid).  This common belief is the result of how easy it has been made to achieve this particular task within the development environment – it’s easy, really easy, so it must be the right way, right? NO – there are so many things wrong with this idealism.

Number one: if you need a web service that is going to be hit a few times an hour, then perhaps this approach is fine. But if you are trying to build a web service that will be efficient and consume the least amount of system resources so you can get high, sustained transactional throughput, then managed code is not the way to go – not unless you are hell-bent on keeping the Dell or Sun server ecosystem fed with new customers, and air conditioners all over the world burning resources and cash.

Number two: SOAP is the standard that has become synonymous with web services for many .NET developers, yet most companies that have demonstrable experience delivering high-quality web services APIs to the web on a global scale choose to use almost anything but SOAP. Why? Well, SOAP is simply not the one-size-fits-all answer – sorry, SOAP lovers… SOAP has some good ideas but is generally badly conceived. It’s complicated, with over 40 pages of specification, some of which is almost nonsense; it’s overweight; and it is often used in contexts where the need is a very basic RPC – send a request to a server and get a response – where the overhead of SOAP can be significant.  There are many alternatives to SOAP in common use today that process more efficiently, are more browser friendly, and are simpler to understand, implement and document across numerous server, desktop and mobile platforms. The problem with these alternatives is that there is no “make me one” button in the IDE, so SOAP seems to win out amongst the SMB/point-solution application developers.
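
To illustrate the sort of thing I mean, here is a minimal sketch of a plain HTTP endpoint returning JSON (a hypothetical “what time is it” service – no envelope, no WSDL, just the payload):

    using System;
    using System.Net;
    using System.Text;

    class JsonTimeService
    {
        static void Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:8080/time/");
            listener.Start();
            Console.WriteLine("Listening on http://localhost:8080/time/ ...");

            while (true)
            {
                HttpListenerContext context = listener.GetContext();

                // Hand-rolled JSON body - the entire response is the data itself.
                string json = "{\"utc\":\"" + DateTime.UtcNow.ToString("o") + "\"}";
                byte[] body = Encoding.UTF8.GetBytes(json);

                context.Response.ContentType = "application/json";
                context.Response.ContentLength64 = body.Length;
                context.Response.OutputStream.Write(body, 0, body.Length);
                context.Response.OutputStream.Close();
            }
        }
    }

Any browser, script or mobile client can consume that with a single GET; compare it with the envelope, namespaces and generated proxy classes a two-click SOAP service drags along for the same round trip.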

Number three: IMHO any use of the .NET stack should be more prevalent at the front end and less prevalent – ideally non-existent – at the back end. I personally like the idea that, on a server, every CPU clock cycle possible works towards servicing the user’s request; that means fewer software layers, less runtime compensation for bad programming practices, and better quality, well-designed and well-implemented code executing as close to the machine level as possible.   This, I know, can be thought of as a rather old-fashioned perspective, but in a perverse and somewhat gratifying way, as green computing becomes ever more important to us, this will be the only way to write code that will run in our future data centres. I am pretty sure that programmers who don’t have the capability to think, design and develop in this way will find themselves quickly outmoded as fewer layers, better software design and more efficient use of computing resources become a necessity.  Developers happy to live 20 layers above the CPU will find themselves as outdated as they think I am today for caring about how many CPU cycles it takes to service a request! By the same token, I don’t believe that a “managed code” runtime and do-it-quick buttons in an IDE turn great desktop application developers into application server gurus.

Well, my rant is done; I am not sure I answered the question in a succinct way, but in summary, while I think there is a place for .NET for as long as Windows exists on the desktop, I don’t think it’s for the big servers of our future.  As long as .NET continues to be positioned as a viable server technology, with an IDE and tooling that can make almost anyone a .NET developer, we can only expect *NIX operating systems to continue to serve the ever-increasing number of web services and applications we use, relegating .NET and the often poorly implemented .NET business applications to point-solution and SMB niche products and solutions.

Is .NET the holy grail of software development in the modern age!! (Part 1)

I have spent the last 20 years of my working life involved with software development in some form or another.  My first programming experiences were on home computers like the Sinclair ZX81, Spectrum and Commodore 64, where 6502 or Z80 assembler written with just a word processor, a cassette recorder and – if you were lucky – a dot matrix printer was the only tooling to hand.  The first program I ever wrote on the PC was written in x86 assembler using the DOS ‘debug’ tool and was a simple terminal emulator for connecting to a VAX over DECnet!  I stepped up to the ‘C’ language soon after that and eventually to C++.  I have also developed a bunch of AVR, ARM and PIC microcontroller code for numerous projects too. Nowadays pretty much any language is usable to some degree or another, and my software thinking in terms of achieving a specific task has become somewhat language agnostic.  As you can tell from my past though, I am someone with a slightly unhealthy liking for low-level programming, and I have clearly had way too much spare time, late nights, coffee and pizza!

Anyway, back to the topic in hand. What does all that have to do with .NET you are probably thinking?  Good question – but before I answer that, I must clarify what I mean when I say .NET for the purpose of this blog post.  I am specifically talking about the lowest level of the .NET stack which is the CLR and the .NET framework base libraries.

My affinity with the inner workings of computers and my low-level programming experience has become a bit of a curse for me in the modern programming age.  When working with C and C++ this low-level understanding can be a real advantage, but when it comes to the higher-level languages like those provided in .NET one must to a large degree ignore how the machine works and think just about the language and the idealism that is presented within its design.

I should just state that much of what I am about to say applies equally well to Java, PHP, JavaScript, VB and other interpreted language environments and frameworks as it does to .NET. And of course, it goes without saying that this is all my own humble opinion, so take it as you will. Disclaimer ends 😉

.NET is an interpreted language. I say this because .NET code is run under the control of an execution engine which is best described as a byte-code interpreter – despite what the marketing hype may indicate with things like JIT, native code generation and so on, the very notion of “managed code” means this must be true. You are probably thinking: what is wrong with that? Well, nothing in principle – what is wrong is what people choose to use this type of software stack for.
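
To show what I mean by byte-code, here is a trivial C# method and, in comments, roughly the IL it compiles to (a sketch of typical release-build output, not copied from any particular compiler):

    static class Calculator
    {
        // Compiles to IL along the lines of:
        //   ldarg.0   // push 'a'
        //   ldarg.1   // push 'b'
        //   add       // add the two values on the evaluation stack
        //   ret       // return the result
        // It is this stack-based byte-code, not x86 machine code, that ships in
        // the assembly; the CLR turns it into native instructions at runtime.
        public static int Add(int a, int b)
        {
            return a + b;
        }
    }

Whether you call the step that happens at runtime “interpretation” or “JIT compilation”, the point stands: there is an execution engine sitting between your code and the CPU.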

There is no escaping that .NET is popular and has a great following – and by the way, in my opinion it’s pretty good for some things too. I am not anti-.NET, and I must say the development tools that Microsoft have created in Visual Studio are exceptional and by far my favourite modern-day development environment (Apple, please take note :).  The problem is, though, that .NET is not ideal for everything – yet despite this, some people tend to hail .NET as the next coming.  I can’t tell you how many organisations I have come across over the years who have set a “corporate policy” that states all applications must be based on .NET.  How can an organisation decide on such a policy? It’s crazy, because it’s unenforceable if you happen to use Microsoft’s own products – many of which to this day are not developed in .NET (more on this later).  I suppose that is the power of the Microsoft marketing machine: best not sell this to developers, said Microsoft, let’s sell it to the executives instead.

Let’s take a look at some of the key marketing messages put out at the time in order to answer the “Why .NET” question:

  • Managed code means fewer bugs, reducing maintenance time by up to XX%
  • A mixed-language environment allows all of your existing skills, even across multiple languages, to be applied to the same project, reducing time to market
  • Platform independence (really, that’s how .NET and the (then-called) IL were touted almost 10 years ago – using .NET will ensure you have portable code!! Yeah right – where is that!)
  • Garbage collection, exception handling and other language features make it easier to write code that works reliably.
  • The standard framework and foundation libraries mean your VB developers can transition to C# much faster.

The message here was not really aimed at programmers but at the management and executives of companies and development teams. Use .NET and you can standardise and save time, which will save lots of money and improve your competitive edge by reducing your time to market. .NET means your teams can write code in half the time and the software will work.

Clearly, this is the way to go, right? Perhaps we should look at where .NET came from. It was not a brand new innovative idea that came to someone in a vision – it was simply a natural evolution of the old VB6 environment, which was plagued with all sorts of problems that needed to be resolved in order to move forward.  Microsoft’s first stab at a component model was COM, which was an evolution of an even more ridiculous specification called OLE.  VB6 was used primarily within the corporate application development world, and things were messy; for the most part COM was the problem, but it was also the keystone technology that held VB6 together.  Windows today is still plagued with the aftermath of the COM nightmare, which seemingly will never go away, despite Microsoft’s wishes to the contrary I am sure.  VB6 supported components through COM, but components were unreliable, difficult to work with, and component vendors would write bad code that would bring the whole application down. Although the third-party components were for the most part the problem, Microsoft did not help: COM was so badly implemented in the first place that anyone writing a COM component would invariably get things wrong, and the sheer complexity of the COM specification sealed its fate.

.NET, with its managed code environment, assembly management and standard component library, was clearly the headline wish list for VB7.  So really, .NET is an evolution of VB – a significant one, granted, but an evolution nonetheless – and this was more obvious in the early versions of the .NET framework back in 2002.  When VB6 was at its height, I remember its place was within the enterprise, where corporate organisations were developing their own internal applications; there were very few off-the-shelf products around that were developed using Visual Basic 6.

Reviewing the marketing messages above, it’s pretty clear that Microsoft’s intention, initially at least, was to migrate their VB6 user community to .NET – their aim was the corporate in-house development teams, and the ISV community, while also targeted, only really adopted .NET later on as it started to improve.   The corporate in-house development market is a real sweet spot for .NET IMHO, and it’s probably the best environment for in-house corporate desktop and web application development teams, with the only real competition being Java.

Perhaps controversially though, I don’t believe that .NET is the right software stack for ISVs to choose when developing the core of a professional, off-the-shelf and supportable software product – choosing .NET is far more advantageous to the ISV than it is to the customers that end up using the product!

Why? Well, I have already exceeded my word count on this one so if I have not managed to send you to sleep already, I will explain what I mean in Part 2…