Fully Programmable Modular Bench Power Supply – Part 3

Previously in Part 2 I created a simple regulator circuit using a pass device and an op-amp. I was essentially putting the well-understood theory of this kind of circuit into practice and verifying my own basic understanding. Apart from no-load DC conditions, which were fine and dandy, even a moderate resistive or capacitive load drove the circuit into wild instability. Clearly the breadboard itself, thin connecting wires and long component leads, as well as a lack of any star earthing, were all going to contribute to this, so it was time to refine the circuit a bit and build it on a more sturdy Vero board with some attention to layout, earthing and current handling. Here is the circuit I built:

PSU Schematic Version 0.1

The initial prototype is a throw-away. That means I build it on Vero (strip) board, use low/no-cost components, cheap sockets for the ICs I want to re-use, and as many components as possible from the junk bin. When I am done with my testing I strip out anything worth keeping for the next project and throw the rest in the bin – the point is, don’t expect it to be pretty…

PSU Throw Away 0.2

The significant change was adding voltage gain to the circuit, which is critical if I am to get the desired output range of 0-30 volts. The gain is obtained in Q1, which is in the classic common-source configuration (the equivalent of a common-emitter stage if it were an NPN BJT) – note that this stage inverts. Having this configuration for the driver means I only need a swing of between 1 and 2v to get the full output swing of 0-30v. Needing only a small voltage swing means I can run the op-amp at 14v. The driver and output stage are acting as a voltage amplifier, which is something FETs are good at. The diode D3 ensures that the op-amp runs in class A over the required range, driving into R8 which is its load. By doing this we eliminate any potential crossover distortion the op-amp might exhibit because of its internal centre point.
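To put a number on that, here is a back-of-envelope sketch in Python of the driver-stage gain implied by the figures above (a 1v op-amp swing mapped onto the full 0-30v output); the numbers come straight from the text, the arithmetic is mine.

```python
# Rough check of the driver stage gain implied by the article's
# figures: the op-amp swings roughly 1 V (from ~1 V to ~2 V) and
# that has to produce the full 0-30 V output range.

V_OUT_MAX = 30.0          # desired maximum regulated output (V)
OPAMP_SWING = 2.0 - 1.0   # usable op-amp output swing (V)

required_gain = V_OUT_MAX / OPAMP_SWING
print(f"Driver stage needs a voltage gain of about {required_gain:.0f}x")
# With that gain in the driver, the op-amp itself can run happily
# from a modest 14 V rail instead of swinging the full output range.
```

With the gain concentrated in Q1, the op-amp's supply no longer has to track the output range – which is the whole point of the change.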

The op-amp U1.1 is doing the error correction. The driver FET (Q1) is inverting, which means the negative feedback from the output of the regulator is actually fed into the positive input of the op-amp. This is a different configuration from the first design, where the driver was a current amplifier and non-inverting.

One interesting addition is the Vgb+ supply voltage. You will see that I have included a pre-regulator pass device (Q2), the idea being that when a large amount of power is being dissipated across the main pass device, the microcontroller can detect this and drive Q2, which in conjunction with C5 and C6 will act as a buck-style pre-regulator. Because Q2 is acting purely as a switch we need it to be both very fast and very low in resistance, and one of the things a power FET does very well is have a very low on-resistance. However, in order to get this low on-resistance, you need to drive the FET’s gate pin approximately +5v above the source pin. That means if you have 30v on the drain and you want to see 30 volts on the source, then you need to drive the gate at +35 volts. Vgb+ is obtained by using a simple AC voltage doubler and a series regulator that’s referenced from +5, giving 5v above V+ regardless of input voltage.
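The gate-drive arithmetic above can be sketched in a few lines of Python. The +5v overdrive figure is taken from the text (many standard power FETs actually want nearer +10v for full enhancement, so treat it as the article's assumption):

```python
# Why the Vgb+ rail is needed: a power MOSFET used as a high-side
# switch needs its gate a few volts above its source to be fully
# enhanced. The 5 V overdrive figure is the article's assumption.

GATE_OVERDRIVE = 5.0   # volts the gate must sit above the source

def required_gate_voltage(v_source: float) -> float:
    """Gate voltage needed to keep the FET hard on while passing
    the given source voltage."""
    return v_source + GATE_OVERDRIVE

for v in (5.0, 15.0, 30.0):
    print(f"To pass {v:4.1f} V, drive the gate at "
          f"{required_gate_voltage(v):4.1f} V")
# For a 30 V output the gate needs +35 V - hence Vgb+, generated by
# a voltage doubler and regulated to track 5 V above V+.
```

Because the overdrive is relative to the source, a fixed auxiliary rail only works if it tracks V+, which is exactly what the doubler-plus-regulator arrangement provides.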

So, filled with high expectations, I powered up my new masterpiece for the first time and… err, well, not great. Unstable is an understatement: I had high-frequency instability, low-frequency instability, and wildly different variations of the same with different loads. For the most part a large electrolytic cap across the output brought it mostly under control, and the high-frequency stuff was largely killed off by placing C12 on the driver (Q1). Various loads across the range would yield different results; it was unpredictable and certainly not reliable in a way that would make me happy to power my next project from it!

So what’s going on? Why are these types of circuits so unstable? Well, I am sure there are very detailed scientific explanations, but I don’t have anything like the knowledge to explain those; so in layman’s terms this is the best I can do. Under perfect DC conditions in the ideal circuit (which is how I tend to visualise electronic circuits) there is negative feedback and no phase shift, so you have a perfect DC servo: it locks the output to the input reference, and as load is placed on the output the error amp drives more or less to maintain the output at the input level – easy, right? Unfortunately, all electronic components, wires, board layouts, power sources and environments generate noise and have parasitic inductance and capacitance, all of which can introduce phase shifts at different frequencies. This means that across the frequency spectrum our perfect theoretical circuit with its perfect negative feedback can actually turn into positive feedback, and your stable DC servo becomes an oscillator. Your circuit can be stable at one frequency and totally unstable at another, and this can be occurring at the same time. Worst case, you have numerous different stable and unstable conditions concurrently.
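A tiny illustrative sketch of that phase-shift idea (not the actual PSU loop – the pole locations below are made up): each parasitic low-pass pole contributes up to 90 degrees of lag, so three of them are enough to reach the 180 degrees that turns negative feedback into positive feedback.

```python
import math

def loop_phase_deg(freq_hz, pole_freqs_hz):
    """Total phase lag (degrees) of a chain of single-pole low-pass
    stages; each pole contributes up to 90 degrees of lag."""
    return sum(math.degrees(math.atan(freq_hz / fp))
               for fp in pole_freqs_hz)

poles = [1e3, 50e3, 200e3]  # made-up parasitic pole locations
for f in (100, 10e3, 500e3):
    print(f"{f:>8.0f} Hz: {loop_phase_deg(f, poles):6.1f} deg of lag")
# At low frequency the lag is tiny and the feedback is genuinely
# negative; well above the poles the lag exceeds 180 degrees, and if
# the loop still has gain there, the servo oscillates.
```

This is why the circuit can be rock solid at DC and a radio transmitter at a few hundred kilohertz at the same time.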

As I was measuring this behaviour I observed a lot of fluctuation; things were varying in a random and unclean way, the analogue equivalent of a random number generator in software. To me the circuit felt very loose and reminded me a lot of those old-fashioned black-and-white CRT TVs I used to play with as a kid. The whole thing was powered by a big dropper resistor, and a change in the proportion of white to black in the picture would significantly change the voltage levels in the TV’s electronics. You could see this as instability: the picture would appear to breathe and move around on the screen as things changed. Compare that to later solid-state CRT TVs, where regulated power supplies were used and everything was much more stable visually.

I had managed to quieten the PSU down and drive into loads with good regulation, and I achieved this by slugging the thing with capacitance, which has the effect of lowering the bandwidth of the circuit. That means the circuit has no, or substantially reduced, gain at higher frequencies, so any oscillations at those frequencies obviously go away. The problem with reducing bandwidth, though, is the impact on the dynamic response of the PSU under varying load conditions (more on that in a future article).
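To get a feel for the "slugging" trade-off, here is a rough RC corner-frequency calculation. The resistance and capacitor values are illustrative, not from the schematic:

```python
import math

def corner_freq_hz(r_ohms: float, c_farads: float) -> float:
    """-3 dB corner of a simple RC low-pass: f = 1 / (2*pi*R*C)."""
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

# A big 1000 uF electrolytic working against an assumed ~10 ohm
# source/load impedance rolls the loop off at a few tens of hertz -
# stable, but with correspondingly sluggish transient response.
print(f"{corner_freq_hz(10, 1000e-6):.1f} Hz with 1000 uF")

# A small 10 nF cap on the driver only rolls off above a megahertz,
# which is why it tames only the high-frequency oscillation.
print(f"{corner_freq_hz(10, 10e-9):.0f} Hz with 10 nF")
```

The gap between those two corner frequencies shows why the two caps fixed different symptoms, and why killing all the gain at high frequency is paid for in dynamic response.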

The type of capacitor also plays a big part. For example, I was only able to stabilise the high-frequency issue using a 3n3 to 10n polyester cap; putting the same value in as multilayer ceramic did not work. That was yet another indicator that the circuit is simply too sensitive to instability – it’s loose.

My testing was done mainly under a 5-10 watt load at about 15v. I have attached some photos and some scope traces so you can see the sort of effects I was seeing.

IMG_5135.jpg
IMG_5136.jpg
IMG_5137.jpg
IMG_5143.jpg
IMG_5144.jpg
IMG_5145.jpg
IMG_5121.jpg
IMG_5122.jpg
IMG_5123.jpg
IMG_5124.jpg
IMG_5125.jpg
IMG_5126.jpg
IMG_5127.jpg
IMG_5129.jpg
IMG_5133.jpg

In summary, despite having it quiet and stable under various test conditions, I was left with the impression that the circuit felt wrong. At this point I suspected many things: the FETs are very fast, so they were a concern, and the feedback loop is running at high power levels, so the effects of parasitic capacitance and inductance are very pronounced. My overall takeaway was that I needed to start again, and for a while I thought maybe I should just use an off-the-shelf IC solution – but I felt like I was giving up too easily. I don’t have the skills and experience to design this scientifically, so I have to approach it with a bit of trial and error and a lot of instinctive sauce. That desire to not give up got me thinking, and an idea came to me: what if I created a very low-power regulator which was highly stable over the desired voltage range and then scaled that up with a simple current amp – could I get better results?

In Part 4 I will share with you the great progress I made and a fully working regulator design.

This content is published under the Attribution-Noncommercial-Share Alike 3.0 Unported license.

Fully Programmable Modular Bench Power Supply – Part 2

Now the overall high-level system design and parameters are set (see Part 1), it’s time to get down to some practical design. As I specifically do not want to use one of those “out-of-the-box” all-in-one regulator chips, the first thing we need is a working linear voltage regulator that we can build upon. Using a prototyping breadboard I created the following circuit; the objective was to set up and verify the DC conditions for a basic regulator. Unlike a classic regulator circuit, where there is typically a fixed reference and a variable resistor (pot) in the feedback loop, this regulator calls for something slightly different: our microcontroller and an appropriate DAC will generate an accurate reference voltage between 0 and a couple of volts, the exact value of which will be set by the user. The regulator circuit must track this reference voltage and set its DC output to a multiple of it.
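The tracking idea can be sketched in a few lines of Python. For an ideal op-amp the servo forces the divided-down output to equal the reference, so the output is the reference times a fixed factor set by the feedback divider. The resistor values here are illustrative, chosen to give the gain of 2 this first prototype ends up with:

```python
# Reference-tracking regulator in a nutshell: the feedback divider
# sets a fixed multiplier from V_REF to V_OUT. Resistor values are
# illustrative, not taken from the schematic.

R_TOP = 10_000.0     # divider, regulator output to feedback input
R_BOTTOM = 10_000.0  # divider, feedback input to ground

def v_out(v_ref: float) -> float:
    """Closed-loop output for an ideal op-amp servo: the loop drives
    the divided-down output to equal the reference."""
    return v_ref * (1 + R_TOP / R_BOTTOM)

for ref in (0.0, 1.0, 2.0):
    print(f"V_REF = {ref:.1f} V  ->  V_OUT = {v_out(ref):.1f} V")
```

Sweeping the DAC reference therefore sweeps the output over the whole range, with no pot anywhere in the loop.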

The following schematic shows my initial 30 minute attempt at building such a regulator.

PSU Schematic Version 0.1

I should state at this point that the circuit is basic and is missing lots of things one would expect to find. The purpose of this initial design was not to create a perfect regulator, but to set up a circuit to verify the basic DC conditions and theoretical practicality of the approach.

The circuit is built around a FET pass device (type IRF540), which is the power workhorse: unregulated power goes into the drain, with the load placed on the source. The op-amp is configured as a simple error amplifier; it will move its output up or down (depending on the input state) in order to bring its two inputs as close together as possible. Because the op-amp drives the power device, and the negative input of the op-amp is derived from the output of the power device, we have a closed-loop servo circuit – and this, in essence, is what a typical regulator is made up of.
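A toy discrete-time model of that servo action may help: each step, the "op-amp" nudges its output in proportion to the error between the reference and the fed-back output. The loop gain and step count are arbitrary illustrative values:

```python
# Crude discrete-time model of the error-amplifier servo described
# above. Purely illustrative - real loop dynamics are continuous
# and far messier, which is what the stability discussion is about.

def settle(v_ref: float, gain: float = 0.5, steps: int = 50) -> float:
    """Iterate the servo: the error amp compares the reference with
    the output and drives the pass device to shrink the error.
    Returns the final output voltage."""
    v_out = 0.0
    for _ in range(steps):
        error = v_ref - v_out    # difference at the op-amp inputs
        v_out += gain * error    # op-amp drives the pass device
    return v_out

print(f"Output settles to {settle(5.0):.3f} V for a 5 V reference")
```

In this idealised model the output always converges on the reference; the real-world trouble starts when phase shifts make each correction arrive late enough to reinforce the error instead of cancelling it.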

Because of the closed-loop nature and the tight feedback loop, one of the biggest problems with these types of control circuit is stability. Each component and any introduced capacitive or inductive loading will create phase shifts (all components exhibit parasitic capacitance and inductance). Phase shifts will put the circuit into positive feedback at certain frequencies, so there is a good possibility that such a circuit will become an oscillator. As a consequence, a great deal of attention needs to be paid to this problem, and the design needs to ensure that the circuit works reliably and remains stable within the scope and specs of the requirements. The more voltage and current range required, the more difficult it is to tune the circuit to be stable over that range. Component choice, DC conditions, speed/bandwidth, gain, noise and PCB/track layout all matter a lot here. It’s for this reason that many power supply designs avoid discrete solutions in favour of a one-chip solution. Discrete regulators are hard to make, even harder to make reliable and stable, and, to be honest, most people cannot be bothered when there is a $2 off-the-shelf IC that will do the job most of the time. However, a professional-grade programmable bench power supply needs something more than these one-chip solutions offer.

Considering the breadboard approach to building the above circuit (see photo), under moderate loads the regulator was pretty stable – actually, I was very surprised just how stable it was.

Birds Nest Construction

I thought I was on my way but very quickly realised I was actually on the wrong track. The reason this design is stable is that it does not have much gain – in fact it has a gain of just 2, so put 2 volts into the V_REF input and get 4 volts out. The driver transistor Q1 is in effect a current amplifier in this configuration, so the op-amp has to swing its output across pretty much the whole range of the desired output. For low-voltage requirements, where you can comfortably run your entire circuit from a single supply, this is a very good approach because of the inherent stability of the circuit. However, this design requires a regulated output of 0-30v, which would require the unregulated input voltage after the rectifier and reservoir caps to be about 45 Vdc. Not many op-amps can run at this level, and those that can are only just within range, so we would be right on the edge of the spec for the device. That does not make for a robust design, and I personally don’t like components being pushed to their limits – it feels wrong, a bit like driving your car continuously at the redline: you can do it, but not many people want their car to scream like that all the time.
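The "about 45 Vdc" figure can be reconstructed with some rough headroom arithmetic. The allowances for ripple, pass-device drop and margin below are my assumptions for illustration, not measured values:

```python
# Rough headroom arithmetic behind the "about 45 Vdc" figure.
# The allowance values are illustrative assumptions.

V_OUT_MAX = 30.0   # highest regulated output wanted
RIPPLE = 5.0       # worst-case ripple trough on the reservoir caps
PASS_DROP = 5.0    # minimum drop across the pass device + drive
MARGIN = 5.0       # mains-low / component tolerance margin

v_unregulated = V_OUT_MAX + RIPPLE + PASS_DROP + MARGIN
print(f"Unregulated rail needs to be about {v_unregulated:.0f} Vdc")
# Very few op-amps will run from a 45 V single supply, which is why
# the op-amp-swings-the-full-range approach breaks down at 0-30 V.
```

However the individual allowances are apportioned, the conclusion is the same: the raw rail ends up well beyond common op-amp supply ratings.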

In order to facilitate ease of use and the modular design I have set out to achieve, I want to create a regulator module that runs from a single AC power source – a single winding from the mains transformer. Many bench PSUs (for example the Agilent E3631A I recently repaired) have multiple internal power supplies where the control circuitry is powered separately from the main power source. This makes it easier to design for higher regulated voltages and helps with things like noise immunity too, but because of the design goals I have set, it’s not an option for this design.

The design needs to take into account some system design constraints. With no split supply for the op-amps, we need to use single-supply-capable devices. The op-amps and control circuitry need to run at a much lower voltage than the pass device and regulator output, and the only way that is possible is by introducing voltage gain outside of the op-amp. A discrete gain stage (a common-emitter stage, or its FET equivalent) provides the level translation needed and lots of gain – but that’s where the stability problems start.

In Part 3 I will describe the next evolution of the regulator circuit where I introduce gain and a considerable amount of instability and start to formulate a workable solution.


Fully Programmable Modular Bench Power Supply – Part 1

After fixing a high-quality power supply (see here if you are interested), I was spurred on to have a go at designing and making my own. I think anyone who does electronics as a hobby will at some point build a power supply to use on their bench; I have not done that myself before, so I thought I would give it a go. My aim is to use my basic understanding of analogue electronics to create a fully programmable bench PSU that will perform at least as well as the Agilent PSU. I think this aim is reasonable on the basis that today’s components are considerably better than those available 20 years ago. Apart from the performance characteristics, there are some system engineering characteristics I also want to consider, because I would like to make it possible for anyone else to build this PSU as a DIY project, with the ultimate aim of creating a high-quality PSU that is modular, can be built in various configurations, and can be built at a hobby-user or small-lab price point. Here is a photo showing the very first working prototype regulating at 5.010 volts.

The first working prototype regulator

Why a PSU project, when there are hundreds of them already? Firstly, the two areas of technology I really enjoy are electronics/embedded and software development, and this project requires a fair amount of both to be brought together. Apart from that, no other reason than that I think I can do a decent job – we shall see 🙂

I thought it would be a good idea to set out what I have in mind. My aim is to create a modular PSU system designed to be used in lab or test-automation environments. The first thing I want to create is a module similar in concept to those audio amplifier modules you can buy for building a hi-fi amplifier; the module will physically look something like this.

The PSU module will take a single AC input from your line transformer of choice and will provide a fully programmable, lab-quality Constant Current/Constant Voltage regulated DC output. The module itself will not have any kind of controls or display; instead it will have a fully isolated serial I/O which connects to a controller. The idea is that you can create a multi-channel PSU with isolated outputs while also providing a single earth-referenced controller that can be safely connected to a computer or other test equipment in test automation environments. Once I have created these modules, my intention is to make a few variants of controller: an RS232 interface with PC software control; a simple stand-alone control board with an LCD display and a couple of rotary encoders to control a single module; and a more comprehensive control board that can control up to four modules, with a nicer display (TFT/VFD?) and other interfaces such as RS232, USB and Ethernet, to create a full-function standalone multi-output bench PSU.

Focusing back on the PSU module, I would like it to support a number of configurations with just a few component changes, primarily this is to allow different voltage/current ranges and resolutions to be selected to suit different requirements.

The headline specs for the regulator module are as follows:

  • Up to 50W of power
  • Output range options 0-6vdc 0-5A, 0-10vdc 0-5A, 0-15vdc 0-2.5A, 0-25vdc 0-1A, 0-30vdc 0-1A
  • Constant Voltage and Constant Current capable
  • Remote sense capability
  • Over voltage, over current, reverse power and short circuit protection at all power levels
  • Optional on-board pre-regulator to lower the modules heat dissipation for higher voltage ranges
  • On-board temperature monitoring
  • Fully isolated serial interface for programming, control and monitoring
  • Control resolution down to 1mV and 1mA in the low-voltage ranges
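As a quick feasibility check on the resolution spec, here is a sketch of how many DAC bits each range implies. The DAC itself is unspecified at this stage, so this is just an estimate from the ranges listed above:

```python
import math

def bits_needed(full_scale: float, resolution: float) -> int:
    """Smallest DAC width whose step size is <= the target
    resolution over the given full-scale range."""
    return math.ceil(math.log2(full_scale / resolution))

# Ranges from the spec list; 1 mV / 1 mA target resolution.
print(f"0-6v at 1mV steps:  {bits_needed(6.0, 0.001)} bits")
print(f"0-30v at 1mV steps: {bits_needed(30.0, 0.001)} bits")
print(f"0-5A at 1mA steps:  {bits_needed(5.0, 0.001)} bits")
```

So a 13-bit DAC covers 1mV steps on the low range, while holding 1mV steps over the full 0-30v range would need 15 bits or more – one reason the fine resolution is only promised for the low-voltage ranges.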

There are also some system engineering constraints I want to apply, these are:

  • Low component count
  • Easy to source low cost components
  • Easy to build DIY
  • Physically robust construction
  • One single PCB design for all voltage range configurations

In terms of my design approach – and I must state at this point that I am no analogue electronics expert, I really just have a passing understanding – I nonetheless want to avoid the easy option in the form of the classic single-package regulator ICs that most DIY PSU builders use: the LM317T, LT3080 and the like. These are great components, don’t get me wrong, but what you don’t get with them is a professional-grade PSU – not without putting a significant amount of other electronics around them, by which time you have pretty much lost any advantage gained over using discrete components. Apart from that, one of the main drivers for this project is to learn more about building this kind of thing and to share that learning with others.

In Part 2 I will describe my first attempt at building a discrete linear voltage regulator, with a schematic of the first working circuit along with a description of what I found along the way.


HP/Agilent E3631A Power Supply Teardown & Repair

I recently bought a faulty Agilent E3631A bench power supply on eBay, which I thought would be a nice addition to my *slightly excessive* electronics hobby workbench. These power supplies are really nice; they are engineered and built like military equipment, with good high-quality materials, and are mechanically very robust. A great indication of how good these things are is their second-hand value: they cost $900-$1000 to buy in reasonable condition, so they are not cheap. I bought this particular one faulty and thought I would have a go at repairing it.

My first impression of the electronics inside was not great; it seemed seriously over-engineered for what it was trying to achieve. It seemed like the designers had a field day adding all sorts of crazy circuits because they could. The ADC is made up of discrete ICs, there is a custom logic chip in there as well as a CPU, ROM and RAM, there are numerous power supplies for bias and control circuits all floating around each other, and most things seemed much more complicated than they need to be. The one real surprise, though, was the opto-isolation in the analogue domain. The CV and CC reference signals from the DAC for the +6v supply are isolated through high-linearity opto-couplers (type HCNR200). This is something that would be crazy to do today, when the cost of microcontrollers is so low and they come with all the goodies like DACs, ADCs and PWMs, making isolation in the digital domain a far more sensible design choice.

6vIsolation

In fairness though, I was making my initial judgements based on what’s possible with today’s components; things were very different 20 years ago, so given its age it’s a pretty sophisticated piece of kit really. Once working, it does appear to work very well, so my initial thoughts are not really founded on anything other than my own instinct to want things to be easier to understand, and better as a result.

On with the repair….

First things first: after a quick check of the obvious big components, like the series regulator transistors, I very quickly needed a schematic diagram. Agilent were less than helpful here; the manuals they put out nowadays specifically have the detailed schematics removed, despite there being a reference to them in the index. When I contacted Agilent and asked for a schematic I was told in no uncertain terms (after a four-day response time) that they no longer make the schematics available, but they do offer a £450 exchange repair service – come on HP/Agilent, by all means offer the service, but don’t stop those of us who want to hack around from doing so. The solution was to buy an original printed service manual which did include the schematics; eBay and $10 got me what I needed. As luck would have it, while waiting for the manual to arrive in the post, I also managed to find a manual on the net which still had the schematics present – not from any official Agilent source, I might add…

I set out to fix it and found I had to strip it down completely, removing the two boards, front panel, transformer and wiring from the chassis and spreading it all out on the bench. If you find yourself needing to repair one of these, be prepared to commit serious bench space to the exercise. I have taken a bunch of photos of the teardown so you can see what all the bits look like.

IMG_4987.jpg
IMG_4988.jpg
IMG_4989.jpg
IMG_4990.jpg
IMG_4991.jpg
IMG_4992.jpg
IMG_4993.jpg
IMG_4994.jpg
IMG_4995.jpg
IMG_4996.jpg
IMG_4997.jpg
IMG_4998.jpg
IMG_4999.jpg
IMG_5000.jpg
IMG_5001.jpg
IMG_5002.jpg
IMG_5003.jpg
IMG_5004.jpg
IMG_5005.jpg
IMG_5006.jpg
IMG_5007.jpg
IMG_5008.jpg
IMG_5009.jpg
IMG_5010.jpg
IMG_5011.jpg
IMG_5012.jpg
IMG_5013.jpg
IMG_5014.jpg
IMG_5015.jpg
IMG_5016.jpg
IMG_5017.jpg
IMG_5018.jpg
IMG_5019.jpg
IMG_5020.jpg
IMG_5021.jpg
IMG_5022.jpg
IMG_5023.jpg
IMG_5024.jpg
IMG_5025.jpg
IMG_5026.jpg
IMG_5027.jpg
IMG_5028.jpg
IMG_5029.jpg
IMG_5030.jpg
IMG_5031.jpg
IMG_5033.jpg

There were various faults with the PSU: numerous op-amps and some CMOS logic ICs were faulty, as well as two open-circuit 33k resistors. At a guess I would say there was some kind of big static or high-voltage discharge into or across the outputs that caused the original fault. I had to isolate the various areas of the circuit and work on them individually, making assumptions about what voltage levels should be present and feeding in lots of external signals to get to the bottom of each fault. I struggled with the configuration of some of the analogue circuitry – fortunately for me I have a good friend who understands much more about analogue electronics than I do, so an exchange of e-mails with sections of circuits and measurements kept me on track and expanded my own knowledge too – cheers Span.

Here are all the components I ultimately had to change…
IMG_5035

While fixing it I also managed to introduce some faults of my own. Specifically, I managed to blow some of the HCNR200 opto-couplers – easily done: just a slip of a multimeter probe shorting out pins 2 & 3 puts 15v with no current limit straight into the internal LED, rendering it open-circuit instantly. I managed to blow four of them like this before I figured out what I kept doing – doh!

After working through these problems I finally got it working, except that when I placed a dummy load on the +6 output, the voltage I was measuring went up! A bit more inspiration from my friend Span, a scope on the output and voilà – it was bursting into oscillation under load, probably aggravated by the 4-ohm wire-wound load resistor. It turned out that the electrolytic capacitors soldered onto the back of the binding posts on the front panel are actually there for stability reasons – obvious once you know. I had removed the output wiring from the front panel to make it easier to work on; strapping 1000uF across the output solved the problem.

You can download the Agilent E3631A Service Guide, which includes the schematics.

Having worked on this, I have been inspired to have a go at designing my own programmable PSU from the ground up, to see if I can match the specs using more modern components and a more modern design approach – I will post info on progress if I get around to it.

[UPDATED:] I am getting around to it… http://gerrysweeney.com/fully-programmable-modular-bench-power-supply/


The meaning of Life, the Universe and Value!

I was inspired to write this blog entry by a thread on Facebook. We all know that the internet is *free*, right? Google, G+, Facebook, Twitter and loads of others I could mention cost no money to use. How can that be? How can companies be considered worth upwards of $100bn yet provide their services to their customers entirely for free?

The business model is actually dishonest because it takes advantage of a human condition that we in the civilised world have developed around value – more specifically, our own value! Pretty much every civilisation has a universal mechanism for its people to measure value. It’s called money, and it’s the fundamental device that enables trade and commerce, so it’s important. Money is also the device people use to value themselves as individuals, often comparing themselves to others; we call this wealth – think of the Rich List. If you have lots of money one week you feel “flush”, and you might go on a shopping spree or treat your friends to dinner, and when you’re doing these things you feel good and in some way elevated, important or in a leadership position within your social group. If you have more money than your peers you feel more successful. These feelings are addictive, and that’s why craving money is natural: it leads us to those feelings.

Don’t get me wrong, we all need to measure our own success, and money is certainly a great and universal barometer, but one of the side effects of this is what the term “free” means to us. We are taught from an early age that more money is good – get a good job, get a nice house, and so on. So the natural assumption you derive from this is that “spend less and get more for it” is good – we all love those bargains, right?

So anything you get for “free” must be good – right? That is surely the best bargain of all. The idea of getting something of value without having to pay money for it appeals so much that we are blinded to what is really happening. The bottom line is that anything you get for free you are actually paying for indirectly, without knowing it, and that’s what free services take advantage of. If you work out the *real* cost to you personally, it will almost certainly be much higher than if you had actually paid money for it.

“Don’t talk rubbish”, I hear you say – how can that be?

I can’t really answer that for you; if you really want to know the answer, you must find it for yourself. All you really need to do is work out what is valuable to you individually, in non-monetary terms, and you will have your answer. To help you on your way, let me give you some questions you might ask yourself that could point you in the right direction of discovering your own value.

  • How much do you earn an hour on average over your working life?
  • How long is your life?
  • If you worked a normal life, spent nothing and saved all of your money how much would you end up with?  What would you do with it when you have it? And when would you do those things?
  • If you had a child, how much time (in hours) would you spend splashing around in a pool on holiday with him or her between the ages of 1 and 14?
  • How often have you found yourself on a mission that has involved you spending a vast amount of your own time justifying and claiming for that $10 meal expense or that parking ticket that you did not deserve?
  • How often have you put off the opportunity of going out or traveling to somewhere new?
  • How often have you sat there for tens of minutes waiting for your Microsoft Windows computer to update and reboot itself so you can get onto the internet, and just accepted that as the norm?
  • How long did you stand in that queue in the January Sales to buy that top you wanted for 30% off the tag price?
  • How much time do you spend in Facebook rejecting those pesky apps that want you to give away your personal information?

I could go on, but that should be enough. Work out what it is that you personally value, and then work out why things that are “free of charge” are not actually “free” to you. Once you have worked that out, you may well find that getting things “free of charge” is not as valuable to you as your instincts would lead you to believe. But you already knew that, right?


Is .NET the holy grail of software development in the modern age!! (Part 2)

I must first open this post by apologising for the long delay in my follow-up to Part 1. It was pointed out to me by my social-networking kung fu practitioner (you know who you are, Chris 😉 that leaving my blog for so long was signifying its death. I don’t want to kill my blog just yet, so I will do what I can to resurrect it; those folk that know me probably know that I have a lot to say!

Back to the follow-up then – is .NET the right technology stack to build a professional, packaged, off-the-shelf software application with?

As with most things in software, the answer is not clear-cut. There is no doubt that it’s possible to build a high-quality Windows desktop application using .NET, especially with the later versions of the .NET libraries – WPF being a great example. So if one were to build a desktop application targeted at Windows, then IMHO there is not much better to choose from. It is worth remembering with .NET in this context that the .NET runtime versions, components and the myriad of service packs and OS hot-fixes you are bombarded with must be part of your ongoing application support strategy. Unlike compiled languages such as C++, where you can almost dictate and lock down specific library versions which remain by and large static, with .NET updates to runtime components can change things in the execution of your applications after your release, which can cause new problems or changes in behaviour that did not exist when you ran your product through the QA cycle as part of releasing it.

For me, though, the much bigger problem arises when client/server applications are developed.  Take a typical business application: some kind of presentation application (desktop or browser), business logic which is typically server-side and accessible to the presentation layer via some sort of web services API, and a back-end SQL database sitting behind the business logic.  With .NET and Microsoft’s development environment you can build all three tiers in a simple, point-and-click, highly integrated way inside a single IDE. It’s easy – really easy – to create a UI, create some web services, connect to a database, and create code objects to serialise/deserialise data to and from the database, all in a way that follows some idealistic design patterns, and so to get an initial application up quickly.  What’s wrong with this, you may ask – isn’t that what every product sponsor wants: fast development, good standard design patterns and a product on time?  Hell yes, I would take all of those, but… I can’t tell you how many projects I have seen follow this route only to find that the initial product put out in the field is slow, unreliable, full of bugs and basically unsupportable – often to the total surprise of the team that created the product in the first place.

So what goes wrong?  Well, it’s first worth mentioning that there is absolutely nothing wrong with the .NET technology itself; the problem is with the expectation set by the “.NET dream” and the way in which .NET is used.  Shame on Microsoft for setting out such a vivid dream of perfection, but shame on the developers and architects for believing it!   Just because you can create a web service with two clicks does not mean that is the “right” way to create your particular web service; it is just the “easy” way, and that’s the problem – two clicks and the project has a new web service, so progress looks fast.   Many application developers use the easy tools to create their skeleton but then spend ten times more developer hours trying to make the code created by the easy button fit their requirement.  The result is bloated code that is hard to understand and often ends up being a compromise on the original design intent; how many times have I seen software designs change just to fit a specific IDE, toolkit or individual developer’s idealism.  The other problem with such a highly integrated development environment is the illusion that code will be better because the framework takes care of evils like pointers, memory management and so on.  Microsoft have done a fantastic job of branding the “managed code” idea, but this is an ideal aimed at managers, who can now hire developers that are often less skilled than their lower-level counterparts and believe the code created will be less buggy because the managed environment takes care of it for them.  When I hear “managed code” I think RCBPP, or “Runtime Compensation for Bad Programming Practices”, which, by the way, is perfectly fine for applications that are not expected to be fast and scalable in a green-friendly way.
Bad programming practice should not be compensated for at runtime by burning CPU cycles every time the code runs; it should be ironed out at design time so the code can do many times more work for the same CPU cycles.
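To make the point concrete, here is a toy sketch of my own (in Python rather than .NET – the example and names are invented, not from any framework): the same membership test done two ways. No runtime can compensate for the wrong data-structure choice; that is a design-time decision.

```python
# The same membership test, two design-time choices.
# One burns CPU on every call; the other does not.
haystack_list = list(range(100_000))
haystack_set = set(haystack_list)

def found_slow(x):
    return x in haystack_list   # linear scan: O(n) work per lookup

def found_fast(x):
    return x in haystack_set    # hash lookup: O(1) work per lookup

# Identical results; wildly different CPU cost at scale.
assert found_slow(99_999) == found_fast(99_999) == True
assert found_slow(-1) == found_fast(-1) == False
```

The managed runtime happily executes either version; only the designer can choose the one that does not waste cycles.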

.NET developers, please take note – just because .NET has a garbage collection engine within its “managed code” environment, this is not an instruction for you to absolve yourself of memory management and other responsible coding practices in your design. .NET code is just as susceptible to performance issues, memory fragmentation, race conditions and other serious runtime conditions that make applications perform badly or unreliably.
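As a small illustration (a sketch of my own, in Python rather than C#, though the idea applies to any garbage-collected runtime, and the `Session`/`_registry` names are invented): the collector can only free what your design actually lets go of. A forgotten reference in a long-lived cache is a memory leak, garbage collector or not.

```python
import gc
import weakref

class Session:
    """Stand-in for some per-request state."""
    pass

_registry = []  # a long-lived cache someone forgot to clear

def handle_request():
    s = Session()
    _registry.append(s)       # the "leak": a live reference the GC must honour
    return weakref.ref(s)     # weak reference lets us observe collection

ref = handle_request()
gc.collect()
assert ref() is not None      # still alive: the collector cannot free it
_registry.clear()             # only when the design releases the reference...
gc.collect()
assert ref() is None          # ...does the memory actually go away
```

The runtime did its job perfectly both times; the leak and its fix were entirely in the programmer’s hands.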

Things get even worse at the database design stage, where the data model is defined. All too often there is an overabundance of triggers and stored procedures that go way beyond the data model and impose much of the business logic of the application.  A well-designed three-tier architecture should be loosely coupled at each layer; the integration point should be at the web services API, not the database, so putting application logic in the database is madness, and actually demonstrates a poor overall application design with too much architectural input being taken from the DBA/Data Architect.  Nonetheless it happens, and much more than it ever should.  The database vendors love it, because most features that one would use triggers and SPs for are not standard; they are almost unique in behaviour to each vendor’s database system, thus often tying the application to that vendor’s database.  Developers please note – it is only the database vendors that strongly advocate putting application logic into the data model; for the rest of us, the data-level logic should be there to perform data-related tasks only.  If you need to call a stored procedure to perform some application logic, or the integrity of your application logic depends on the database performing some functions through database logic, then review your design – because it’s probably wrong.
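To sketch what I mean (my own toy example, using Python and SQLite purely for illustration – the schema and the discount rule are invented): the business rule lives in the service layer, and the database layer is reduced to plain, portable CRUD. Swapping database vendors would change only the connection line, not the rule.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")

def place_order(total):
    # Business logic lives here, not in a trigger or stored procedure:
    # orders over 100 get a 10% discount.
    if total > 100:
        total *= 0.9
    cur = conn.execute("INSERT INTO orders (total) VALUES (?)", (total,))
    return cur.lastrowid

# The database only ever sees standard INSERT/SELECT statements.
oid = place_order(200)
row = conn.execute("SELECT total FROM orders WHERE id = ?", (oid,)).fetchone()
assert abs(row[0] - 180.0) < 1e-9
```

Had the discount been a trigger, it would have been written in one vendor’s procedural SQL dialect and tested, versioned and deployed outside the application’s own codebase.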

Another problem I have with .NET is the seemingly de facto belief amongst some of the .NET developers I have come across that a “Web Service” is something you create in the Visual Studio IDE and present as SOAP (Java folk, you are guilty of this too, I am afraid).  This common belief is the result of how easy it has been made to achieve this particular task within the development environment; it’s easy – really easy – so it must be the right way, right? NO – there are so many things wrong with this idealism.

Number one: If you need a web service that will be hit a few times an hour, then perhaps this approach is fine. But if you are trying to build a web service that is efficient and consumes the least system resources possible, so you can get a high and sustained transactional throughput, then managed code is not the way to go – not unless you are hell-bent on keeping the Dell or Sun server ecosystem fed with new customers, and air conditioners all over the world burning resources and cash.

Number two:  SOAP is the standard that has become synonymous with web services for many .NET developers, yet the companies with demonstrable experience delivering high-quality web services APIs on a global scale choose almost anything but SOAP. Why? Well, SOAP is simply not the one-size-fits-all answer – sorry, SOAP lovers… SOAP has some good ideas but is generally badly conceived. It is complicated, with over 40 pages of specification, some of which is almost nonsense; it is overweight; and it is often used in contexts where the need is for a very basic RPC – send a request to a server, get a response – where the overhead of SOAP can be significant.  There are many alternatives to SOAP in common use today that process more efficiently, are more browser friendly, and are simpler to understand, implement and document across numerous server, desktop and mobile platforms. The problem with these alternatives is that there is no “make me one” button in the IDE, so SOAP seems to win out amongst the SMB/point-solution application developers.
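As a rough illustration of the weight argument (my own sketch – the “Add” service, its namespace and the payloads are invented), compare a minimal SOAP envelope for a two-integer RPC with an equivalent JSON payload carrying the same information:

```python
import json

# Minimal SOAP 1.1-style envelope for a hypothetical "Add" RPC.
soap = (
    '<?xml version="1.0"?>'
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    '<soap:Body><Add xmlns="http://example.com/calc">'
    '<a>2</a><b>3</b></Add></soap:Body></soap:Envelope>'
)

# The same request as a plain JSON body (as a REST-style API might take it).
rest = json.dumps({"a": 2, "b": 3})

# The envelope dwarfs the two integers it exists to carry.
assert len(soap) > 5 * len(rest)
```

For two integers, the envelope, namespaces and body wrapper are pure overhead on every single request – overhead a server must parse, validate and discard millions of times a day.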

Number three: IMHO any use of the .NET stack should be more prevalent at the front end, and less prevalent – ideally non-existent – at the back end. I personally like the idea that, on a server, every possible CPU clock cycle works towards servicing the user’s request: fewer software layers, less runtime compensation for bad programming practices, and better-quality, well-designed and well-implemented code executing as close to the machine level as possible.   This, I know, can be thought of as a rather old-fashioned perspective, but in a perverse and somewhat gratifying way, as green computing becomes ever more important to us, this will be the only way to write code that will run in our future data centres – I am pretty sure that programmers who don’t have the capability to think, design and develop in this way will find themselves quickly outmoded as fewer layers, better software design and more efficient use of computing resources become a necessity.  Developers happy to live 20 layers above the CPU will find themselves as outdated as they think I am today for caring about how many CPU cycles it takes to service a request! By the same token, I don’t believe that a “managed code” runtime and do-it-quick buttons in an IDE turn great desktop application developers into application server gurus.

Well, my rant is done; I am not sure I answered the question in a succinct way, but in summary: while I think there is a place for .NET for as long as Windows exists on the desktop, I don’t think it’s for the big servers of our future.  As long as .NET continues to be positioned as a viable server technology, with an IDE and tooling that can make almost anyone a .NET developer, we can only expect *NIX operating systems to continue to serve the ever-increasing numbers of web services and applications we use, relegating .NET – and the often poorly implemented .NET business applications – to point-solution and SMB niche products.

This content is published under the Attribution-Noncommercial-Share Alike 3.0 Unported license.