Fully Programmable Modular Bench Power Supply – Part 1

Fixing a high quality power supply (see here if you are interested) spurred me on to have a go at designing and making my own. I think anyone who does electronics as a hobby will at some point build a power supply to use on their bench – I have never done that myself, so I thought I would give it a go. My aim is to use my basic understanding of analogue electronics to create a fully programmable bench PSU that performs at least as well as the Agilent PSU. I think this aim is reasonable on the basis that today's components are considerably better than those available 20 years ago. Apart from the performance characteristics, there are also some system engineering characteristics I want to consider, because I would like to make it possible for anyone else to build this PSU as a DIY project – the ultimate aim being a high quality PSU that is modular, can be built in various configurations, and comes in at a hobby user or small lab price point. Here is a photo showing the very first working prototype regulating at 5.010 volts.

The first working prototype regulator

Why a PSU project when there are hundreds of them already? Firstly, the two areas of technology I really enjoy are electronics/embedded and software development, and this project brings a fair amount of both together. Apart from that, no other reason than that I think I can do a decent job – we shall see 🙂

I thought it would be a good idea to try and set out what I have in mind. My aim is to create a modular PSU system designed to be used in lab or test automation environments. The first thing I want to create is a module similar in concept to those audio amplifier modules you can buy for building a hi-fi amplifier; the module will physically look something like this.

The PSU module will take a single AC input from your line transformer of choice and will provide a fully programmable, lab quality Constant Current/Constant Voltage regulated DC output. The module itself will not have any kind of controls or display; instead it will have a fully isolated serial I/O which connects to a controller. The idea is that you can create a multi-channel PSU with isolated outputs while also providing a single earth-referenced controller that can be safely connected to a computer or other test equipment in test automation environments. Once I have created these modules, my intention is to make a few variants of controller: an RS232 interface and a PC software controller, a simple stand-alone control board with an LCD display and a couple of rotary encoders to control a single module, and a more comprehensive control board that can control up to four modules with a nicer display (TFT/VFD?) and other interfaces such as RS232, USB and Ethernet, to create a full function standalone multi-output bench PSU.
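To give a flavour of the kind of control I have in mind, here is a sketch of how a PC might drive one module over the isolated serial interface. The command set ("VSET", "ISET", "MEAS?") is entirely hypothetical – I have not designed the protocol yet – and the port name, baud rate and reply format are assumptions; it needs the third-party pyserial package:

```python
import serial  # third-party: pip install pyserial

def set_output(psu: serial.Serial, volts: float, amps: float) -> str:
    """Program the CV and CC setpoints, then query a measurement back."""
    psu.write(f"VSET {volts:.3f}\n".encode())  # hypothetical command
    psu.write(f"ISET {amps:.3f}\n".encode())   # hypothetical command
    psu.write(b"MEAS?\n")                      # hypothetical query
    return psu.readline().decode().strip()

# Assumed port name and baud rate, for illustration only.
with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as psu:
    print(set_output(psu, 5.010, 0.500))
```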

Focusing back on the PSU module, I would like it to support a number of configurations with just a few component changes; primarily this is to allow different voltage/current ranges and resolutions to be selected to suit different requirements.

The headline specs for the regulator module are as follows: –

  • Up to 50 W of power
  • Output range options: 0-6 VDC / 0-5 A, 0-10 VDC / 0-5 A, 0-15 VDC / 0-2.5 A, 0-25 VDC / 0-1 A, 0-30 VDC / 0-1 A
  • Constant Voltage and Constant Current capable
  • Remote sense capability
  • Over voltage, over current, reverse power and short circuit protection at all power levels
  • Optional on-board pre-regulator to lower the module's heat dissipation for higher voltage ranges
  • On-board temperature monitoring
  • Fully isolated serial interface for programming, control and monitoring
  • Control resolution down to 1 mV and 1 mA in the low voltage range (see the quick sanity check below)
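As a quick sanity check on that last bullet, here is what one step of the setpoint works out to over the lowest range, assuming the reference comes from a DAC (my assumption – the control scheme is not a settled design decision yet):

```python
# Size of one DAC step (LSB) over the lowest range, assuming the CV/CC
# setpoints come straight from a DAC (an assumption, not a settled design).
for bits in (12, 14, 16):
    lsb_v = 6.0 / 2**bits  # volts per step over a 0-6 V range
    lsb_i = 5.0 / 2**bits  # amps per step over a 0-5 A range
    print(f"{bits}-bit: {lsb_v*1000:.3f} mV/step, {lsb_i*1000:.3f} mA/step")

# 12-bit gives ~1.465 mV/step, just missing the 1 mV target;
# 14-bit (~0.366 mV) or 16-bit (~0.092 mV) meets it with room to spare.
```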

There are also some system engineering constraints I want to apply; these are: –

  • Low component count
  • Easy to source low cost components
  • Easy to build DIY
  • Physically robust construction
  • One single PCB design for all voltage range configurations

In terms of my design approach – and I must state at this point that I am no analogue electronics expert, I really just have a passing understanding – I want to avoid the easy option in the form of the classic single-package regulator ICs that most DIY PSU builders use: the LM317T, LT3080 and the like. These are great components, don't get me wrong, but what you don't get with them is a professional grade PSU, at least not without putting a significant amount of other electronics around them, by which time you have pretty much lost any advantage over using discrete components. Apart from that, one of the main drivers for this project is to learn more about building this kind of project and to share that learning with others.

In Part 2 I will describe my first attempt at building a discrete linear voltage regulator, with a schematic of the first working circuit along with a description of what I found along the way.

HP/Agilent E3631A Power Supply Teardown & Repair

I recently bought a faulty Agilent E3631A bench power supply on eBay, which I thought would be a nice addition to my *slightly excessive* electronics hobby workbench.  These power supplies are really nice; they are engineered and built like military equipment, with good high quality materials, and they are mechanically very robust.  A great indication of how good these things are is their second hand value: they cost $900-$1000 to buy in reasonable condition, so they are not cheap.  I bought this particular one faulty and thought I would have a go at repairing it.

My first impression of the electronics inside was not great; it seemed very seriously over-engineered for what it was trying to achieve.  It seemed like the designers had a field day adding all sorts of crazy circuits because they could.  The ADC is made up of discrete ICs, there is a custom logic chip in there as well as a CPU, ROM and RAM, there are numerous power supplies for bias and control circuits all floating around each other, and most things seemed much more complicated than they need to be.  The one real surprise though was the opto isolation in the analogue domain. The CV and CC reference signals from the DAC for the +6v supply are isolated through high linearity opto-couplers (type HCNR200), something that would be crazy to do today when the cost of micro-controllers is so low and they come with all the goodies like DACs, ADCs and PWMs, making isolation in the digital domain a far more sensible design choice.

(Schematic detail: opto-isolated CV/CC references for the +6v supply)

In fairness though, I was making my initial judgements based on what's possible with today's components; things were very different 20 years ago, so given its age it's a pretty sophisticated piece of kit really.  Once working, it does appear to work very well, so my initial thoughts are not really founded on anything other than my own instinct to want things to be easier to understand and better as a result.

On with the repair….

First things first: after a quick check of the obvious big components like the series regulator transistors, I very quickly needed a schematic diagram. Agilent were less than helpful here; the manuals they put out nowadays specifically have the detailed schematics removed, despite there being a reference to them in the index. When I contacted Agilent and asked for a schematic I was told in no uncertain terms (after a 4 day response time) that they no longer make the schematics available, but they do offer a £450 exchange repair service – come on HP/Agilent, by all means offer the service but don't stop those of us who want to hack around from doing so.  The solution was to buy an original printed service manual which did include the schematics; eBay and $10 got me what I needed.  As luck would have it, while waiting for the manuals to arrive in the post, I also managed to find a manual on the net which still had the schematics present – not from any official Agilent source I might add…

I set out to work on fixing it and found I had to strip it down completely, removing the two boards, front panel, transformer and wiring from the chassis and spreading everything out on the bench. If you find yourself needing to repair one of these, be prepared to commit serious bench space to the exercise.  I have taken a bunch of photos of the teardown so you can see what all the bits look like.

(Teardown photo gallery: IMG_4987 to IMG_5033)

There were various faults with the PSU: numerous op-amps and some CMOS logic ICs were faulty, as well as two open circuit 33k resistors. At a guess I would say there was some kind of big static or high voltage discharge into or across the outputs that caused the original fault. I had to isolate the various areas of the circuit and work on them individually, making assumptions about what should be present in terms of voltage levels and feeding in lots of external signals to get to the bottom of each fault.  I struggled with the configuration of some of the analogue circuitry – fortunately for me I have a good friend who understands much more about analogue electronics than I do, so some exchange of e-mails and sections of circuits with measurements kept me on track and expanded my own knowledge too – cheers Span.

Here are all the components I ultimately had to change…
(Photo: all the replaced components)

While fixing it I also managed to introduce some faults of my own. Specifically, I managed to blow some of the HCNR200 opto-couplers – easily done: just a slip of a multimeter probe shorting out pins 2 & 3 puts 15v with no current limit straight into the internal LED, rendering it open circuit instantly.  I managed to blow four of them like this before I figured out what I kept doing – doh!

After working through these problems I finally got it working, except that when placing a dummy load on the +6 output, the voltage I was measuring went up!  A bit more inspiration from my friend Span and a scope on the output and voilà – it was bursting into oscillation under load, probably provoked by the 4 ohm wire-wound load resistor. It turned out this was down to the fact that the electrolytic capacitors soldered onto the back of the binding posts on the front panel are actually there for stability reasons – obvious once you know. I had removed the output wiring from the front panel to make it easy to work on; strapping 1000uF across the output solved the problem.

You can download the Agilent E3631A Service Guide, which includes the schematics.

Having worked on this I have been inspired to have a go at designing my own programmable PSU from the ground up, to see if I can match the specs using more modern components and a more modern design approach – I will post info on progress if I get around to it.

[UPDATED:] I am getting around to it… http://gerrysweeney.com/fully-programmable-modular-bench-power-supply/

The meaning of Life, the Universe and Value!

I was inspired to write this blog entry by a thread on Facebook.  We all know that the internet is *free*, right?  Using Google, G+, Facebook, Twitter and loads of others I could mention costs no money.  How can that be – how can companies be considered worth upwards of $100bn yet provide their services to their customers entirely for free?

The business model is actually dishonest, because it takes advantage of a human condition we in the civilised world have developed around value – more specifically our own value!  Pretty much every civilisation has a universal mechanism for its people to measure value – it's called money, and it's the fundamental device that enables trade and commerce, so it's important.  Money is also the device people use to value themselves as individuals, often comparing themselves to others; we call this wealth – think of the Rich 500 list.  If you have lots of money one week you feel "flush" and you might go on a shopping spree or treat your friends to dinner, and when you're doing these things you feel good and in some way elevated, important or in a leadership position within your social group. If you have more money than your peers you feel more successful – these feelings are addictive, and that's why craving money is natural: it leads us to those feelings.

Don't get me wrong, we all need to measure our own success and money is certainly a great and universal barometer, but one of the side effects of this is what the term "free" means to us. We are taught from an early age that more money is good – get a good job, get a nice house and so on.  The natural assumption you derive from this is that "spend less and get more for it" is good – we all love those bargains, right!

So anything that you get for "free" must be good – right? That is surely the best bargain of all.  The idea of getting something of value without having to pay money for it appeals so much that we are blinded to what is really happening.   The bottom line is that anything you get for free you are actually paying for indirectly without knowing it, and that is what free services take advantage of – and if you work out the *real* cost to you personally, it will almost certainly be much higher than if you had actually paid money for it.

Don’t talk rubbish I hear you say, how can that be?

I can't really answer that for you; if you really want to know the answer you must answer it for yourself.  All you really need to do is work out what is valuable to you individually in non-monetary terms and you will have your answer.   To help you on your way, let me give you some questions you might want to ask yourself that might point you in the right direction of discovering your own value:

  • How much do you earn an hour on average over your working life?
  • How long is your life?
  • If you worked a normal life, spent nothing and saved all of your money, how much would you end up with?  What would you do with it when you had it? And when would you do those things?
  • If you had a child, how much time (in hours) would you spend splashing around in a pool on holiday with him or her between the ages of 1 and 14?
  • How often have you found yourself on a mission that has involved you spending a vast amount of your own time justifying and claiming for that $10 meal expense or that parking ticket that you did not deserve?
  • How often have you put off the opportunity of going out or traveling to somewhere new?
  • How often have you sat there for tens of minutes waiting for your Microsoft Windows computer to update and reboot itself so you can get onto the internet, and just accepted that as the norm?
  • How long did you stand in that queue in the January Sales to buy that top you wanted for 30% off the tag price?
  • How much time do you spend on Facebook rejecting those pesky apps that want you to give away your personal information?

I could go on but that should be enough.  Work out what it is that you personally value and then work out why things that are “free of charge” are not actually “free” to you – and once you have worked that out you may well find that getting things “free of charge” is not as valuable to you as your instincts would lead you to believe. But you already knew that right?

Is .NET the holy grail of software development in the modern age!! (Part 2)

I must first open this post by apologising for the long delay in my follow up to part one. It was pointed out to me by my social networks Kung Fu practitioner (you know who you are, Chris 😉 that leaving my blog for so long was signifying its death.  I don't want to kill my blog just yet, so I will do what I can to resurrect it – those folk that know me will know that I have a lot to say!

Back to the follow-up then: is .NET the right technology stack to build a professional, packaged, off-the-shelf software application with?

As with most things in software, the answer is not clear cut.  There is no doubt that it's possible to build a high quality Windows desktop application using .NET, especially with the later versions of the .NET libraries – WPF being a great example.  So if one were to build a desktop application targeted at Windows then IMHO there is not much better to choose from.  It is worth remembering with .NET in this context that the .NET runtime versions, components and the myriad of service packs and OS hotfixes you are bombarded with must be part of your ongoing application support strategy.  Unlike compiled languages like C++, where you can almost dictate and lock down specific library versions which remain by and large static, with .NET updates to runtime components can change the execution of your application after release, causing new problems or changes in behaviour that did not exist when you ran your product through the QA cycle as part of releasing it.

For me though, the much bigger problem arises when client/server applications are developed.  Take a typical business application: some kind of presentation layer (desktop or browser), business logic which is typically server-side and accessible to the presentation layer via some sort of web services API, and a back-end SQL database sitting behind the business logic.  With .NET and Microsoft's development environment you can build all three tiers in a simple, easy, point-and-click, highly integrated way inside a single IDE. It's easy – really easy – to create a UI, create some web services, connect to a database and create code objects to serialise/deserialise data to and from the database, all in a way that follows some idealistic design patterns, and so to create an initial application quickly.  What's wrong with this, you may ask – isn't that what every product sponsor wants? Fast development, good standard design patterns and a product on time – hell yes please, I would have them all day, but… I can't tell you how many projects I have seen follow this route only to find that the initial product put out in the field is slow, unreliable, full of bugs and basically unsupportable – often to the total surprise of the team that created it in the first place.

So what goes wrong?  Well, it's first worth mentioning that there is absolutely nothing wrong with the .NET technology itself; the problem is the expectation set by the ".NET dream" and the way in which .NET is used.  Shame on Microsoft for setting such a vivid dream of perfection, but shame on the developers and architects for believing it!   Just because you can create a web service with two clicks does not mean that is the "right" way to create your particular web service; it's just the "easy" way, and that's the problem – two clicks and the project has a new web service, so progress looks fast.   Many application developers use the easy tools to create their skeleton but then spend ten times more developer hours trying to make the code created by the easy button fit their requirement.  The result is bloated code that is hard to understand and often ends up being a compromise on the original design intent; how many times have I seen software designs change just to fit the specific IDE, toolkit or individual developer's idealism?  The other problem with such a highly integrated development environment is the illusion that code will be better because the framework takes care of evils like pointers, memory management and so on.  Microsoft have done a fantastic job of branding the "managed code" idea, but this is an ideal aimed at managers, who can now hire developers that are often less skilled than their lower-level counterparts, in the belief that the code will be less buggy because the managed environment takes care of it for them.  When I hear "managed code" I think RCBPP, or "Runtime Compensation for Bad Programming Practices" – which, by the way, is perfectly fine for applications that are not expected to be fast and scalable in a green-friendly way.  Bad programming practice should not be compensated for at runtime by burning CPU cycles every time the code runs; it should be ironed out at design time so the code can run many times faster for the same CPU cycles.

.NET developers please take note – just because .NET has a garbage collection engine within its "managed code" environment, this is not an instruction to absolve yourself of memory management and other responsible coding practices in your design. .NET code is just as susceptible to performance issues, memory fragmentation, race conditions and other serious runtime conditions that make applications perform badly or unreliably.
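To illustrate the point – sketched in Python rather than .NET, because the trap is the same in any garbage-collected runtime – here is how an application can grow its memory without bound while the garbage collector works exactly as designed:

```python
# An unbounded cache: every distinct key pins ~1 MB of live data forever.
# The garbage collector is doing its job perfectly -- nothing here is
# garbage, because the dict still references it. The fix is a design
# decision (eviction policy, weak references), not something the runtime
# can make for you.
_cache: dict = {}

def render_report(key: str) -> bytes:
    if key not in _cache:
        _cache[key] = b"x" * 1_000_000  # 1 MB per entry, never evicted
    return _cache[key]

for i in range(200):
    render_report(f"report-{i}")  # ~200 MB now retained, with zero GC "leaks"
```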

Things get even worse at the database design stage, where the data model is defined. At this stage there is all too often an overabundance of triggers and stored procedures that go way beyond the data model and impose much of the business logic for the application.  A well designed three-tier architecture should be loosely coupled at each layer; the integration point should be at the web services API, not the database, so putting application logic in the database is madness, and actually demonstrates a poor overall application design with too much architectural input being taken from the DBA/data architect.  Nonetheless it happens, and much more than it ever should.  The database vendors love it, because most features that one would use triggers and SPs for are not standard; they are almost unique in behaviour to each vendor's database system, thus often tying the application to that database vendor.  Developers please note – it's only the database vendors that strongly advocate putting application logic into the data model; for the rest of us the data-level logic should be there to perform data-related tasks only. If you need to call a stored procedure to perform some application logic, or the integrity of your application depends on the database performing functions through database logic, then review your design – because it's probably wrong.
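To make that concrete, here is a minimal sketch using Python's built-in sqlite3 module and an invented one-table schema: the business rule lives in the service tier, and the database is asked to do nothing but store and fetch data, which keeps the SQL plain and portable across vendors:

```python
# Loose coupling at the data layer: the database stores and retrieves;
# the (invented) discount rule stays in the service tier. The schema and
# numbers are made up for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("INSERT INTO orders (total) VALUES (?)", (120.0,))

def total_due(order_id: int) -> float:
    row = conn.execute(
        "SELECT total FROM orders WHERE id = ?", (order_id,)
    ).fetchone()                                  # data access: plain SQL only
    total = row[0]
    return total * 0.9 if total > 100 else total  # business rule stays here

print(total_due(1))  # 108.0 -- the same rule would apply on any database
```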

Another problem I have with .NET is the seemingly de facto belief amongst some of the .NET developers I have come across that a "web service" is something you create in the Visual Studio IDE and present as SOAP (Java folk, you are guilty of this too, I am afraid).  This common belief is the result of how easy the development environment has made that particular task: it's easy – really easy – so it must be the right way, right? NO – there are so many things wrong with this idealism.

Number one: if you need a web service that is going to be hit a few times an hour then perhaps this approach is fine, but if you are trying to build a web service that is efficient and consumes the least system resources so you can get high and sustained transactional throughput, then managed code is not the way to go – not unless you are hell-bent on keeping the Dell or Sun server ecosystem fed with new customers, and air conditioners all over the world burning resources and cash.

Number two: SOAP is the standard that has become synonymous with web services for many .NET developers, yet most companies with demonstrable experience delivering high quality web services APIs on a global scale choose to use almost anything but SOAP. Why? Well, SOAP is simply not the one-size-fits-all answer – sorry, SOAP lovers… SOAP has some good ideas but is generally badly conceived. It's complicated, with over 40 pages of specification (some of which is almost nonsense), it's overweight, and it is often used in contexts where the need is the very basic one of an RPC – send a request to a server and get a response – where the overhead of SOAP can be significant.  There are many alternatives to SOAP in common use today that process more efficiently, are more browser friendly, and are simpler to understand, implement and document across numerous server, desktop and mobile platforms. The problem with these alternatives is that there is no "make me one" button in the IDE, so SOAP seems to win out amongst the SMB/point-solution application developers.
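For a flavour of what those alternatives look like, here is a minimal JSON-over-HTTP request/response service sketched with nothing but the Python standard library. The "add" operation is an invented example, but compare these few lines with a SOAP envelope, a WSDL and the supporting toolchain:

```python
# A plain request/response RPC over HTTP carrying JSON, standard library
# only. The operation (summing two numbers) is invented for illustration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class RpcHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length))  # e.g. {"a": 2, "b": 3}
        body = json.dumps({"sum": request["a"] + request["b"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), RpcHandler).serve_forever()
```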

Number three: IMHO any use of the .NET stack should be more prevalent at the front end and less prevalent – ideally non-existent – at the back end. I personally like the idea that on a server, every possible CPU clock cycle works towards servicing the user's request; that means fewer software layers, less runtime compensation for bad programming practices, and better quality, well designed and well implemented code executing as close to the machine level as possible.   This I know can be thought of as a rather old-fashioned perspective, but in a perverse and somewhat gratifying way, as green computing becomes ever more important to us, this will be the only way to write code that will run in our future data centres. I am pretty sure that programmers who cannot think, design and develop in this way will find themselves quickly outmoded as fewer layers, better software design and more efficient use of computing resources become a necessity.  Developers happy to live 20 layers above the CPU will find themselves as outdated as they think I am today for caring about how many CPU cycles it takes to service a request! By the same token, I don't believe that a "managed code" runtime and do-it-quick buttons in an IDE turn great desktop application developers into application server gurus.

Well, my rant is done. I am not sure I answered the question in a succinct way, but in summary: while I think there is a place for .NET for as long as Windows exists on the desktop, I don't think it's for the big servers of our future.  As long as .NET continues to be positioned as a viable server technology, with the IDE and tooling that can make almost anyone a .NET developer, we can only expect the trend of *NIX operating systems serving the ever increasing numbers of web services and applications we use to continue, relegating .NET – and the often poorly implemented .NET business applications – to point-solution and SMB niche products.

Is .NET the holy grail of software development in the modern age!! (Part 1)

I have spent the last 20 years of my working life involved with software development in some form or another.  My first programming experiences were on home computers like the Sinclair ZX81, Spectrum and Commodore 64, writing 6502 or Z80 assembler with just a word processor, a cassette recorder and – if you were lucky – a dot matrix printer as the only tooling to hand.  The first program I ever wrote on the PC was written in x86 assembler using the DOS 'debug' tool, and was a simple terminal emulator for connecting to a VAX over DECnet!  I stepped up to the 'C' language soon after that and eventually to C++.  I have also developed a bunch of AVR, ARM and PIC microcontroller code for numerous projects. Nowadays pretty much any language is usable to some degree or another, and my software thinking in terms of achieving a specific task has become somewhat language agnostic.  As you can tell from my past though, I am someone with a slightly unhealthy liking for low-level programming, and I have clearly had way too much spare time, late nights, coffee and pizza!

Anyway, back to the topic in hand. What does all that have to do with .NET, you are probably thinking?  Good question – but before I answer it, I must clarify what I mean by .NET for the purpose of this blog post.  I am specifically talking about the lowest level of the .NET stack, which is the CLR and the .NET framework base libraries.

My affinity with the inner workings of computers and my low-level programming experience has become a bit of a curse in the modern programming age.  When working with C and C++ this low-level understanding can be a real advantage, but when it comes to the higher level languages like those provided in .NET, one must to a large degree ignore how the machine works and think just about the language and the idealism presented within its design.

I should just state that much of what I am about to say applies equally well to Java, PHP, JavaScript, VB and other interpreted language environments and frameworks as it does to .NET, and of course it goes without saying that this is all my own humble opinion, so take it as you will. Disclaimer ends – 😉

.NET is an interpreted language. I say this because the .NET code being run is managed by an execution engine best described as a byte-code interpreter – despite what the marketing hype may indicate with things like JIT, native code generation and so on, the very notion of "managed code" means this must be true. You are probably thinking: what is wrong with that? Nothing in principle, but what is wrong, though, is what people choose to use this type of software stack for.
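If the notion of a byte-code interpreter seems abstract, Python makes the same idea easy to see – this is an illustration of the concept in a different managed runtime, not of the CLR itself:

```python
# Python, like the CLR with its IL, compiles source to an intermediate
# byte code that an execution engine then runs. The dis module lets you
# look at that byte code directly.
import dis

def add(a, b):
    return a + b

# Prints the intermediate byte-code instructions the interpreter executes.
dis.dis(add)
```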

There is no escaping that .NET is popular and has a great following – and by the way, in my opinion it's pretty good for some things too. I am not anti-.NET, and I must say the development tools Microsoft have created in Visual Studio are exceptional and by far my favourite modern-day development environment (Apple, please take note :).  The problem though is that .NET is not ideal for everything – yet despite this, some people tend to hail .NET as the next coming.  I can't tell you how many organisations I have come across over the years that have set a "corporate policy" stating that all applications must be based on .NET.  How can an organisation decide on such a policy? It's crazy, because it's unenforceable if you happen to use Microsoft's own products – many of which to this day are not developed in .NET (more on this later).  I suppose that is the power of the Microsoft marketing machine: best not sell this to developers, said Microsoft, let's sell it to the executives instead.

Let's take a look at some of the key marketing messages put out at the time, in order to answer the "Why .NET" question: –

  • Managed code means fewer bugs,  reducing maintenance time by up to XX%
  • Mixed language environment allows all of your existing skills, even across multiple languages, to work on the same project, reducing time to market
  • Platform independent (really – that's how .NET and its IL (as it was then called) were touted almost 10 years ago: using .NET will ensure you have portable code!! Yeah right – where is that!)
  • Garbage collection, exception handling and other language features makes it easier to write code that works reliably.
  • The standard framework and foundation libraries mean your VB developers can transition to C# much faster.

The message here was not really aimed at programmers but at the management and executives of companies and development teams: use .NET and you can standardise and save time, which will save lots of money and improve your competitive edge by reducing your time to market. .NET means your teams can write code in half the time and the software will work.

Clearly, this is the way to go, right? Perhaps we should look at where .NET came from. It was not a brand new innovative idea that came to someone in a vision – it was simply a natural evolution of the old VB6 environment, which was plagued with all sorts of problems that needed to be resolved in order to move forward.  Microsoft's first stab at a component model was COM, which was an evolution of an even more ridiculous specification called OLE.  VB6 was used primarily in the corporate application development world and things were messy; for the most part COM was the problem, but it was also the keystone technology that held VB6 together.  Windows today is still plagued with the aftermath of the COM nightmare, which seemingly will never go away, much to Microsoft's wishes to the contrary I am sure.  VB6 supported components through COM, but components were unreliable and difficult to work with, and component vendors would write bad code that would bring the whole application down. Although for the most part the third-party components were the problem, Microsoft did not help: COM was so badly implemented in the first place that anyone writing a COM component would invariably get things wrong, and the sheer complexity of the COM specification sealed its fate.

.NET, with its managed code environment, assembly management and standard component library, was clearly the headline wish list for VB7.  So really, .NET is an evolution of VB – a significant one, granted, but an evolution nonetheless – and this was more obvious in the early versions of the .NET framework back in 2002.  When VB6 was at its height, I remember its place was within the enterprise, with corporate organisations developing their own internal applications; there were very few off-the-shelf products around that were developed using Visual Basic 6.

Reviewing the marketing messages above, it's pretty clear that Microsoft's intention, initially at least, was to migrate their VB6 user community to .NET – their aim was the corporate in-house development teams; the ISV community, although also targeted, only really adopted .NET later on as it started to improve.   The corporate in-house development market is a real sweet spot for .NET IMHO, and it's probably the best environment for in-house corporate desktop and web application development teams, with the only real competition being Java.

Perhaps controversially though, I don't believe that .NET is the right software stack for ISVs to choose when developing the core of a professional, off-the-shelf, supportable software product – choosing .NET is far more advantageous to the ISV than it is to the customers that end up using the product!

Why? Well, I have already exceeded my word count on this one so if I have not managed to send you to sleep already, I will explain what I mean in Part 2…

Did someone get fired for buying SOA?

I heard once that someone got fired for buying SOA. Urban myth? Yeah, probably, but if anyone who worked for me "bought" SOA I think I would probably fire them too. Why so emotional, you must be thinking – get a life!  Actually, before going into the SOA question specifically: there are many things packaged up in the IT industry that are "for sale", and many more people are "sold" these packages. Did you ever hear the term "shelfware"?  This term describes a piece of software, often very, very expensive software, that was purchased by an organisation but never deployed into production – in other words it was left "on the shelf".  This is a pretty bad situation for customers and suppliers to be in, and I have even heard sales people tell excitable stories about how they closed a big deal that just ended up being shelfware. The excitement comes from the fact that the vendor received all of the money for the software but suffered none of the headaches that can be associated with implementation or deployment.  Deplorable and downright dishonest behaviour, but the software business can really be like that.

So, why not buy SOA? It sounds very useful and could revolutionise my company's business systems.  Well, here is the thing: SOA is not something you can actually buy, it's a principle. I say again: it's a PRINCIPLE.  Just for clarification, a PRINCIPLE is a law or rule that has to be, or usually is to be, followed, or can be desirably followed (thank you Wikipedia: http://en.wikipedia.org/wiki/Principle).  Think of it this way: imagine for a moment walking into a surgery and saying to a consultant "how much will it cost to make me honest?" – that would be silly, right?  You cannot buy a principle; your principles are part of you, your personal DNA, a definition of who you are, and on a personal level most people understand this. Businesses have principles too and sadly, in business, some people are vulnerable to persuasion – there are always people who will sell you something you desire. It's funny: when I used the analogy of honesty I thought I would Google to see if anyone is willing to sell me some "honesty", and sure enough, in less than a minute I found this site: http://www.free-hypnosis-mp3-downloads.com/product.php?productid=23.  I am not going to make any judgements about what this site offers for $39.90, whether it has value or whether it will work – I will leave you to draw your own conclusions – but hopefully you can see my point.  Buying (or being convinced to buy) SOA is the IT systems equivalent of doing just that!

The problem is, just like the honesty peddlers of the world, there are companies that will "sell" you SOA. That's right: for a mere $Xm you can buy the SOA principle from a vendor and transform your business systems – and if you want to believe that, go ahead and Google SOA and you will find plenty of willing SOA peddlers.

SOA has had a lot of bad (and good) press. Have a look at this entertaining site (http://soafacts.com/); it shows just how anti-SOA some views are. This is the cynical end of SOA, and many of the points there could easily have been written by people who got fired for purchasing SOA.  Actually, the situation is a lot better now than it was five years ago, because many more people understand that SOA is just a set of principles and not a product – in other words, SOA is something you do and not something you buy.

I am a fan of SOA principles, but I don't much like the prescription of technology standards upon which to deliver them, so here is my attempt to provide a more positive outlook on SOA facts (in the context of SOA being a set of principles only): –

  • Loose coupling for system design is flexible and powerful; this works at all levels of a system (a minimal sketch follows after this list).
  • SOA principles have nothing to do with technology standards. Unlike what some vendors would have you believe, SOA and SOAP, for example, are not synonymous; it just so happens that SOAP (which is a standard, not a principle), like numerous other standards, is useful in building service oriented systems. Service orientation is one of the key principles of SOA.
  • SOA is not all or nothing; pick from it the principles within which your systems are designed.
  • SOA principles are really targeted at systems designers, architects and software developers, and not aimed at business people as some assume. Instead, the systems built on SOA principles put the business people back in control of their own systems because they are easier to understand at a high level.
  • SOA is only a name; the principles of SOA existed well before the name was coined.
  • SOA was born as a way of describing specific technology standards in a business context. However, SOA has evolved to become a term that describes the principles, with the specific technology standards disconnected from them – at least for those of us who truly understand that one cannot purchase SOA.
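To make the loose coupling in the first bullet concrete, here is a minimal sketch (in Python, with invented names): the consumer depends only on a narrow contract, and either implementation can be swapped in without the consumer changing – the essence of service orientation, with no vendor product required to do it:

```python
# Loose coupling via a narrow contract. All the names here are invented
# for illustration; the point is that checkout() knows nothing about
# which implementation sits behind the PaymentService contract.
from typing import Protocol

class PaymentService(Protocol):
    def charge(self, account: str, amount: float) -> bool: ...

class InHouseBilling:
    def charge(self, account: str, amount: float) -> bool:
        return amount > 0                    # stand-in for real logic

class ThirdPartyGateway:
    def charge(self, account: str, amount: float) -> bool:
        return amount > 0                    # different provider, same contract

def checkout(payments: PaymentService, account: str, amount: float) -> str:
    return "paid" if payments.charge(account, amount) else "declined"

print(checkout(InHouseBilling(), "ACC-1", 9.99))     # either implementation
print(checkout(ThirdPartyGateway(), "ACC-1", 9.99))  # works unchanged
```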

SOA is not the perfect set of principles for every situation; you have to be able to pick and choose which principles you apply.  I would use the analogy of religion: as a general rule religion is a good positive thing for society – it helps with social order, community and leadership – but taken to an extreme it can lead to radical views and totally unacceptable behaviour by the often vulnerable people who follow extreme interpretations of those principles and beliefs. I suppose, now I think about it, you could look at buying a global SOA solution from a single vendor as an indoctrination into a cult.

I would really like to go into the detail of SOA right now, but I am conscious of the fact that I have already written more than enough to bore most readers, so instead, if you do want to know more, there is a good explanation of SOA on Wikipedia: http://en.wikipedia.org/wiki/Service-oriented_architecture

Just to sum up – take a look at the principles of SOA; if you like the idea of them and can see how to use them to your own or your organisation's advantage, adopt the principles and start using them in your designs. You don't need to go to your boss and get sign-off for a SOA project; your boss won't even know or care – but when the systems start to deliver real value and results, then your boss will start to care, because he will feel compelled to go to his boss to get sign-off for a pay rise and a bonus for you as a reward for the magic you are able to do – just believe! And even if that does not happen, at least you will end up with a flexible and adaptable system.