So tell me team…. “How Long Will That Take?”

I was inspired to write this blog post in response to a post I came across today on LinkedIn about sizing software projects (link below). Sizing software projects is something almost everyone gets wrong; it is hard, and an accurate estimate is nearly impossible. Why is that?

Well, apart from scope creep, budget changes and all the usual business variables mentioned by Patrick in his blog post, developers and product teams will never be consistent in their output – not even if you average it out over a long time, which is what SCRUM and other agile methodologies suggest when trying to establish velocity. That simply does not work; it is a fallacy. Writing software is an iterative and creative process, so “how someone feels” will change output. And I am not talking about how someone feels about the project or the work; I am talking about how someone feels about their cat dying, or their wife being pregnant, or political changes, or the weather, or the office banter, or how well or unwell they feel today – in fact, “life” guarantees this. So I am going to be a bit outrageous here and suggest an alternative way of thinking about this. Let us start by asking the most important question: “what is the point of estimating?” There are only two possible answers to that question…

1. You are going to undertake some work which you will charge your client for, so you need to know what to charge them.

The only possible way you can give your client a fixed price for work that is essentially a creative process is by substantially over-pricing the estimate, giving yourself lots of fat in the deal and the best opportunity of making a profit at the end of it. If you think you can ask a team of developers how long it is going to take so you can add a “fair” markup, you are deluded. The best option you have in this scenario is to work backwards: understand the client’s need at a high level, establish the value your customer will get from the thing you would deliver, and then put a price on it. You are looking for the best possible price the customer is willing to pay; at this point you should not be trying to establish “how much will it cost”, you should be asking the customer “how much are you prepared to pay”. Once you have a number, you can work with your developers – but instead of asking “how long will it take?” you ask “can it be done in this timeframe?”. That may seem a subtle difference, but it is actually huge, because in answering it your developers take “ownership” of the delivery commitment, and that is what you need to stand any chance of being successful. The risk you are taking is on your team, not the project – if your team succeeds then you and your business do; if they fail, so do you.

2. Your organisation wants to know how much this new software thing is going to cost and how long it is going to take, so they can budget and control any cost overrun.

The reason to budget is that managers and finance people (and the people who own the actual money that gets spent) generally need to *know* how much something costs in order to feel like they are doing their job. This really comes from an age where output was quantifiable (manufacturing, for example), but creative IP output is much harder to quantify because so many of the variables are outside of your control.

Think about this: you wake up one day with a great idea for a piece of software that will change the world; you believe it is going to make you your fortune. You are so confident that you leave your day job, set up your business, and on your first day you sit down and start to change the world – what is the first thing you are going to do?

I am going to bet that you will NOT crack open Excel and start on a time estimate and a budget – instead, you will most likely start making your software. So you get on with it. Now project forwards: you have created your software, you start to sell it, and things go well – in fact so well that you have to hire a manager or two, and then you raise some funding to go global with your idea. Now something important happens. Instead of you spending your time making the thing you believe in, the people who invested the money (which may well include yourself) want to protect that investment, so they put management controls in place. And when the question “to get this thing to the next level, what do we need to do?” is asked of you, and you answer “we need to build X, Y and Z”, the dreaded question comes: “how long will that take?”, which roughly translates to “how much will that cost?”. This is because the people asking are trying to protect cash and de-risk the investment – they do not believe in the thing you are building the way you do; to them it is just a means to an end – profit (which, by the way, is a good thing).

Going back to that first day: if you had tried to budget and determined it was going to take you six months before you could sell your first copy, and after six months you realised you were not even halfway there and had another 12 months to go – would you stop? You would make sure you could pay the bills and survive, of course, but if you did decide to stop, it would not be because of your budget; you would stop because with hindsight you no longer believed the idea was as good as you first thought – otherwise you would plough on regardless, right?

So back to the boardroom, then, and the “how long will it take?” question. The answer to that question should be: “I have no idea – how important is it?” Because either it is important and it has to get done, or it is not that important.

You would be a little more committal than that in practice, but you get the idea. Assume that an acceptable level of estimating effort is around 25% of the overall development effort (which has been my experience in software development). If you have a thing that needs to get done because it is strategically important for the business to flourish, then how long it takes is irrelevant: either it is strategically important and it gets done, or it is not. And if it “just has to get done”, what on earth are you estimating for? Just get on with it, and spend that 25% making the thing you are trying to make – just like you did in the first six months of your enterprise.

You need to ask the same questions: how important is it, and what is it worth to the business? This is the question the people trying to de-risk do not want you to ask, because they will find it as difficult to answer as you find the “how long will it take?” question. Of course, for trivial stuff like defects and general maintenance or tactical incremental development this does not really apply, but for big projects of strategic importance the “how long will it take?” question is a nonsense question, because any answer you get will be either fictitious or grossly over-estimated, and both of those are bad.

If you want to get something strategically important created, hire a great team and empower them to get on with it. If you are making them spend their time telling you how much it will cost to develop instead of developing it, then you are failing – not them. As a manager, entrepreneur, director or investor, hire software developers to do what they do best – make software. It is your job to take and manage the investment risk. If the team fails, then you either hired the wrong team or you did not manage the business well enough to sustain the effort required to make it happen; asking them for an estimate is just a way of getting a stick to beat and blame them with when things are not going well.

I have been managing (arguably very loosely) software development projects and a software business for the best part of 20 years, and I have learned a few things along the way. Perhaps more importantly, I have been doing this largely while investing my own money, so I think I know both sides of the “how long will it take?” question very well.

The article I responded to:
https://www.linkedin.com/pulse/software-project-estimation-broken-patrick-austin

Microchip PIC chips could have been the Power Behind Arduino!!!

So before I get underway: this article is about Microchip PIC micro-controllers. Please understand that I do not want to get into flame wars with Atmel, MSP430 or other fanboys. My personal preference has been PICs for many years; that is a statement of personal preference. I am not saying that PICs are better than anything else, I am just saying I like them better – so please don’t waste your time trying to convince me otherwise. I have evaluated most other platforms numerous times already, so before you suggest I look at your XYZ platform of choice, save your time – the odds are good I have already done so, and I am still using PICs.

OK, full RANT mode enabled…

As I understand it, Microchip are in the silicon chip business selling micro-controllers. Actually, Microchip make some really awesome parts, and I am guessing here, but I suspect they want to sell lots and lots of those awesome parts, right? So why do they suppress their developer community with crippled compiler tools unless you pay large $$$? As a silicon maker, they *NEED* to provide tools to make it viable for a developer community to use their parts. Charging for the tools is ridiculous – it is not as if you can buy Microchip tools and then use them to develop for other platforms, so the value of these tools is entirely intrinsic to Microchip’s own value proposition. It might work if you have the whole market wrapped up, but the micro-controller market is awash with other great parts and free, un-crippled tools.

A real positive step forward for Microchip was the introduction of the MPLAB-X IDE. While not perfect, it is infinitely superior to the now discontinued MPLAB 8 and older IDEs, which were, err, laughable compared to other contemporary tools. The MPLAB-X IDE has a lot going for it: it runs on multiple platforms (Windows, Mac and Linux) and it mostly works very well. I have been a user of MPLAB-X from day one, and while the migration was a bit of a pain and the earlier versions had a few odd quirks, every update of the IDE has just gotten better and better. I make software products in my day job, so I know what it takes, and to the product manager(s) and team that developed the MPLAB-X IDE: I salute you for a job well done.

Now of course the IDE alone is not enough; you also need a good compiler. For the Microchip parts there are now basically three compilers: XC8, XC16 and XC32. As I understand it, these are based on the HI-TECH PRO compilers that Microchip acquired when they bought HI-TECH in 2009, and since that acquisition they have been slowly consolidating the compilers and obsoleting the old MPLAB C compilers. Microchip getting these tools is a very good thing, because they needed something better than they had – but they had to buy the Australian company HI-TECH Software to get them. It would appear they could not develop these themselves, so acquiring them was the logical thing to do. I can only speculate that the purchase of HI-TECH was justified internally and/or to investors on the promise of generating incremental revenue from the tools – otherwise why bother buying them? Any sound investment would be backed by a revenue plan, and the easiest way to do that would be to say: in the next X years we can sell Y compilers for Z dollars and show a return on investment. Can you imagine an investor saying yes to “Let’s buy HI-TECH for $20M (I just made that number up) so we can refocus their efforts on Microchip parts only and then give these really great compilers and libraries away!”? Any sensible investor or finance person would ask “why would we do that?” or “where is the return that justifies the investment?”. But was expanding revenue the *real* reason for Microchip buying HI-TECH, or was there an undercurrent of need for the quality the HI-TECH compilers offered over the Microchip compilers? It was pretty clear that Microchip themselves were way behind – but that storyline would not go down well with investors. Imagine suggesting “we need to buy HI-TECH because they are way ahead of us and we cannot compete”; anyone looking at that from a purely financial point of view would probably not understand why having the tools was important without a financial rationale showing on paper that the investment would yield a return.

Maybe Microchip bought HI-TECH as a strategic move to provide better tools for their parts, but I am assuming there must have been some internal ROI commitment. Why? Because Microchip do have a very clear commercial strategy around their tools: they provide free compilers, but they are crippled, generating unoptimised code – in some cases the generated code has junk inserted, and the optimised version simply removes that junk! I have also read somewhere that you can hack the settings to pass different command-line options and re-optimise the produced code even on the free version, because at the end of the day it is just GCC behind the scenes. However, doing this may well void your licence to use their libraries.

So then, are Microchip in the tools business? Absolutely not. In a letter from Microchip to its customers after the acquisition of HI-TECH [link], they stated that “we will focus our energies exclusively on Microchip related products”, which meant dropping future development of tools for the non-Microchip parts that HI-TECH also used to support. As an independent provider, HI-TECH could easily justify selling their tools for money: their value proposition was that they provided compilers much better than the below-par compilers put out by Microchip, and being independent carried an implicit justification for charging. As a result, Microchip customers had a choice – they could get the crappy compiler from Microchip, or they could buy a far superior one from HI-TECH. It all makes perfect sense. You could argue that HI-TECH only had market share in the first place because the Microchip tools were poor enough to leave a gap for someone to fill. Think about it: if Microchip had made the best tools from day one, they would have had the market share, companies like HI-TECH would not have had a market opportunity, and Microchip would never have been in the position of feeling compelled to buy HI-TECH to regain ground and possibly some credibility in the market. I would guess that Microchip’s early strategy included “let the partners and third parties make the tools, we will focus on silicon”, which was probably fine at the time – but the world moved on, and suddenly compilers and tools became a strategically important element of Microchip’s go-to-market execution.

OK, so Microchip now own the HI-TECH compilers – why should they not charge for them? HI-TECH did, and customers were prepared to pay, so why not? Well, I think there is a very good reason: Microchip NEED to make tools so that the EE community can design their parts into products that go to market. As a separate company, HI-TECH were competing with Microchip’s compilers, but now Microchip own the HI-TECH compilers, there is no competition – and if we agree that Microchip *MUST* make compilers to support their parts, then they cannot really justify selling them the way HI-TECH could as an independent company. This is especially true given that Microchip decided to obsolete their own compilers, the ones the HI-TECH compilers previously competed with – no doubt partly to reduce the cost (and perhaps the team size needed) of maintaining two code bases, and most likely to give existing customers of the old compilers a solution to those outstanding “old compiler” issues. So they end up with a model of giving away limited free editions and selling the unrestricted versions to customers willing to pay. On the face of it, that is a reasonable strategy – but it alienates the very people they need to be passionate about their micro-controller products.

I have no idea what revenue Microchip derives from their compiler tools. I can speculate that their main revenue is from the sale of silicon, which probably makes the tools revenue look insignificant. Add to that the undesirable cost in time and effort of maintaining and administering the licensed versions – dealing with “my licence does not work”, “I have changed my network card and now the licence is invalid”, “I need to upgrade from this and downgrade from that” support questions, and so on. This must be a drain on the company; the energy going into running the compiler tools as a commercial operation must be distracting to the core business at the very least.

Microchip surely want as many people as possible designing their parts into products, but the model they have alienates individual developers, and this matters: even on huge projects with big budgets, the developers and engineers have a lot of say in the BOM and preferred parts. Any good engineer is going to use parts that they know (and perhaps even love), and any effective manager is going to go with the hearts and minds of their engineers – that is how you get the best out of your teams. The idea that big-budget projects will not care about spending $1000 on a tool is flawed; they care more than you think. Microchip charging for their compilers and libraries is just another barrier to entry – and that matters a lot.

So where is the evidence that open and free tools matter? Well, let’s have a look at Arduino. You cannot help but notice that the solution to almost every project that needs a micro-controller these days seems to be an Arduino – and that platform has been built around Atmel parts, not Microchip parts. What happened here? With the Microchip parts you have much more choice, and the on-board peripherals are generally broader in scope, with more options and capabilities. For the kinds of things Arduinos get used for, Microchip parts should have been the more obvious choice – but Atmel parts were used instead. Why was that?

The success of the Arduino platform is undeniable. If you put Arduino in your latest development product’s name, it is pretty much a foregone conclusion that you are going to sell it – just look at the frenzy amongst the component distributors and the Chinese dev board makers all getting in on the Arduino act. And why is this? The Arduino platform has made micro-controllers accessible to the masses – and I don’t mean easy to buy, I mean easy to use for people who would otherwise not be able to set up a complex development environment, toolset and language. The Arduino designers also removed the need for a special programmer/debugger tool: a simple USB port and a boot-loader mean that with just a board, a USB cable and a simple development environment you are up and running, which is really excellent. You are not going to do real-time data processing or high-speed control systems with an Arduino because of its hardware abstraction, but for many other things the Arduino is more than good enough; it is only a matter of time before Arduino code and architectures start making it into commercial products, if they have not already done so. There is no doubt that the success of the Arduino platform has had a positive impact on Atmel’s sales and revenues.

I suspected I was pretty close to the mark in thinking that, because Atmel used an open toolchain based on the GCC compiler and open-source libraries, the openness and accessibility of that toolchain drove the adoption decision when the team behind the Arduino project started work on their programming language. But that was pure speculation on my part, and it was bugging me, so I thought I would try to find out more.

Now this is the part where the product team, executives and the board at Microchip should pay very close attention. I made contact with David Cuartielles, who is an Assistant Professor at Malmo University in Sweden but, more relevantly here, one of the co-founders of the original Arduino project. I wrote to David and asked him…

“I am curious to know what drove the adoption of the Atmel micro controllers for the Arduino platform? I ask that in the context of knowing PIC micro controllers and wondering with the rich on-board peripherals of the PIC family which would have been useful in the Arduino platform why you chose Atmel devices.”

David was very gracious and responded within a couple of hours with the following statement:

“The decision was simple, despite the fact that -back in 2005- there was no Atmel chip with USB on board unlike the 18F family from Microchip that had native USB on through hole chips, the Atmel compiler was not only free, but open source, what allowed migrating it to all main operating systems at no expense. Cross-platform cross-compilation was key for us, much more than on board peripherals.”

So, on that response, Microchip should pay very close attention. The 18F series PDIP micro-controller with on-board USB was the obvious choice for the Arduino platform, and had the tooling strategy been right, the entire Arduino movement today could well have been powered by Microchip parts instead of Atmel parts – imagine what that would have done for Microchip’s silicon sales!!! The executive team at Microchip need to learn from this: the availability of tools and the enablement of your developer community matter – a lot. In fact, a lot more than your commercial strategy around tooling suggests you believe.

I also found this video of Bob Martin at Atmel stating pretty much the same thing.

So back to Microchip – here is a practical example of what I mean. In a project I am working on using a PIC32, I thought it would be nice to structure some of the code using C++, but I found that in order to use the C++ features of the free XC32 compiler I have to install a “free licence”, which requires me not only to register my personal details on the Microchip web site but also to tie the request to the MAC address of my computer. So I am suspicious: there is only one purpose for building a mechanism like this, and that is to control access to certain functions for commercial reasons. I read on a thread in the Microchip forums that this is apparently to allow Microchip management to assess the demand for C++, so they can decide whether to put more resources into developing C++ features – a stock corporate response, and I for one don’t buy it. I would say it is more likely that Microchip want to assess a commercial opportunity and collect contact information for the same reasons, perhaps even to feed a marketing and sales pipeline. What’s worse, after following all the instructions I was still getting compiler errors stating that my C++ licence had not been activated. Posting a request for help on the Microchip forum eventually resolved this, but it was painful; it should have just worked – way to go, Microchip. Now, because of the optimisation issues, I am sitting here wondering if I should take the initiative and start looking at a Cortex M3 or an Atmel AVR32 – I wonder how many thoughts like this are provoked in Microchip customers and developers by stupid tooling issues like this?

You do not need market research to know that for 32-bit micro-controllers, C++ is a must-have in the competitive landscape. Not having it leaves Microchip behind the curve again – this is not an extra PRO feature, it should be there from the off. What are you guys thinking! The position is made even worse by the fact that the XC32 toolchain is built broadly around GCC, which is already a very good C++ compiler – why restrict access to this capability in your build tool? If Microchip wanted to know how many people want to use the C++ compiler, all they needed to do was ask the community, or perform a simple call-home on installation, or just apply common sense. All routes lead to the same answer, and none of them involve collecting people’s personal information for marketing purposes. The whole approach is wrong-headed.

This raises another question, too: if Microchip are building their compilers around GCC, which is an open-source GPL project, how do they justify charging a licence fee for un-crippled versions of their compiler? The terms of the GPL require that Microchip make the source code available for the compilers and any derived works, so any crippling that is added could simply be removed by downloading the source code and taking the crippling behaviour out. It is clear, however, that Microchip keep all the headers proprietary and non-sharable, effectively closing out any competing open-source projects. That is a very carefully crafted closed-source strategy that takes full advantage of open-source initiatives such as GCC – not technically in breach of the GPL licence terms, but a one-sided grab from the open-source community, and not playing nice. Bad Microchip…

Late in the game, Microchip are trying to work their way into the Arduino customer base by supporting the chipKIT initiative. It is rumoured that chipKIT was actually started by Microchip to fight their way back into the Arduino space and take advantage of the buzz and demand for Arduino-based tools – I have no evidence to back that up, but it seems likely. Microchip and Digilent have brought out a 32-bit PIC32-based solution: two boards called Uno32 and Max32, both positioned by Microchip as “a 32-bit solution for the Arduino community”. These are meant to be software- and hardware-compatible, although there are the inevitable incompatibilities for both shields and software – and, oddly, they are priced slightly cheaper than their Arduino counterparts. Funny, that 🙂

Here is an interview with Ian Lesnet at dangerousprototypes.com and the team at Microchip, talking about the introduction of the Microchip-based Arduino-compatible solution.

There is also a great follow-up article with lots of community comment, all basically saying the same thing.

http://dangerousprototypes.com/2011/08/30/editorial-our-friend-microchip-and-open-source/

Microchip have a real uphill battle in this space, with the ARM Cortex M3-based Arduino Due bringing an *official* 32-bit solution to the Arduino community. Despite the chipKIT having an allegedly “open source” compiler (the “chipKIT compiler”), it is still riddled with closed-source bits. And despite the Uno32 and Max32 heavily advertising features like Ethernet and USB (for which Microchip are known for great hardware and software implementations), these are only available on the Uno32 and Max32 platforms if you revert to Microchip’s proprietary tools and libraries – so the advertised benefits and the actual benefits to the Arduino community are different. And that is smelly too…

OK, so I am nearing the end of my rant, and I am clearly complaining about the crippling of Microchip-provided compilers. But I genuinely believe that Microchip could and should do better, and for some reason – perhaps a little brand loyalty – I actually care; I like the company, and the parts and the tools they make.

One of my own pet hates in business is listening to someone rant on about how bad something is without having any suggestions for improvement. So, for what it’s worth, if I were in charge of product strategy at Microchip, I would want to do the following:

  • I would hand the compilers over to the team or the person in charge that built the MPLAB-X IDE, and have *ALL* compilers pre-configured and installed with the IDE right out of the box. This would remove the need for users to set up individual compilers and settings. Note to Microchip: IDE stands for Integrated Development ENVIRONMENT, and any development ENVIRONMENT is incomplete in the absence of a compiler.
  • I would remove all crippling features from all compilers, so that every developer has access to build great software for projects based on Microchip parts.
  • I would charge for priority support, probably at rates comparable to those currently charged for the PRO editions of the compilers. This way, companies with big budgets can pay for the support they need and get additional value from Microchip tools, while Microchip derives its desired PRO-level revenue stream without crippling its developer community.
  • I would provide the source code to all libraries. This is an absolute must for teams developing critical applications for medical, defence and other systems that require full audit and code review capability; by not doing so, you are restricting potential market adoption.
  • I would stop treating the compilers as a product revenue stream. I would move their development to a “cost of sale” line on the P&L, set a budget that keeps them ahead of the curve, and put them under the broader marketing or sales-support banner – they are there to help sell silicon, and good tools create competitive advantage. I would move the tools developers completely away from commercial concerns and get them 100% focused on making the tools better than anything else out there.
  • I would use my new-found open strategy for tooling to both contribute to, and fight for my share of, the now huge Arduino market. The chipKIT initiative is a start, but it is very hard to make progress unless the tool and library strategy is addressed.

Of course, this is all my opinion with speculation and assumptions thrown in, but there is some real evidence too – I felt strongly enough about it to put this blog post together. I really like Microchip parts, and I can even live with the stupid strategy they seem to be pursuing with their tools, but I can’t help feeling I would like to see Microchip doing better and taking charge of a market they seem to be losing their grip on. There was a time, a few years ago, when programming a micro-controller and “PIC” were synonymous – not any more; it would seem Arduino now holds that place. All of that said, I am not saying I do not want to see the Arduino team and product continue to prosper – I do. The founders and supporters of this initiative, as well as Atmel, have done an amazing job of demystifying embedded electronics and fuelling the maker revolution – a superb demonstration of how a great strategy can change the world.

If I have any facts (and I said facts, not opinions) wrong, I would welcome being corrected – but as a reminder, I will ignore any comments along the lines of Atmel is better than PIC is better than MSP430 etc. That was not the goal of this article.

Please leave your comments on the article

Build a 10MHz Rubidium Frequency Standard and Signal Distribution Amp for my Lab

Having gotten myself a Rubidium Frequency Standard, I found that the unit on its own is not that useful; it is really just a component, and needs a supporting PSU and a decent enclosure to be useful. I was searching around for something suitable when I was directed to a robust, quality unit being sold on eBay for just £20 with an unbelievable level of re-usable content; it turned out to be almost perfect for turning the Rubidium Standard into a useful lab instrument. Rarely does such a fine marriage of junk bits come together to make something really useful!

There was a lot to cover – the whole thing was built in an afternoon – and as a result this is a long video at 1 hour 16 minutes, so be prepared…

Video of the project build (1hr 16min)

The PIC Micro-controller – PIC12F675
The original plan was to use the PIC for three functions: first, to make the power LED flash while the RFS was warming up and stay on solid when locked; second, to generate a 1 PPS signal from the 10MHz signal; and third, to generate a PWM signal to control the fan speed. As it turns out, the RFS already has a 1 PPS output on pin 6 of the DB9 connector, so there was no need for that. It also transpired that the only fan I had to hand was a three-wire fixed-speed fan, so I did not need the PWM signal either. That left just the power LED, which is what the PIC ended up controlling. Here is the schematic for the PIC and the source code.

And here is the compiled HEX file if you want to program the chip without compiling.
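For the curious, here is a minimal sketch of what the LED supervisor logic boils down to, in XC8-style C. To be clear, this is an illustration rather than the project’s actual source – the pin assignments (LED on GP0, lock signal on GP1) and the configuration bits are assumptions:

    // Illustrative sketch only - the pin assignments and fuse settings
    // here are assumptions, not the project's actual source code.
    #include <xc.h>

    #pragma config FOSC = INTRCIO   // internal 4MHz oscillator, GP4/GP5 as I/O
    #pragma config WDTE = OFF       // watchdog off
    #pragma config MCLRE = OFF      // MCLR pin usable as GPIO

    #define _XTAL_FREQ 4000000UL

    #define LED   GPIObits.GP0      // power LED (assumed wiring)
    #define LOCK  GPIObits.GP1      // lock indication from the RFS (assumed wiring)

    void main(void)
    {
        ANSEL  = 0;                 // all pins digital
        CMCON  = 0x07;              // comparator off
        TRISIO = 0b00000010;        // GP1 input, everything else output

        for (;;) {
            if (LOCK)
                LED = 1;            // locked: LED on solid
            else
                LED = !LED;         // warming up: flash at roughly 1Hz
            __delay_ms(500);
        }
    }

The real firmware is in the attached source; the point is simply how little the PIC ends up having to do once the 1 PPS and PWM requirements fall away.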

The Video Amp – Extron ADA 6 300MX HV
The video amp unit I used in this hack is made by Extron; the model number (on the front panel) is ADA 6 300MX HV. When I communicated with the seller, he said he had about 30 of them, so if this is useful to you and you want to make your own, I would grab one before they are gone. The basic outline schematic for an input channel is here:

The video op-amp chip used in this unit is a CLC409; the Texas Instruments CLC409 data sheet is attached to this article.

The heat sink I have ordered can be found on eBay; search for “150x25x60mm Aluminum Heat Sink for LED”.

The switch-mode PSU I used can also be found on eBay; search for “Enclosed Power Supply SMPS, 15V, 2.4A, 36W”. It is made by TDK-Lambda, and the part number is LS35-15.

See you next time.

Horizontal and Vertical flip transformations of a QGraphicsItem in Qt QGraphicsView

I have been reviewing and assessing Qt for a small software project. Having spent only a small amount of time with the development environment, I have managed to create a small program that allows me to draw 2D graphics. I have been using C++ for many years, and for GUI-related work I have mostly used MFC. I like C++ for UIs because it makes programs that are fast and responsive; I plan to post an article comparing .NET, Java and C++ for building desktop applications in the near future.

Back to the library evaluation. One of the things I needed to be able to do was flip a vector graphic item either horizontally or vertically around its centre axis, meaning that after the flip the object should be in exactly the same place in the world view as it was before the transformation. There are built-in functions for doing rotations around a centre point, but not this specific operation. The 2D graphics system in QGraphicsView/QGraphicsItem uses matrix transforms, which are at best very complicated – you need a good understanding of mathematics and a lot of patience to comprehend this stuff. I needed to understand this:

[Image: the 3×3 transformation matrix used by QTransform]

    m11  m12  m13
    m21  m22  m23
    m31  m32  m33

WTF! Does this mean anything to you? No, me neither – thank heavens for those bright folk who work this stuff out for us… If you do need to know more about the maths behind this, you can check out this document: http://en.wikipedia.org/wiki/Transformation_matrix which, from what I can tell, gives you the explanations – but I have to confess I switched off very quickly.

Having read that document I wanted to give up – too hard, or I am too lazy. Either way, I got a beer from the fridge and got nowhere. But not being one to give up so easily, I decided that instead of learning the maths I would go back to debugging and good old-fashioned trial and error! Actually, the Qt library has done most of the hard work; the fact that I could achieve my goal with a little bit of debugging, trial and error is testimony to that.

I did search around for examples, and while there were lots of questions about this, there appeared to be very little in the way of answers. Perhaps I am just a maths luddite, or maybe there are many more people who shy away from the maths involved and say nothing 🙂 In the end I came up with an approach that appears to work very well, and given there does not appear to be much information out there on this particular requirement, I thought I should make an example available. Having all the individual variables makes for easy debugging – you can see what’s going on. I also included a simple debug function for dumping the transform matrix data.
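Here is a minimal sketch of the idea (a reconstruction for illustration – the function names are mine): compose a translate to the item’s centre, a negative scale, and a translate back, applied on top of the item’s existing transform, so the mirror happens about the item’s own centre and the item stays put in the scene:

    #include <QGraphicsItem>
    #include <QPointF>
    #include <QTransform>
    #include <QDebug>

    // Dump the 3x3 transform matrix - handy when debugging combined transforms.
    static void dumpTransform(const QTransform &t)
    {
        qDebug() << t.m11() << t.m12() << t.m13();
        qDebug() << t.m21() << t.m22() << t.m23();
        qDebug() << t.m31() << t.m32() << t.m33();
    }

    // Flip an item in place about the centre of its bounding rect.
    // horizontal == true mirrors left/right, false mirrors top/bottom.
    static void flipItem(QGraphicsItem *item, bool horizontal)
    {
        const QPointF c = item->boundingRect().center();

        QTransform t = item->transform();
        t.translate(c.x(), c.y());              // move the origin to the centre
        t.scale(horizontal ? -1.0 : 1.0,        // mirror about that origin
                horizontal ? 1.0 : -1.0);
        t.translate(-c.x(), -c.y());            // move the origin back
        item->setTransform(t);

        dumpTransform(item->transform());       // see what we ended up with
    }

The translate/scale/translate sandwich is the same trick used for rotating around an arbitrary point, and because it is composed in the item’s own coordinate space, two successive flips cancel out exactly.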

+1 for Qt – check it out http://qt.nokia.com/products/

Is .NET the holy grail of software development in the modern age!! (Part 2)

I must first open this post by apologising for the long delay in my follow-up to part one. It was pointed out to me by my social networks Kung Fu practitioner (you know who you are, Chris 😉 that leaving my blog for so long was signifying its death. I don’t want to kill my blog just yet, so I will do what I can to resurrect it – those folk that know me probably know that I have a lot to say!

Back to the follow-up then: is .NET the right technology stack to build a professional, packaged, off-the-shelf software application with?

As with most things in software, the answer is not clear-cut. There is no doubt that it is possible to build a high-quality Windows desktop application using .NET, especially with the later versions of the .NET libraries – WPF being a great example. So if one were to build a desktop application targeted at Windows, then IMHO there is not much to choose that is better. It is worth remembering in this context that the .NET runtime versions, components and the myriad of service packs and OS hot fixes you are bombarded with must be part of your ongoing application support strategy. Unlike compiled languages like C++, where you can almost dictate and lock down specific library versions which remain by and large static, with .NET an update to a runtime component can change the execution of your application after your release, causing new problems or changes in behaviour that did not exist when you ran your product through the QA cycle as part of releasing it.

For me, though, the much bigger problem arises when client/server applications are developed. Take a typical business application: some kind of presentation application (desktop or browser), business logic which is typically server-side and accessible to the presentation layer via some sort of web services API, and a back-end SQL database sitting behind the business logic. With .NET and Microsoft’s development environment, you can build all three tiers in a simple, easy, point-and-click, highly integrated way inside a single IDE. It is easy – really easy – to create a UI, create some web services, connect to a database, and create code objects to serialise/deserialise data to and from the database, all in a way that follows some idealistic design patterns, and to get an initial application up quickly. What is wrong with this, you may ask – isn’t that what every product sponsor wants: fast development, good standard design patterns and a product on time? Hell yes, I would have all of that all day, but… I can’t tell you how many projects I have seen follow this route, only to find that the initial product put out in the field is slow, unreliable, full of bugs and basically unsupportable – often to the total surprise of the team that created the product in the first place.

So what goes wrong? Well, it is first worth mentioning that there is absolutely nothing wrong with the .NET technology itself; the problem is the expectation set by the “.NET dream” and the way in which .NET is used. Shame on Microsoft for setting such a vivid dream of perfection, but shame on the developers and architects for believing it! Just because you can create a web service with two clicks does not mean that is the “right” way to create your particular web service – that is just the “easy” way to create it, and that is the problem: two clicks and the project has a new web service, so progress looks fast. Many application developers use the easy tools to create their skeleton, but then spend ten times more developer hours trying to make the code created by the easy button fit their requirement. The result is bloated code that is hard to understand and often ends up being a compromise on the original design intent; how many times have I seen software designs change just to fit the specific IDE, toolkit or individual developer’s idealism? The other problem with such a highly integrated development environment is the illusion that code will be better because the framework takes care of evils like pointers, memory management and so on. Microsoft have done a fantastic job of branding the “managed code” idea, but it is an ideal aimed at managers, who can now hire developers that are often less skilled than their lower-level counterparts, on the promise that the code will be less buggy because the managed environment takes care of it for them. When I hear “managed code” I think RCBPP, or “Runtime Compensation for Bad Programming Practices” – which, by the way, is perfectly fine for applications that are not expected to be fast and scalable in a green-friendly way. Bad programming practice should not be compensated for at runtime by burning CPU cycles every time the code runs; it should be ironed out at design time, so the code can run many times faster for the same CPU cycles.

.NET developers, please take note: just because .NET has a garbage collection engine within its “managed code” environment, that is not an instruction to absolve yourself of memory management and other responsible coding practices in your design. .NET code is just as susceptible to performance issues, memory fragmentation, race conditions and other serious runtime conditions that make applications perform badly or unreliably.

Things get even worse at the database design stage, where the data model is defined. At this stage there is all too often an overabundance of triggers and stored procedures that go way beyond the data model and impose much of the application’s business logic. A well-designed three-tier architecture should be loosely coupled at each layer; the integration point should be the web services API, not the database. Putting application logic in the database is madness, and actually demonstrates a poor overall application design, with too much architectural input taken from the DBA or data architect. Nonetheless it happens, and much more than it ever should. The database vendors love it, because most of the features one would use triggers and stored procedures for are not standard – they are almost unique in behaviour to each vendor’s database system, thus tying the application to that vendor. Developers, please note: it is only the database vendors who strongly advocate putting application logic into the data model. For the rest of us, the data-level logic should be there to perform data-related tasks only; if you need to call a stored procedure to perform some application logic, or the integrity of your application depends on the database performing functions through database logic, then review your design – because it is probably wrong.

Another problem I have with .NET is the seemingly de facto belief amongst some of the .NET developers I have come across that a “web service” is something you create in the Visual Studio IDE and present as SOAP (Java folk, you are guilty of this too, I am afraid). This common belief is the result of how easy the development environment has made that particular task – it is easy, really easy, so it must be the right way, right? NO – there are so many things wrong with this idealism.

Number one: if you need a web service that is going to be hit a few times an hour, then perhaps this approach is fine. But if you are trying to build a web service that is efficient and consumes the least system resources, so you can get a high and sustained transactional throughput, then managed code is not the way to go – not unless you are hell-bent on keeping the Dell or Sun server ecosystem fed with new customers, and air conditioners all over the world burning resources and cash.

Number two: SOAP is the standard that has become synonymous with web services for many .NET developers, yet the companies with demonstrable experience delivering high-quality web services APIs on a global scale choose almost anything but SOAP. Why? Well, SOAP is simply not the one-size-fits-all answer – sorry, SOAP lovers… SOAP has some good ideas but is generally badly conceived. It is complicated, with over 40 pages of specification, some of which is almost nonsense, and it is overweight for what is often the very basic need of an RPC – send a request to a server, get a response – where the overhead of SOAP can be significant. There are many alternatives to SOAP in common use today that process more efficiently, are more browser-friendly, and are simpler to understand, implement and document across numerous server, desktop and mobile platforms. The problem with these alternatives is that there is no “make me one” button in the IDE, so SOAP seems to win out amongst the SMB and point-solution application developers.

Number three: IMHO, any use of the .NET stack should be more prevalent at the front end, and less prevalent – ideally non-existent – at the back end. I personally like the idea that on a server, every possible CPU clock cycle works towards servicing the user’s request: fewer software layers, less runtime compensation for bad programming practices, and better-quality, well-designed and well-implemented code executing as close to the machine level as possible. I know this can be thought of as a rather old-fashioned perspective, but in a perverse and somewhat gratifying way, as green computing becomes ever more important to us, this will be the only way to write code that will run in our future data centres. I am pretty sure that programmers who cannot think, design and develop in this way will find themselves quickly outmoded, as fewer layers, better software design and more efficient use of computing resources become a necessity. Developers happy to live 20 layers above the CPU will find themselves as outdated as they think I am today for caring how many CPU cycles it takes to service a request! By the same token, I don’t believe that a “managed code” runtime and do-it-quick buttons in an IDE turn great desktop application developers into application server gurus.

Well, my rant is done. I am not sure I answered the question in a succinct way, but in summary: while I think there is a place for .NET for as long as Windows exists on the desktop, I don’t think it is for the big servers of our future. As long as .NET continues to be positioned as a viable server technology, with an IDE and tooling that can make almost anyone a .NET developer, we can only expect the continuing trend of *NIX operating systems serving the ever-increasing number of web services and applications we use, relegating .NET – and the often poorly implemented .NET business applications – to point solutions and SMB niche products.

Is .NET the holy grail of software development in the modern age!! (Part 1)

I have spent the last 20 years of my working life involved with software development in some form or another. My first programming experiences were on home computers like the Sinclair ZX81, Spectrum and Commodore 64, writing 6502 or Z80 assembler with just a word processor, a cassette recorder and, if you were lucky, a dot matrix printer as the only tooling to hand. The first program I ever wrote on the PC was written in x86 assembler with the DOS ‘debug’ tool, and was a simple terminal emulator for connecting to a VAX over DECnet! I stepped up to the ‘C’ language soon after that, and eventually to C++. I have also developed a bunch of AVR, ARM and PIC microcontroller code for numerous projects too. Nowadays, pretty much any language is usable to some degree or another, and my software thinking in terms of achieving a specific task has become somewhat language-agnostic. As you can tell from my past, though, I am someone with a slightly unhealthy liking for low-level programming, and I have clearly had way too much spare time, late nights, coffee and pizza!

Anyway, back to the topic in hand. What does all that have to do with .NET, you are probably thinking? Good question – but before I answer it, I must clarify what I mean by .NET for the purpose of this blog post. I am specifically talking about the lowest level of the .NET stack: the CLR and the .NET framework base libraries.

My affinity with the inner workings of computers and my low-level programming experience has become a bit of a curse for me in the modern programming age. When working with C and C++ this low-level understanding can be a real advantage, but when it comes to higher-level languages like those provided in .NET, one must to a large degree ignore how the machine works and think just about the language and the idealism presented within its design.

I should just state that much of what I am about to say applies equally well to Java, PHP, JavaScript, VB and other interpreted language environments and frameworks as it does to .NET, and of course it goes without saying that this is all my own humble opinion, so take it as you will. Disclaimer ends… 😉

.NET is an interpreted language. I say this because the .NET code that runs is being managed by an execution engine best described as a byte-code interpreter – despite what the marketing hype may indicate with things like JIT, native code generation and so on, the very notion of “managed code” means this must be true. You are probably thinking: what is wrong with that? Well, nothing in principle – but what is wrong is what people choose to use this type of software stack for.

There is no escaping that .NET is popular and has a great following – and, by the way, in my opinion it is pretty good for some things too. I am not anti-.NET, and I must say the development tools Microsoft have created in Visual Studio are exceptional, and by far my favourite modern-day development environment (Apple, please take note :). The problem, though, is that .NET is not ideal for everything – yet despite this, some people tend to hail .NET as the next coming. I can’t tell you how many organisations I have come across over the years that have set a “corporate policy” stating all applications must be based on .NET. How can an organisation decide on such a policy? It is crazy, because it is unenforceable if you happen to use Microsoft’s own products – many of which, to this day, are not developed in .NET (more on this later). I suppose that is the power of the Microsoft marketing machine: best not sell this to developers, said Microsoft, let’s sell it to the executives instead.

Let’s take a look at some of the key marketing messages put out at the time, in order to answer the “why .NET?” question:

  • Managed code means fewer bugs, reducing maintenance time by up to XX%
  • A mixed-language environment allows all of your existing skills, even across multiple languages, to work on the same project, reducing time to market
  • Platform independence (really – that is how .NET and the (then-called) IL was touted almost 10 years ago: using .NET will ensure you have portable code!! Yeah, right – where is that?)
  • Garbage collection, exception handling and other language features make it easier to write code that works reliably
  • The standard framework and foundation libraries mean your VB developers can transition to C# much faster

The message here was not really aimed at programmers, but at the management and executives of companies and development teams: use .NET and you can standardise and save time, which will save lots of money and improve your competitive edge by reducing your time to market. .NET means your teams can write code in half the time, and the software will work.

Clearly, this is the way to go, right? Perhaps we should look at where .NET came from. It was not a brand new, innovative idea that came to someone in a vision – it was simply a natural evolution of the old VB6 environment, which was plagued with all sorts of problems that needed to be resolved in order to move forward. Microsoft’s first stab at a component model was COM, which was an evolution of an even more ridiculous specification called OLE. VB6 was used primarily within the corporate application development world, and things were messy; for the most part COM was the problem, but it was also the keystone technology that held VB6 together. Windows today is still plagued with the aftermath of the COM nightmare, which seemingly will never go away, much to Microsoft’s wishes to the contrary I am sure. VB6 supported components through COM, but components were unreliable and difficult to work with, and component vendors would write bad code that could bring the whole application down. Although, for the most part, the third-party components were the problem, Microsoft did not help: COM was so badly implemented in the first place that anyone writing a COM component would invariably get things wrong, and the sheer complexity of the COM specification sealed its fate.

.NET, with its managed code environment, assembly management and standard component library, was clearly the headline wish list for VB7. So really, .NET is an evolution of VB – a significant one, granted, but an evolution nonetheless – and this was more obvious in the early versions of the .NET framework back in 2002. When VB6 was at its height, I remember its place was within the enterprise, where corporate organisations developed their own internal applications; there were very few off-the-shelf products around that were developed using Visual Basic 6.

Reviewing the marketing messages above, it is pretty clear that Microsoft’s intention, initially at least, was to migrate their VB6 user community to .NET. Their aim was the corporate in-house development teams; the ISV community, though also targeted, only really adopted .NET later on as it started to improve. The corporate in-house development market is a real sweet spot for .NET, IMHO, and it is probably the best environment for in-house corporate desktop and web application development teams, with the only real competition being Java.

Perhaps controversially, though, I don’t believe that .NET is the right software stack for ISVs to choose when developing the core of a professional, off-the-shelf, supportable software product – choosing .NET is far more advantageous to the ISV than it is to the customers that end up using the product!

Why? Well, I have already exceeded my word count on this one, so if I have not managed to send you to sleep already, I will explain what I mean in Part 2…