Mar 02

So tell me team… “How Long Will That Take?”

I was inspired to write this blog post in response to a post I came across today on LinkedIn about sizing software projects (link below). Sizing software projects is something that almost everyone gets wrong; it is hard, and an accurate estimate is almost impossible. Why is that?

Well, apart from scope creep, budget changes and all the usual business variables mentioned by Patrick in his blog post, developers and product teams will never be consistent in their output – not even if you average it out over a long period, which is what Scrum and other agile methodologies suggest when trying to establish velocity. That simply does not work; it is a fallacy. Writing software is an iterative and creative process, so “how someone feels” will change output. I am not talking about how someone feels about the project or the work; I am talking about how someone feels about their cat dying, or their wife being pregnant, or political changes, or the weather, or the office banter, or how well or unwell they feel today – in fact, “life” guarantees this. So I am going to be a bit outrageous here and suggest an alternative way of thinking about this. Let us start by asking the most important question: “What is the point of estimating?” There are only two possible answers to that question…

1. You are going to undertake some work which you will charge your client for so you need to know what to charge them.

The only possible way you can give your client a fixed price for work that is essentially a creative process is by substantially over-pricing the estimate, building plenty of fat into the deal to give yourself the best chance of making a profit at the end of it. If you think you can ask a team of developers how long it is going to take so you can add a “fair” markup, you are deluded. The best option you have in this scenario is to work backwards. First understand the client’s need at a high level, then establish the value your customer would get from the thing you deliver, and then put a price on it. You are looking for the best possible price the customer is willing to pay; at this point you should not be asking “how much will it cost?” but “how much is the customer prepared to pay?”. Once you have a number you can work with your developers, but instead of asking “how long will it take?” you are asking “can it be done in this timeframe?”. That may seem a subtle difference, but it is actually huge, because in answering it your developers take “ownership” of the delivery commitment, and that is what you need to stand any chance of being successful. The risk you are taking is on your team, not the project – if your team succeeds then you and your business succeed; if they fail, so do you.

2. Your organisation wants to know how much this new software thing is going to cost and how long it will take, so they can budget and control any cost overrun.

The reason to budget is that managers and finance people (and the people who own the actual money being spent) generally need to *know* how much something costs in order to feel they are doing their job. This thinking comes from an age when output was quantifiable (manufacturing, for example), but creative IP output is much harder to quantify because so many of the variables are outside of your control.

Think about this: you wake up one day with a great idea for a piece of software that will change the world; you believe it is going to make you your fortune. You are so confident that you leave your day job, set up your business, and on your first day you sit down and start to change the world – what is the first thing you are going to do?

I am going to bet that you will NOT crack open Excel and start on a time estimate and a budget – instead you will most likely start making your software. So you get on with it. Now project forwards: you have created your software and you start to sell it. Things go well – in fact so well that you have to hire a manager or two, and then you raise some funding to take your idea global. Now something important happens. Instead of you spending your time making the thing you believe in, the people who invested the money (which may well include yourself) want to protect that investment, so they put management controls in place. When you are asked “To get this thing to the next level, what do we need to do?” and you answer “We need to build X, Y and Z”, the dreaded question comes: “How long will that take?”, which roughly translates to “How much will that cost?”. The people asking are trying to protect cash and de-risk the investment; they do not believe in the thing you are building in the way that you do. To them the thing is just a means to an end – profit (which, by the way, is a good thing).

Going back to that first day: if you had tried to budget and determined it was going to take you six months before you could sell your first copy, and after six months you realised you were not even halfway there and had another 12 months to go – would you stop? You would make sure you could pay the bills and survive, of course, but if you decided to stop it would not be because of your budget. You would stop because, with hindsight, you no longer believed the idea was as good as you first thought – otherwise you would plough on regardless, right?

So back to the boardroom, and the “How long will it take?” question. The answer should be: “I have no idea – how important is it?” Because either it is important and it has to get done, or it is not that important.

You would be a little more committal than that in practice, but you get the idea. Assume that an acceptable level of estimating effort is around 25% of the overall development effort (which has been my experience in software development). If you have a thing that needs to get done because it is strategically important for the business to flourish, then how long it takes is irrelevant – it is either strategically important and it gets done, or it is not. And if it “just has to get done”, what on earth are you estimating for? Just get on with it, and spend that 25% on making the thing you are trying to make – just like you did in the first six months of your enterprise.

You need to ask the same questions: how important is it, and what is it worth to the business? These are the questions the people trying to de-risk do not want you to ask, because they will find them as difficult to answer as you find “how long will it take?”. Of course, for trivial stuff like defects and general maintenance or tactical incremental development this does not really apply, but for big projects of strategic importance “how long will it take?” is a nonsense question, because any answer you get will be either fictitious or grossly over-estimated, and both of those are bad.

If you want to get something strategically important created, hire a great team and empower them to get on with it. If you are making them spend their time telling you how much it will cost to develop instead of developing it, then you are failing – not them. As a manager, entrepreneur, director or investor, hire software developers to do what they do best: make software. It is your job to take and manage the investment risk. If the team fails, then you either hired the wrong team or you did not manage the business well enough to sustain the effort required to make it happen. Asking them for an estimate is just a way of getting a stick to beat and blame them with when things are not going well.

I have been managing (arguably very loosely) software development projects and a software business for the best part of 20 years, and I have learned a few things along the way. Perhaps more importantly, I have been doing this largely while investing my own money, so I think I know both sides of the “How long will it take?” question very well.

The article I responded to

If you found this post useful or interesting, please consider giving me a tib.
Tibs are pocket-change for the internet™.
Jan 15

Microchip PIC chips could have been the Power Behind Arduino!!!

Before I get underway: this article is about Microchip PIC microcontrollers. Please understand, I do not want to get into a flame war with Atmel, MSP430 or other fanboys. My personal preference has been PICs for many years; that is a statement of personal preference. I am not saying that PICs are better than anything else, I am just saying I like them better – so please don’t waste your time trying to convince me otherwise. I have evaluated most other platforms numerous times already, so before you suggest I look at your XYZ platform of choice, save your time – the odds are good I have already done so and I am still using PICs.

OK, full RANT mode enabled…

As I understand it, Microchip are in the silicon business selling microcontrollers – and actually Microchip make some really awesome parts. I am guessing here, but I suspect they want to sell lots and lots of those awesome parts, right? So why do they suppress their developer community with crippled compiler tools unless you pay large $$$? As a silicon maker they *NEED* to provide tools that make it viable for a developer community to use their parts. Charging for the tools is ridiculous – it is not as if you can buy Microchip tools and use them to develop for other platforms, so the value of these tools is entirely intrinsic to Microchip’s own value proposition. It might work if you have the whole market wrapped up, but the microcontroller market is awash with other great parts and free, un-crippled tools.

A real step forward for Microchip was the introduction of the MPLAB-X IDE – while not perfect, it is infinitely superior to the now-discontinued MPLAB 8 and older IDEs, which were, err, laughable compared with other contemporary tools. MPLAB-X has a lot going for it: it runs on multiple platforms (Windows, Mac and Linux) and it mostly works very well. I have been a user of MPLAB-X from day one, and while the migration was a bit of a pain and the earlier versions had a few odd quirks, every update of the IDE has just gotten better and better. I make software products in my day job so I know what it takes, and to the product manager(s) and team that developed MPLAB-X, I salute you for a job well done.

Now of course the IDE alone is not enough; you also need a good compiler, and for Microchip parts there are now basically three: XC8, XC16 and XC32. As I understand it, these are based on the HI-TECH PRO compilers that Microchip acquired when they bought HI-TECH in 2009. Since that acquisition they have been slowly consolidating the compilers and obsoleting the old MPLAB C compilers. Microchip getting these tools is a very good thing, because they needed something better than they had – but they had to buy the Australian company HI-TECH Software to get them. It would appear they could not develop them themselves, so acquiring them was the logical thing to do. I can only speculate that the purchase of HI-TECH was justified, internally and/or to investors, on the promise of generating incremental revenue from the tools – otherwise why bother buying them? Any sound investment would be backed by a revenue plan, and the easiest way to do that would be to say: in the next X years we can sell Y compilers at Z dollars and show a return on the investment. Can you imagine an investor saying yes to “Let’s buy HI-TECH for $20M (I just made that number up) so we can refocus their efforts on Microchip parts only, and then give these really great compilers and libraries away!”? Any sensible investor or finance person would ask “why would we do that?” or “where is the return that justifies the investment?”.
But was expanding revenue the *real* reason for Microchip buying HI-TECH, or was there an undercurrent of need for the quality the HI-TECH compilers offered over Microchip’s own? It was pretty clear that Microchip themselves were way behind – but that storyline would not go down well with investors. Imagine suggesting “we need to buy HI-TECH because they are way ahead of us and we cannot compete”; anyone looking at that from a financial point of view would probably not understand why having the tools mattered without some financial rationale showing on paper that the investment would yield a return.

Maybe Microchip bought HI-TECH as a strategic move to provide better tools for their parts, but I assume there must have been some internal ROI commitment. Why? Because Microchip do have a very clear commercial strategy around their tools: they provide free compilers, but the free compilers are crippled and generate unoptimised code – in some cases the generated code has junk inserted, and the optimised version simply removes that junk! I have also read that you can hack the settings to pass different command-line options and re-optimise the produced code even on the free version, because at the end of the day it is just GCC behind the scenes. However, doing so may well revoke your licensed right to use their libraries.

So then, are Microchip in the tools business? Absolutely not. In a letter from Microchip to its customers after the acquisition of HI-TECH [link], they stated that “we will focus our energies exclusively on Microchip related products”, which meant dropping future development of the tools for non-Microchip parts that HI-TECH used to support. As an independent provider, HI-TECH could easily justify selling their tools for money: their value proposition was compilers far better than the “below-par” ones put out by Microchip, and their independence gave an implicit justification for charging. As a result, Microchip customers had a choice – they could get the crappy compiler from Microchip or buy a far superior one from HI-TECH. It all made perfect sense. You could argue that HI-TECH only had market share in the first place because the Microchip tools were poor enough to leave a gap for someone to fill. Think about it: if Microchip had made the best tools from day one, they would have had the market share, companies like HI-TECH would not have had a market opportunity, and Microchip would never have felt compelled to buy HI-TECH to regain ground – and possibly some credibility – in the market. I would guess that Microchip’s early strategy included “let the partners and third parties make the tools, we will focus on silicon”, which was probably fine at the time, but the world moved on and compilers and tools suddenly became a strategically important element of Microchip’s go-to-market execution.

OK, so Microchip now own the HI-TECH compilers – why should they not charge for them? HI-TECH did, and customers were prepared to pay, so why not Microchip? Well, I think there is a very good reason: Microchip NEED to make tools so that the EE community can design their parts into products that go to market. As a separate company, HI-TECH competed with Microchip’s compilers; now that Microchip own the HI-TECH compilers there is no competition, and if we agree that Microchip *MUST* make compilers to support their parts, then they cannot really justify selling them the way HI-TECH could as an independent company. This is especially true given that Microchip decided to obsolete their own compilers, the very ones the HI-TECH products previously competed with – no doubt partly to reduce the cost (and perhaps the team size) of maintaining two codebases, and most likely to give existing customers of the old compilers a solution to those outstanding “old compiler” issues. So they have ended up with a model of giving away limited free editions and selling unrestricted versions to customers willing to pay. On the face of it that is a reasonable strategy – but it alienates the very people they need to be passionate about their microcontroller products.

I have no idea what revenue Microchip derives from their compiler tools. I can speculate that their main revenue is from the sale of silicon, which probably makes the tools revenue look insignificant. Add to that the undesirable cost in time and effort of maintaining and administering the licensed versions – dealing with the “my licence does not work”, “I have changed my network card and now the licence is invalid” and “I need to upgrade from this and downgrade from that” support questions, and so on. This must be a drain on the company; the energy going into running the compiler tools as a commercial operation must, at the very least, be a distraction from the core business.

Microchip surely want as many people as possible designing their parts into products, but their model alienates individual developers, and this matters: even on huge projects with big budgets, the developers and engineers have a lot of say in the BOM and the preferred parts. Any good engineer is going to use parts that they know (and perhaps even love), and any effective manager is going to go with the hearts and minds of their engineers – that is how you get the best out of your teams. The idea that big-budget projects will not care about spending $1,000 on a tool is flawed; they will care more than you think. Charging for compilers and libraries is just another barrier to entry for Microchip – and that matters a lot.

So where is the evidence that open and free tools matter? Well, let’s have a look at Arduino. You cannot help but notice that the solution to almost every project that needs a microcontroller these days seems to be an Arduino – and that platform has been built around Atmel parts, not Microchip parts. What happened here? With Microchip parts you have much more choice, and the on-board peripherals are generally broader in scope with more options and capabilities. For the kinds of things Arduinos get used for, Microchip parts should have been the more obvious choice, but Atmel parts were used instead – why was that?

The success of the Arduino platform is undeniable – put Arduino in your latest development product’s name and it is pretty much a foregone conclusion that you are going to sell it. Just look at the frenzy among the component distributors and the Chinese dev-board makers all getting in on the Arduino act. Why is this? The Arduino platform has made microcontrollers accessible to the masses – and I don’t mean easy to buy, I mean easy to use for people who would otherwise not be able to set up a complex development environment, toolset and language. The Arduino designers also removed the need for a special programmer/debugger: a simple USB port and a boot-loader mean that with just a board, a USB cable and a simple development environment you are up and running, which is really excellent. You are not going to do real-time data processing or high-speed control systems with an Arduino because of its hardware abstraction, but for many other things the Arduino is more than good enough, and it is only a matter of time before Arduino code and architectures start making it into commercial products, if they have not already done so. There is no doubt that the success of the Arduino platform has had a positive impact on Atmel’s sales and revenues.

I had a feeling that because Atmel used an open toolchain based on the GCC compiler and open-source libraries, the openness and accessibility of that toolchain probably drove the adoption decision when the Arduino team started work on their programming environment – but that was pure speculation on my part, and it was bugging me, so I thought I would try to find out more.

Now, this is the part where the product team, executives and board at Microchip should pay very close attention. I made contact with David Cuartielles, who is an Assistant Professor at Malmö University in Sweden but, more relevantly here, one of the co-founders of the original Arduino project. I wrote to David and asked him…

“I am curious to know what drove the adoption of the Atmel micro controllers for the Arduino platform? I ask that in the context of knowing PIC micro controllers and wondering with the rich on-board peripherals of the PIC family which would have been useful in the Arduino platform why you chose Atmel devices.”

David was very gracious and responded within a couple of hours with the following statement:

“The decision was simple, despite the fact that -back in 2005- there was no Atmel chip with USB on board unlike the 18F family from Microchip that had native USB on through hole chips, the Atmel compiler was not only free, but open source, what allowed migrating it to all main operating systems at no expense. Cross-platform cross-compilation was key for us, much more than on board peripherals.”

On that response alone, Microchip should pay very close attention. The 18F PDIP series microcontroller with on-board USB was the obvious choice for the Arduino platform, and had the tooling strategy been right, the entire Arduino movement today could well have been powered by Microchip parts instead of Atmel parts – imagine what that would have done for Microchip’s silicon sales!!! The executive team at Microchip need to learn from this: the availability of tools and the enablement of your developer community matter – a lot. In fact, a lot more than your commercial strategy around tooling suggests you believe.

I also found this video of Bob Martin at Atmel stating pretty much the same thing.

So back to Microchip – here is a practical example of what I mean. In a project I am working on using a PIC32, I thought it would be nice to structure some of the code in C++, but I found that in order to use the C++ features of the free XC32 compiler I had to install a “free licence”, which required me not only to register my personal details on the Microchip web site but also to tie the request to the MAC address of my computer. That makes me suspicious: there is only one reason to build a mechanism like this, and that is to control access to certain functions for commercial reasons. I read on a thread in the Microchip forums that this is apparently to let Microchip management assess the demand for C++ so they can decide whether to put more resources into developing C++ features – a stock corporate response, and I for one don’t buy it. I would say it is more likely that Microchip want to assess a commercial opportunity and collect contact information for the same reasons, perhaps even to feed a marketing and sales pipeline. What’s worse, after following all the instructions I was still getting compiler errors stating that my C++ licence had not been activated. Posting a request for help on the Microchip forum has now resolved this, but it was painful – it should have just worked. Way to go, Microchip. And now, because of the optimisation issues, I am sitting here wondering if I should take the initiative and start looking at a Cortex-M3 or an Atmel AVR32. I wonder how many thoughts like this are invoked in Microchip customers and developers by stupid tooling issues like this?

You do not need market research to know that for 32-bit microcontrollers C++ is a must-have in the competitive landscape. Not having it leaves Microchip behind the curve again – this is not an extra PRO feature, it should be there from the off. What are you guys thinking?! This position is made even worse by the fact that the XC32 toolchain is built broadly around GCC, which is already a very good C++ compiler – why restrict access to this capability in your build tool? If Microchip wanted to know how many people would use the C++ compiler, all they needed to do was ask the community, perform a simple call-home on installation, or just apply common sense. All routes lead to the same answer, and none of them involve collecting people’s personal information for marketing purposes. The whole approach is wrongheaded.

This raises another question. If Microchip are building their compilers around GCC, which is an open-source GPL project, how do they justify charging a licence fee for un-crippled versions? The terms of the GPL require Microchip to make the source code available for the compilers and any derived works, so any crippling could simply be removed by downloading the source and stripping out the crippling behaviour. It is clear, however, that Microchip keep all the headers proprietary and non-sharable, effectively closing out any competing open-source projects. That is a very carefully crafted closed-source strategy that takes full advantage of open-source initiatives such as GCC – not technically in breach of the GPL licence terms, but a one-sided grab from the open-source community, and it is not playing nice. Bad Microchip…

Late in the game, Microchip are trying to work their way into the Arduino customer base by supporting the chipKIT initiative. It is rumoured that chipKIT was actually started by Microchip to fight their way back into the Arduino space and take advantage of the buzz and demand for Arduino-based tools – I have no evidence to back that up, but it seems likely. Microchip and Digilent have brought out a 32-bit PIC32-based solution: two boards called the Uno32 and Max32, both positioned by Microchip as “a 32-bit solution for the Arduino community”. These are meant to be software- and hardware-compatible, although there are the inevitable incompatibilities for both shields and software – and, oddly, they are priced slightly cheaper than their Arduino counterparts. Funny, that 🙂

Here is an interview with Ian Lesnet and the team at Microchip talking about the introduction of the Microchip-based, Arduino-compatible solution.

There is also a great follow-up article with lots of community comment, all basically saying the same thing.

Microchip have a real uphill battle in this space, with the ARM Cortex-M3-based Arduino Due bringing an *official* 32-bit solution to the Arduino community. Despite the chipKIT having an allegedly “open source” compiler, it is still riddled with closed-source bits, and despite the Uno32 and Max32 heavily advertising features like Ethernet and USB (areas where Microchip are known for great hardware and software implementations), these are only available on those platforms if you revert to Microchip’s proprietary tools and libraries. The advertised benefits and the actual benefits to the Arduino community are different – and that is smelly too…

OK, I am nearing the end of my rant, and I am clearly complaining about the crippling of Microchip-provided compilers. But I genuinely believe Microchip could and should do better, and for some reason – perhaps a little brand loyalty – I actually care: I like the company, the parts and the tools they make.

One of my pet hates in business is listening to someone rant about how bad something is without offering any suggestions for improvement. So, for what it is worth, if I were in charge of product strategy at Microchip I would do the following:

  • I would hand the compilers over to the team or person in charge that built the MPLAB-X IDE, and have *ALL* compilers pre-configured and installed with the IDE right out of the box, removing the need for users to set up individual compilers and settings. Note to Microchip: IDE stands for Integrated Development ENVIRONMENT, and any development ENVIRONMENT is incomplete in the absence of a compiler.
  • I would remove all crippling features from all compilers, so that every developer has what they need to build great software for projects based on Microchip parts.
  • I would charge for priority support, probably at rates comparable to what is currently charged for the PRO editions of the compilers. That way, companies with big budgets can pay for the support they need and get additional value from Microchip tools, while Microchip derives its desired PRO-level revenue stream without crippling its developer community.
  • I would provide the source code for all libraries. This is an absolute must for teams developing critical applications in medical, defence and other fields that require full audit and code-review capability; by not doing so you are restricting potential market adoption.
  • I would stop treating the compilers as a product revenue stream. I would move their development to a “cost of sale” line on the P&L, set a budget that keeps you ahead of the curve, and put them under the broader marketing or sales-support banner – they are there to help sell silicon, and good tools create competitive advantage. I would move the tools developers completely away from commercial concerns and get them 100% focused on making the tools better than anything else out there.
  • I would use my new-found open strategy for tooling to both contribute to, and fight for my share of, the now huge Arduino market. The chipKIT initiative is a start, but it is very hard to make progress unless the tool and library strategy is addressed.

Of course, this is all my opinion, with speculation and assumptions thrown in, but there is some real evidence too. I felt strongly enough about it to put this blog post together. I really like Microchip parts, and I can even live with the stupid strategy they seem to be pursuing with their tools, but I cannot help feeling I would like to see Microchip doing better and taking charge in a market they seem to be losing their grip on. There was a time, a few years ago, when programming a microcontroller and PIC were synonymous – not any more; it would seem that Arduino now holds that place. All that said, I am not saying I do not want to see the Arduino team and product continue to prosper – I do. The founders and supporters of this initiative, as well as Atmel, have done an amazing job of demystifying embedded electronics and fuelling the maker revolution: a superb demonstration of how a great strategy can change the world.

If I have any facts (and I said facts, not opinion) wrong, I would welcome being corrected – but as a reminder, I will ignore any comments along the lines of “Atmel is better than PIC is better than MSP430” etc. That was not the goal of this article.

Please leave your comments on the article

Jan 04

Apple iMac 27″ Dark Side Screen Failure – The Manufacturing Fault Apple Will NOT Admit!!

I have been an Apple desktop user pretty much ever since they moved to the Intel architecture, and I have been pretty pleased with my Apple computers. Unlike my bad experience with Windows and PCs, the iMac and OS X have been really great for the sorts of things I do all the time. More recently, though, the quality of the OS X updates has been less than perfect, and it is starting to feel a bit like Microsoft all over again, with regular OS updates that need a re-boot – anyway, I digress…!

One of the computers I use at work is an iMac 27″, and a few months ago the screen backlight failed: first it flickered, and then half the screen went dark. So I called Apple and explained the problem, and because my computer was a few months out of warranty they said my only option was to take the computer into an Apple store, where they would fix it but I would have to pay for the repair. I decided not to do that because it would mean being without the computer, which I need at work, so I decided to live with the half-dark screen.

While looking around the Apple support community forums I recently found out that, although Apple are staying tight-lipped about this problem, it would appear that *a lot* of people are having this exact same problem, and Apple is charging £400+ a pop to repair it, by replacing the entire screen panel it would seem. The problem has been dubbed "The Dark Side Screen Problem"…

Over on the Apple support community, "Kaos2K" found the actual root cause of the problem, which is in fact a manufacturing fault, although Apple has been refusing to admit it so far. Heat from the backlight/screen seems to cause a surface-mount 6-pin connector to break away from the board it's soldered to; the only explanation for this is a poor solder joint at the time of manufacture. The thread that describes the problem can be seen here: –

More recently Apple have been sued over this problem by one of their customers:

I accept that the fault probably lies with LG, who make the actual panel, but Apple should still be fighting the corner on behalf of their customers. I anticipate Apple losing the case and likely having to compensate everyone who has had this problem.

Using the information “Kaos2K” posted I decided to make a video on fixing my iMac.

Having undertaken this repair I have absolutely no doubt that this fault is down to a manufacturing defect relating to the quality and specification of the soldering of the 6-pin connector to the LED strip used as part of the backlight; there is no way that connector should simply "fall off" as it seems to be doing. Given Apple is the biggest technology company in the world, and they are so very proud of their hardware (as they should be), it is an utter disgrace that they have not recognised this problem and stood by their customers. With so much in the news lately about how much cash Apple have, it's a shame that in a position like that they have decided not to stand up and take responsibility. Shame on you Apple; it's stuff like this that will drive your loyal customers back to Microsoft….

Oct 27

Update – DIY HP/Agilent 53131A 010 High Stability Timebase Option PCB’s Available

Following on from the project to build a DIY OCXO upgrade option for my HP/Agilent 53131A Frequency Counter, I had a large number of requests to make the PCBs available so others can build their own. I now have some PCBs and am making them available for sale. I will also be making some fully assembled and tested/verified boards, including an OCXO, a ready-made cable and mounting stand-offs. There will only be a limited number of these, so if you are interested let me know; first come, first served, and when they are gone they are gone.

Now, someone made a comment on Hackaday stating that the reason these OCXOs are on eBay is because they are no good. Well, I can understand why one could draw that conclusion, but having now played with in excess of 50 of these OCXOs I can tell you that they all pretty much violently agree with each other, and they also agree with the Rubidium frequency standard I have. Given they are from different sources and all free-running, I am pretty comfortable they are good and usable-quality devices. I read a lot on the net about "burn-in time", and the general consensus seems to be that the longer you age and heat a crystal oscillator, the more stable (in terms of drift) it becomes. Now, I don't know how true that is, but if it is true then, by definition, using recovered OCXOs must actually be a good thing. Of course, if you feel the need to pay a couple of hundred dollars for a new OCXO then you can do that, but I am pretty sure it does NOT guarantee you any better performance; it just buys you the right to be compensated if you happen to find it does not meet the performance specifications quoted by the manufacturer, and it might make you feel a little more confident. Anyway, I guess the results speak for themselves, and for a home lab these OCXOs are more than good enough I think.

Here is a short video showing the various configurations built and working, as well as a quick overview of the PCB itself and the configuration options.

The Schematic



This is the revision "1F" board. When I designed the board, I picked a few of the OCXO types that are available on the second-hand (salvaged) market and designed it to accommodate them. My original plan was to support four types of OCXO: the Oscilloquartz 8663-XS that I used in my own counter, a Datum 105243-002, an Isotemp OCXO-131 and a Trimble 34310-T(2). However, when I laid the board out I did not have the Trimble 34310-T(2) device to hand and made a (wrong) assumption about the footprint, which is a bad schoolboy error I know. The upshot is that while this board has a footprint and markings for the Trimble 34310-T(2), it does not fit; the pins are about 1mm out and therefore the board IS NOT suitable for that particular device. It is feasible to drill holes, as the pads are just about big enough, but assuming there is some demand for these boards I will order a second batch with the footprint corrected.


The following OCXO's are known (and shown in the video) to work. Any others might work, but that's up to you to verify 🙂

  • Oscilloquartz 8663-XS
  • Isotemp OCXO131-100
  • Isotemp OCXO131-191
  • Datum 105243-002

Oscilloquartz 8663-XS

This is the first OCXO I used; it's more expensive than most and seems good quality and very precise. It works off 12V and provides a nice clean sine-wave output. The voltage control input works on the 0-10V range. I could not find any data on the XS version, so I don't know what its performance specifications are. I have included a download for this device, but it does not cover the XS version specifically. The data sheet states that this device requires 0-10V for the frequency control, and testing seems to bear this out too. However, to get the counter to calibrate properly I selected the 0-5V range on the DAC.

[Photos: Oscilloquartz 8663 and its pinout]


Datum 105243-002

This was another OCXO that I included on the board for Andy (who built the first 3GHz pre-scaler option board); again, a nice device, but it's specified to work on 24V, and the oven and oscillator have separate power connections. This is the only OCXO that has a built-in trim pot, which allows you to set the window for the voltage control range. Although this device is specified to run at 24V, I tested its operation at 12V and it works perfectly. I did put a four-pin plug on the board to allow you to give it a separate 24V supply, but it really does not need it. I do not have a data sheet for this device, but it outputs a square wave, which is not as nice as having a sine wave on the output; for this application, though, it does not matter at all. The frequency control input on this device is very sensitive, and while it works without problem at +/-10V, it's centred around 2V and 0-5V works just fine.

[Photos: Datum 105243-002 and its pinout]


Isotemp OCXO131-1xx

This is a small, compact device and comes in two variants: the OCXO131-191, which is a 12V version, and the OCXO131-100, which is a 5V version. While both share the same PCB footprint and pinout, the case for the 5V variant is taller at 18mm high, whereas the 12V version is a compact 11mm high. These also provide a square-wave output. Both versions require 0-5V on the frequency control input, although to get the counter to calibrate properly I had to configure it for the +/-5V option.

[Photos: Isotemp OCXO131 variants and their pinout]


Kit of Parts

Some people asked me about providing a kit of parts. I am not really geared up to do that in a time-efficient way so apart from the bare PCB, I will not be able to provide individual parts or components.

Fully Built and Tested Option

I am planning to have a limited number of these option boards fully built and tested, with all the components and the OCXO pre-installed, as well as a suitable cable and mounting stand-offs, ready to install and use in your HP 531xx counter. I do not have an exact cost for these as I still have to source some of the components, but they are likely to be in the £75 to £95 range. I will post an update when I have these available, which should be in about 2-3 weeks' time.

UPDATE: I now have some fully assembled and tested boards, complete with the cable and mounting parts needed for a simple plug-and-play 53100-series counter upgrade. (The boards will be black; the board shown in the photo was the result of a screw-up by the PCB supplier.)



Bill of Materials

RefDes Part No Notes
C1, C2, C4, C6, C7, C8, C10, C11 100nF 0805 SMD Jellybean part
C12 1uF 0805 SMD Jellybean part – Could probably use 100nF with no problem
C3, C5, C9 47uF 16v Electrolytic. Farnell Part No: 197-3306
J2 IDC2X8M Farnell Part No: 231-0066
L1, L2, L5 100uH Farnell Part No: 935-8056
L3, L4 1uH Farnell Part No: 221-5638
R1, R2 100R 0805 SMD Jellybean part
R3, R4 220R 0805 SMD Jellybean part
R8 220R 0805 SMD Jellybean part – This is not needed unless you want to mess with the comparator bias
U1 LM361M Very fast differential comparator
U4 ADR4550 You can use a REF02 or numerous other 5V reference parts here, the pinout is pretty standard
U5 AD7243AR This is the most expensive and hard to get part

Bare PCB’s Available

UPDATE: After my first trip to the post office today I have had to amend the pricing because of the outrageous postal charges; I have been told that I cannot send something that's not made of paper as a letter! That means that for my friends outside of the UK the postage is more expensive, sorry about that. The good news is that the postage does not go up with the number of boards. I hope you understand.

UK Orders

For the sake of simplicity I have opted for a flat price of £10 each, which includes postage in the UK. If you want more than two boards please contact me directly via e-mail because the weight starts to impact postage.

 1 x HP/Agilent 53131A-010/GS OCXO Option Bare PCB Rev 1F inc. Postage £10
 2 x HP/Agilent 53131A-010/GS OCXO Option Bare PCB’s Rev 1F inc. Postage £20
 HP/Agilent 53131A-010/GS Rev 1G Fully Built and Tested with Trimble 34310-T OCXO, Cable and Mounting Kit £95.00 + £10.00 P&P

Non-UK Orders

If you are outside of the UK it costs me £3.50 for a small parcel so I have opted for a flat price of £9 for each board – but postage is on top of that. If you want more than three boards please contact me directly via e-mail.

 1 x HP/Agilent 53131A-010/GS OCXO Option Bare PCB Rev 1F £12.50
 2 x HP/Agilent 53131A-010/GS OCXO Option Bare PCB’s Rev 1F £21.50
 HP/Agilent 53131A-010/GS Rev 1G Fully Built and Tested with Trimble 34310-T OCXO, Cable and Mounting Kit £95.00 + £15 shipping (tracked and signed for)
Oct 17

Apple 27 iMac Teardown, SSD Hack and 2TB Upgrade

My 2010 27″ 1TB iMac ran out of disk space because of the video content I was creating. Despite Apple's best efforts to limit my upgrade options, I managed to upgrade it to include a 250GB Solid State Drive (SSD) plus a 2TB Hard Disk Drive (HDD) and get my system back to its last working state, but with lots more disk space and much faster boot and application-load times. There were plenty of gotchas along the way, and I had to work out and make a few ad-hoc adaptations. I cover new disk preparation, computer tear-down, SATA adaptation, the HDD temperature sensor hack, data moving, and data and account recovery.

This is the longest video and the longest blog article I have written to date, and that's reflective of how much there is to cover; it is not a simple task.

I removed a Seagate Barracuda 7200 1TB drive which is still in working condition, despite being one of the drives recalled by Apple. They offered to replace the drive some time back, but I refused as it was inconvenient to be without my computer for a week, let alone the hassle of taking it to and collecting it from an Apple store.

I installed the following components.

  • Samsung SSD 840 Series 250GB
  • Western Digital Green 2TB – 6 Gb/s 64Mb Cache
  • SATA 90 degree Cable
  • SATA Power Splitter

The 2TB drive was a perfect physical fit but did not have a compatible on-board temperature sensor, so I had to fashion one from a transistor: I used a TO-92 2N2907 and used the base-emitter junction as a silicon temperature sensor. The SSD drive had to be installed with double-sided tape, but being so small and light this was the best solution. The whole modification to the computer is 100% reversible.

WARNING: This is an EPIC video at 1hr 20mins so only watch if you are really interested and have some time on your hands 🙂

The following outlines the various commands and steps I took to make this work. I am not saying this is either the only way or the right way; it's the way I used and it worked for me. The most important thing I wanted to achieve was to preserve all of my data and never have a single point of failure for my data throughout the entire procedure.

Before we do anything we must ensure that…

  • The computer is in a working state.
  • We have a *FULL* backup of the system; in my case this was on an Apple Time Capsule.
  • The current system has *all* software and OS updates applied.
  • We are logged in, with administrative rights, on the account we normally use.

Tools Used

Prepare SSD Drive

  1. Using the drive copy station, connect the SSD drive to the computer via USB
  2. Choose “Ignore” when you are prompted to initialise the disk you just connected.
  3. Using the Applications/Utilities/Disk Utility create a single partition on the drive – use all default options, call the volume “OS Boot” or other name you will recognise or makes sense to you
  4. Run the OSX Mountain Lion Installer application
  5. When shown the drive to install on, press the Show All Disks button and select the drive you just created a partition on
  6. During the install follow all instructions as if you are doing a new OS install, once complete it will want to re-boot, let it do what it wants. Eventually it will boot to your new drive
  7. During the installation it will want you to create the administration account, call this “Admin” or something else you do not already have on your current system.
  8. Don’t worry, your existing drive will still be intact. Once you have booted from the SSD drive and verified its OK, shut down the computer unplug the SSD and re-start, it will boot to the original drive once again.

Prepare New HDD Drive

  1. Connect the new drive to the USB using the drive copy station
  2. Choose “Ignore” when you are prompted to initialise the disk you just connected
  3. Using the Applications/Utilities/Disk Utility create a single partition on the drive – use all default options, call the volume “Users”
  4. You should now see your new 2TB disk in finder, the disk should be empty.

Now we need to create a temporary account to unlock your profile so we can copy your home folder to the new drive.

Create a temporary Admin Account

  1. Open System Preferences/Users & Groups
  2. Unlock the panel if required
  3. Add a new user, in the New Account field select Administrator
  4. Log out of the system, and log back in using this new account you have created

At this point you are ready to make a copy of your user profile(s) from your existing 1TB disk to your new 2TB disk. Open a new terminal window (Applications/Utilities/Terminal). You can list the accounts on your system by issuing the following command: –

ls -li /Users

For each account on your system that you want to re-locate to the new 2TB drive you should issue the following command: –

sudo ditto -v /Users/<account_name> /Volumes/Users/<account_name>

You should obviously replace the <account_name> part with the account name. In my case the command was: –

sudo ditto -v /Users/gerrysweeney /Volumes/Users/gerrysweeney
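Before tearing the machine apart, it is worth spot-checking that each copy is complete. This is my own suggestion rather than a step from the original procedure: a minimal sketch using `diff`, which prints nothing when the file contents of the two trees match (it does not verify permissions or extended attributes).

```shell
# check_copy: recursively compare an original home folder with its copy.
# No output means the file contents of the two trees match.
check_copy() {
  diff -rq "$1" "$2"
}

# Example:
# check_copy /Users/gerrysweeney /Volumes/Users/gerrysweeney
```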

NOTE: You do NOT want to make a copy of the temporary admin account you created earlier that you are currently logged in as…
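If you have several accounts to move, a small shell function can generate the `ditto` commands for you. This is just a sketch of the approach above: the name "tempadmin" stands in for whatever you called your temporary admin account, and it prints the commands (a dry run) so you can check the list before removing the leading `echo`.

```shell
# copy_homes: print (dry run) the ditto command for every home folder
# under $1 that should be copied to $2, skipping the names listed in $3.
# Remove the "echo" to actually perform the copies.
copy_homes() {
  src=$1; dst=$2; skip_list=$3
  for dir in "$src"/*; do
    [ -d "$dir" ] || continue
    name=$(basename "$dir")
    for s in $skip_list; do
      [ "$name" = "$s" ] && continue 2   # skip this account
    done
    echo sudo ditto -v "$dir" "$dst/$name"
  done
}

# Example: skip the Shared folder and the temporary "tempadmin" account
copy_homes /Users /Volumes/Users "Shared tempadmin"
```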

Do the hardware hacking

At this point you are ready to start the disk transplant process. You want to do the following things in order.

  1. Shutdown the computer cleanly
  2. Unplug all cables
  3. Take the computer apart and remove the 1TB hard disk
  4. Transfer the mounting bracket and studs from the 1TB drive to the new 2TB drive
  5. Put the old 1TB drive into an anti-static bag and put it in a safe place – this is your fast recovery/back-out plan
  6. Remove the optical drive, screen backlight supply and main power supply boards, and then the main board – see video
  7. Install the new SATA cable for the SSD drive – either use the third SATA port or remove optical drive SATA cable and use that port – depends on your iMac model
  8. Replace the main board and PSU boards, taking a lot of care routing the new and existing cables; there is very little room for error here
  9. Install the SSD drive using double-sided sticky pads – make sure it's well fixed; you do not want it coming loose as it will interfere with the airflow that cools the GPU – this will kill your iMac!!
  10. Re-install the optical drive (with or without being connected), the drive carrier acts as part of the air ducting…
  11. Install the new 2TB drive
  12. Fashion a suitable temperature sensor for the new HDD, I simply used a 2N2907 transistor which works a treat from what I can tell
  13. Re-assemble the computer
  14. Power up and make sure it boots to the new SSD drive
  15. Log into the “admin” account you created – this should be the only account on the system

If all has gone well you should have a fresh new OS and you should be logged in as administrator. The next thing we need to do is re-create the accounts and then re-locate their home directory.

Follow these next steps carefully…

First of all, you need to create a new account for each account you have re-located. In my case the account I relocated is "gerrysweeney". Go into the System Preferences/Users & Groups tool, unlock it if needed, then create the new account. By default this will create a new account home folder on the SSD; that's OK, we will deal with that next. If you have re-located more than one account, create a corresponding new account for each one. When you create the account, use the same password as your previous account; this way your keychain will remain accessible and valid once you re-instate your home folder.

Now you have a new account with a new home folder on the SSD drive, the next thing is to modify that account and point it to the home folder you previously re-located. In my case the account was “gerrysweeney”. In the Users and Groups tool, press the “Ctrl” key and click on the account name, you should get a menu option called “Advanced Options…”, select this. The “Home Directory” field will be set to “/Users/gerrysweeney”, you should change this to “/Volumes/Users/gerrysweeney”. Again, do this for each account you have re-located.

UPDATE: Having done this, I realised that later on I still have to set up a symbolic link to the home folder, which is a couple of steps below. In light of that, you can simply skip the above step of modifying the account properties to re-locate the home directory. Once the symbolic link is in place it will work without doing this.

Now we need to change the ownership of each account's home folder data back to that account; this is important to re-synchronise all permissions. Open a terminal window and, for each account you have re-located, issue the following command: –

sudo chown -R gerrysweeney:staff /Volumes/Users/gerrysweeney

This command will change the ownership of every file in this folder, including the folder itself and all sub-folders and files. Now, I am not sure this is exactly right; it is feasible there are system files that should be protected from the user, but I don't know. What I did worked OK, so it's pretty safe. In any case, you need to do this for every account you have moved; remember to replace "gerrysweeney" with the right account name.

Now that you have given the right ownership to your files, we have to deal with the fact that much of your existing configuration will be looking for profile and other content in /Users/gerrysweeney, and the way we deal with that is by creating a symbolic link. In the terminal window, issue the following command to delete the newly created home folder on the SSD drive that we no longer need.

sudo rm -rf /Users/gerrysweeney

Now you should create a symbolic link to map the expected home folder location to the relocated folder on the new 2TB drive. To do this you should issue the following command:

sudo ln -s /Volumes/Users/gerrysweeney /Users/gerrysweeney

You should do this for each account you have relocated. You can double check this operation worked by issuing the following command:

ls -li /Users
which should show you something like this...
total 8
 135869 drwxrwxrwt  13 root   wheel  442 13 Oct 20:22 Shared
 329606 drwxr-xr-x+ 13 admin  staff  442 14 Oct 10:18 admin
1056113 lrwxr-xr-x   1 root   admin   27 14 Oct 10:17 gerrysweeney -> /Volumes/Users/gerrysweeney

You can see that gerrysweeney is pointing to /Volumes/Users/gerrysweeney
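Since the delete-and-link pair has to be repeated for each relocated account, it can be wrapped in a small helper. This is a sketch rather than exactly what I ran: the SUDO variable lets you exercise it against scratch directories without privileges, and be very careful, because `rm -rf` on the wrong name is destructive.

```shell
SUDO=${SUDO:-sudo}   # set SUDO="" when testing against scratch directories

# relocate_home: replace the freshly created home folder in $1 with a
# symlink to the relocated copy in $2, for the account named $3.
relocate_home() {
  users_dir=$1; vol_dir=$2; name=$3
  $SUDO rm -rf "${users_dir:?}/$name"              # delete the unused SSD copy
  $SUDO ln -s "$vol_dir/$name" "$users_dir/$name"  # link to the 2TB volume
}

# Example:
# relocate_home /Users /Volumes/Users gerrysweeney
```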

The next thing we need to do is recover your applications. You have the option of simply re-installing them all, but that's a right royal pain; I found the following worked really well and mostly re-instated everything for me.

Get your original 1TB disk and connect it to the computer via USB using the disk copy station; you should be able to see the drive in Finder. Make a note of the volume name for the drive – in my case this was "Macintosh HD". In order to recover the applications, you basically need to copy all of your Library and Application files to the SSD. Note that because the volume name contains a space, the path must be quoted. To do this, issue the following commands: –

  sudo ditto -v "/Volumes/Macintosh HD/Library" /Library

  sudo ditto -v "/Volumes/Macintosh HD/Applications" /Applications

  sudo ditto -v "/Volumes/Macintosh HD/Developer" /Developer

Now, with all that done, eject the original 1TB HDD using Finder, then shut down and re-start the computer. While re-booting, you can unplug the disk copy station; you should not need it any more.

At this point you should be able to log in using your original account – in my case this was “gerrysweeney”. If everything worked out OK then you will now have a working system, your desktop, applications and settings should all be restored and in the same state as they were before you started.

Set aside a day to do this; it's a pain, there is a lot of fiddling around and a lot of waiting for stuff to copy. In the end, though, it's a worthwhile mod that Apple does not want you to do.

Enable TRIM

UPDATE: I had found this after I finished the article and Anton pointed out in the comments I should include this information because it is important. Thank you Anton.

In order to maintain the healthy state of your SSD and prolong its life, you need to make the OS and the SSD drive communicate using TRIM commands. This basically allows the OS and the SSD drive to play nice: the OS simply informs the drive that blocks are no longer in use, which helps prolong the life of the SSD by minimising erase cycles. Doing this is easy: you can simply install a piece of software called Trim Enabler, which takes care of this for you. If you want to know more about TRIM, you can read this Wikipedia article.




Sep 22

Racal-Dana 199x DIY High Stability DIY Timebase Hack for under $25

Having played with a number of OCXOs and a Rubidium standard, I tried (after repairing it) to calibrate the standard timebase in my Racal-Dana 1999 counter, which is actually a simple TCXO. I could get pretty close, but it was touchy and not easy to be totally precise. In response to that vlog article, a number of people suggested it would be nice to do an OCXO modification to the Racal counter, so I decided to do exactly that, and I did it on a budget too…

One of the problems with this counter is the lack of any 12V supply able to provide enough current to drive an OCXO's oven, so I had a look around and found an OCXO made by Isotemp, model OCXO131-100, that runs on 5V. This is perfect for this build because the counter has a good 5V supply that can easily provide the additional current required; I have provided a download link for the data sheet for the OCXO I used below. I ordered one from a seller on eBay and used that as the basis for the hack.

The other key component needed to implement a stable OCXO board for this counter is a temperature-stable variable voltage between about 1V and 4V; this is used to fine-tune the OCXO so the oscillator can be calibrated. To get a suitable reference voltage I used a MAX6198A, chosen simply because I had some to hand, but also because they have pretty darn good temperature stability too.


Here is the schematic I used to create the OCXO board.

Data Sheets

Other Information

* The SMD adaptors I used can be obtained here: SMD Adapters – Set #1/

Catch you next time…
