Microsoft says – Don’t Use IE!

So I read this post and all the comments and thought to myself: no one seems to be saying it! So I will… (obviously)

For more years than I can remember, Microsoft has had a habit of pushing “the enterprise” down its own path, with an obvious (to me, anyway) lock-in strategy. Over the years, IE has shown this up time and time again. I thought Chris Jackson’s comments were a little dismissive of Microsoft’s responsibilities here. The reason IE/Edge is still required by so many customers is that Microsoft’s platform, and especially Microsoft’s development environments, made heavy use of IE-specific quirks, leaving vast swathes of enterprise-developed apps depending on IE.

Even worse, so many ISVs have jumped on the “easier to develop” enterprise software platforms (starting with VB back in the day, right through to .NET and its kin today), building software for sale that organizations have purchased and become tied into. Be it ASP.NET, ActiveX or Silverlight (what a mess that was), those products baked in the numerous browser quirks and the non-standard, undocumented, esoteric behaviors of the Microsoft browsers. I think there was a time when Microsoft was trying to be the standard browser of choice, but it failed miserably at it. I do like Chris’s advice though, and as someone who builds responsive web software, I wish I did not have customers DEMAND we support IE11 because it’s their standard browser; it’s annoying and frustrating and not of our own making.

Three years ago, we relegated development for IE to “best-endeavour” only: we will put reasonable effort into fixing anything obvious, but we have drawn the line at doing IE/Edge-specific workarounds and hacks for our software. That has sadly left some of our customers stuck with different browsers for different applications, but we do not accept that this is a problem of our making. We used to feel bad when our customers would tell us “well, you are not Microsoft, so fall in line”. Not anymore!

Now, before I start to sound like I am hating on Microsoft, I must make clear that in recent years I think Microsoft has done a remarkable job, a remarkable turn-around even. Windows 10 is orders of magnitude better than any Microsoft OS before it; Edge is not terrible and mostly works, although it’s still quirky. And hats off: O365 is a winner. Very nicely done, team Microsoft.

Dear Microsoft, if it were up to me…

  • You have the capability, the developers, and the financial resources, probably more than most other software companies in the world. Go and build a world-class standards-based browser; do for your browser what you already did for C++.
  • Or hurry up and finish your Chromium-based browser and get shot of IE and Edge as soon as you can.
  • Go and help your customers remove their IE-related technical debt. It’s not entirely their fault; you created the environment, so help them fix it.

Why does C still exist, when C++ can do everything C can?

This was a question asked on Quora, and the top-voted answer erred on the side of it being cultural or personal preference. I don’t think the answer is culture or preference; there is an excellent reason why both C and C++ exist today. In some particular circumstances, C++ is simply not a good alternative to C.

Many people suggest that C++ generates less efficient code; that’s not true unless you use the advanced features of the language. C++ is generally less popular for embedded systems such as microcontrollers because its code generation is far less predictable at the machine-code level, primarily due to compiler optimizations. The smaller and more resource-limited the target system, the more likely it is that C is the better and more comfortable choice for the developer. This is often the reason people give for why C++ cannot replace C, and it is a very good reason indeed.

However, there’s another, even more fundamental reason that C remains a critical tool in our armory. Imagine you create your very own new CPU or computing architecture for which there is no C or C++ compiler available. What do you do? The only option you have is to write code in some form of assembly language, just as we did in the early ’80s when programming IBM PCs and DOS, before even C became mainstream (yes, there was a time when C was more obscure than x86 assembly!). Now imagine trying to implement a C++17 standards-compliant C++ compiler and STL library in assembly language. That would be a daunting, almost unimaginable task for an organization of any size, right?

On the other hand, a C compiler and a standard C runtime library, while still not an insignificant effort, are a hell of a lot more achievable, even by a single developer. In truth, you would almost certainly want to write some form of assembler/linker first to make writing the C compiler simpler. Once you have a standards-compliant C compiler working well enough, a vast array of libraries and code written in C becomes available, and you build out from there. If your target platform did require a C++17 standards-compliant C++ compiler, you would write that in C.

The C language holds quite a unique position in our computer-engineering toolbox (possibly only comparable to early Pascal implementations). It is so low-level that you can almost visualize the assembly code it generates just by reading your C code, which is why it lends itself so well to embedded software development. Despite this, C remains high-level enough to facilitate building higher-level application logic and quite complex applications. Brand new C++ compilers would most likely be written in C, at least for their early iterations; you can think of C as an ideal bootstrap for more significant and more comprehensive programs like an operating system or a C++ compiler and library.
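
To illustrate the point, here is a trivial C function together with the sort of assembly a typical x86-64 compiler produces for it; the instructions shown are illustrative only and will vary by compiler, target and optimization flags.

/* add.c - a trivial function whose machine code is easy to predict.
 * A typical optimizing x86-64 compiler turns it into something close to:
 *     mov  eax, edi    ; first argument arrives in edi
 *     add  eax, esi    ; second argument arrives in esi
 *     ret              ; result is returned in eax
 * (illustrative only; exact output depends on compiler and flags) */
int add(int a, int b)
{
    return a + b;
}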

In summary, C has its place, and it’s hard to see any good reason to create an alternative to it. C has stood the test of time, and its syntax is the founding father of the top modern-day languages (C++, C#, Java, JavaScript and numerous others, even Verilog). The C language is not a problem that needs to be solved, and it does not need to be replaced either. Like oxygen, C is old hat now, but it works well, and in the world of software development, we still need it.

Fully Programmable Modular Bench Power Supply – Part 7

Now that I have the requirements for the control ranges, it’s time to get down to the nitty-gritty and get a DAC up and running so we can make some measurements. The maximum dynamic range I need for this project is 6000 individual steps; this was identified in the calculations for the voltage control for the 0-6V range in Part 5, so let us start there.

There are many DACs to choose from. I want to keep the cost and component count down, so my starting point is a low-cost, single-component solution from Microchip, part number MCP4822. This is a dual-channel 12-bit DAC with a built-in voltage reference, so I get independent voltage and current control from one 8-pin chip – wow! However, there is a problem with this part: its 12-bit resolution only gives me 4096 individual steps, and my design calls for 6000. The trouble with parts that offer more bits is that they quickly get expensive, so I want to see if it’s possible to extend the resolution of the DAC in software using a technique called “dithering” or “modulation”.

The idea here is pretty simple: to increase the resolution of the DAC, you continuously switch the output between two or more codes, feed the result into a low-pass filter and take the average voltage. If you switch between two adjacent codes with a variable mark-space ratio, as you do with PWM, it should be possible to extend the resolution of the DAC without creating large ripple on the output voltage, which keeps the low-pass filter easy to construct. That is the theory at least; I need to try it and see what the results are in practice.

Before getting too complicated though, I thought it would be good to run the DAC in static mode, set some codes and measure what we get. The DAC has 4096 steps, the internal reference voltage is 2.048V and the chip has a 2x gain option, so I should be able to program any voltage between 0V and 4.095V in 1mV steps simply by writing a digital code between 0 and 4095 to the DAC channel. To get this up and running, I hooked up a PIC microcontroller, the DAC chip and an RS232 serial interface. The firmware in the PIC allows me to interact with the DAC through a simple serial terminal on my computer.
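
To make the static test concrete, here is a minimal C sketch of the kind of helper such firmware might use: it builds the 16-bit MCP4822 write command (channel select, gain, shutdown and 12 data bits, as I read the datasheet, so do verify the bit positions yourself) and maps a target in millivolts straight onto a DAC code. spi_write16() is a stand-in for whatever SPI transmit routine your microcontroller provides.

#include <stdint.h>

/* Stand-in for the microcontroller's SPI transmit routine (assumed). */
extern void spi_write16(uint16_t word);

/* Build and send a 16-bit MCP4822 write command.  Assumed bit layout
 * (check against the datasheet): bit 15 = channel (0 = DAC A, 1 = DAC B),
 * bit 14 = don't care, bit 13 = gain (0 = 2x, 1 = 1x),
 * bit 12 = /SHDN (1 = output active), bits 11..0 = DAC code. */
static void dac_set_code(uint8_t channel_b, uint16_t code)
{
    uint16_t w = (uint16_t)(code & 0x0FFFu);  /* 12-bit data               */
    if (channel_b) w |= 0x8000u;              /* select DAC B              */
                                              /* bit 13 = 0 -> 2x gain     */
    w |= 0x1000u;                             /* /SHDN = 1, output enabled */
    spi_write16(w);
}

/* With the 2.048V internal reference and 2x gain, 1 LSB = 1mV, so a target
 * in millivolts maps directly onto the 12-bit code (0..4095). */
static void dac_set_millivolts(uint8_t channel_b, uint16_t mv)
{
    dac_set_code(channel_b, (mv > 4095u) ? 4095u : (uint16_t)mv);
}

Calling dac_set_millivolts(0, 1000), for example, should land close to 1V on channel A, which is exactly the sort of spot check tabulated further down.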

Before any MSP430, Atmel or Arduino die-hard fans start giving me advice on microcontroller choice: forget it. They are all good parts; I just happen to like PICs because I know them and I have the tools and a whole bunch of them sitting here to play with. If you are not happy with my choice of microcontroller, that’s tough… I am not going to change it or enter into any debate over the pros and cons of other devices. I am sticking with PICs for this one, and if you try to change my mind I will ignore you – sorry.

Here is the schematic diagram for the prototype I am using.

For the dithering, I have decided to extend the DAC by two bits. Extending by two bits means I have to write a repeating sequence of four codes to the DAC. I am writing approximately 1000 codes per second from a timer-driven, high-priority interrupt routine, which ensures the timing remains constant; timing errors would introduce additional DNL error, so the code stream needs to be constant and accurate. The codes written are the base code value and the base code value + 1, and the two least significant bits of the now 14-bit word control how many times each of the two values is written. For example, to get four steps between code 100 and code 101 we would write the following sequences (a sketch of the interrupt routine follows the table):

100.00       100   100   100   100
100.25       100   100   100   101
100.5        100   100   101   101
100.75       100   101   101   101
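
Here is a minimal C sketch of how that interrupt routine might look, assuming a 14-bit set-point whose top 12 bits are the base code and whose bottom two bits are the fraction; dac_set_code() is the hypothetical helper from the sketch above.

#include <stdint.h>

/* 14-bit set-point: top 12 bits = base DAC code, bottom 2 bits = fraction
 * (quarters of an LSB).  Updated by the command handler, consumed here. */
static volatile uint16_t setpoint14;

/* Hypothetical helper from the earlier sketch. */
extern void dac_set_code(uint8_t channel_b, uint16_t code);

/* Called from the ~1kHz timer interrupt.  In each frame of four ticks the
 * output is base+1 for 'frac' ticks and base for the rest, so the
 * low-pass-filtered average sits at base + frac/4 LSB, matching the code
 * table above (e.g. 100 100 100 101 for 100.25). */
void dither_tick(void)
{
    static uint8_t phase = 0;                      /* 0..3, slot within the frame */
    uint16_t base = (uint16_t)(setpoint14 >> 2);   /* 12-bit base code            */
    uint8_t  frac = (uint8_t)(setpoint14 & 0x03u); /* how many slots get base+1   */

    /* Keep base below 4095 when frac > 0, or base+1 overflows the 12-bit range. */
    if (phase >= (uint8_t)(4u - frac))
        dac_set_code(0, (uint16_t)(base + 1u));
    else
        dac_set_code(0, base);

    phase = (uint8_t)((phase + 1u) & 0x03u);
}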

Extending by three bits is also an option and would mean writing a sequence of eight codes. Following the same scheme as above, here are the codes that would be written:

100.00        100   100   100   100   100   100   100   100
100.125       100   100   100   100   100   100   100   101
100.250       100   100   100   100   100   100   101   101
100.375       100   100   100   100   100   101   101   101
100.500       100   100   100   100   101   101   101   101
100.625       100   100   100   101   101   101   101   101
100.750       100   100   101   101   101   101   101   101
100.875       100   101   101   101   101   101   101   101

I tried three bits as an academic exercise, but I have decided not to use it because of the noise, ripple and integral errors it generates. The cost of the extra filter circuitry and the additional line items in the bill of materials would probably outweigh the cost of upgrading to a higher-resolution DAC.

I have selected a number of spot voltages in the range to benchmark what I get from the DAC. The following table sets out the results I measured. (I am using a calibrated HP 34401A meter for all measurements).

Spot Voltage   12-bit Code   Measured (MCP4822, 12-bit static)   Error      14-bit Code   Measured (MCP4822, 14-bit dithered)   Error
0V             0             0.0015V                             +0.0015V   0             0.0014V                               +0.0014V
0.001V         1             0.0015V                             +0.0015V   4             0.0016V                               +0.0002V
0.002V         2             0.0024V                             +0.0014V   8             0.0026V                               +0.0006V
0.003V         3             0.0034V                             +0.0004V   12            0.0036V                               +0.0006V
0.004V         4             0.0044V                             +0.0004V   16            0.0046V                               +0.0006V
100mV          100           0.1024V                             +0.0024V   400           0.1026V                               +0.0026V
500mV          500           0.5024V                             +0.0024V   2000          0.5026V                               +0.0026V
501mV          501           0.5034V                             +0.0024V   2004          0.5036V                               +0.0026V
1V             1000          1.0004V                             +0.0004V   4000          1.0006V                               +0.0006V
1.5V           1500          1.4975V                             -0.0025V   6000          1.4977V                               -0.0023V
2.5V           2500          2.5014V                             +0.0014V   10000         2.5016V                               +0.0016V
3V             3000          2.9993V                             -0.0007V   12000         2.9995V                               -0.0005V
3.001V         3001          3.0002V                             +0.0002V   12004         3.0005V                               +0.0005V
3.002V         3002          3.0012V                             -0.0008V   12008         3.0014V                               +0.0006V
3.9V           3900          3.8968V                             -0.0032V   15600         3.8969V                               -0.0031V
4.095V         4095          4.0916V                             -0.0034V   16379         4.0916V                               -0.0034V

Well, that is disappointing, given that I am aiming for a precision of 1mV, and that to get a control voltage of 0-6V I need accurate 500µV steps. So what’s wrong here? Having read the data sheet, there are some gotchas that you might naively ignore, as I did. Every DAC has two really important parameters: Integral Non-Linearity (INL) and Differential Non-Linearity (DNL). DNL defines the maximum deviation to expect from the “ideal” voltage for any given code, expressed in LSBs (or counts from ideal), and INL is the accumulation of those DNL errors over the whole range. With 1 LSB equal to 1mV here, an INL of a few LSBs translates directly into the few millivolts of error the table shows.

Fundamentally, the DAC is based on a resistor string network, and it’s not easy to make highly accurate resistors; as soon as you need more accuracy, the cost of the part rises very steeply, and even the best part money can buy will still have errors. The more bits you extend the DAC by using dithering, the more error and the more noise you introduce. While extending by two bits is probably acceptable with a decent low-pass filter, extending by three bits and beyond is not really practical. As an aside, the noise figures for the MCP4x22 parts are not that great, something one must consider when the reference voltage generated is going to be amplified: the noise will be amplified too.

In summary then, I want accuracy and precision at a reasonable cost, and even if I spend a lot of money I will still have errors. The lesson learned is that I no longer think of a DAC as an accurate programmable voltage source; it’s not, it is only a close approximation. The MCP4922 (MCP4822) is a nice part for the $$$ and no doubt useful for many things, but it’s not good enough for what I want to achieve in this project; even with the resolution extended to 14 bits, it falls short. To be fair, even with the errors in the DAC this would still give a pretty good degree of control, probably more accurate than most of the lower-end bench PSUs out there, but my benchmark is the Agilent E3631A, so I need to do much better than this. The MCP4x22 is the highest-resolution DAC Microchip makes, so I must now search for other parts instead; Linear Technology and Analog Devices are the logical starting points for my search.

There is one further possibility I have yet to try: combining both 12-bit DAC outputs to create a much higher-resolution DAC; the block diagram for such a solution is shown in the data sheet for the part. This is well worth a look because, if it works well enough, two of these chips may still be cheaper than an upgraded DAC part. I will build this out at some point and give it a try.

This project also needs metering to monitor the output voltage and the current drawn by the load connected to the PSU, and this needs to be accurate to around 1mV too; that’s five digits, which in itself is a tall order. However, it occurred to me that if I could get an ADC that is accurate enough, and a DAC with enough resolution to provide headroom for trimming, it might be possible to build a self-calibrating system that trims the DAC output to match the programmed voltage each time a new voltage is set. That is what I will look at in Part 8.