Why do we create modern Desktop GUI apps using HTML/CSS/JavaScript?

This was a post on Quora, and I thought I had more to say than I could fit in a reply, so here goes: a blog article instead!

I have been involved with software development for nearly 30 years now. I have done front-end and back-end work, and I am responsible for the overall architecture and product strategy of a very large-scale application which spans both. C++ is my background, and as you might imagine, after 30 years I have an opinion here.

Let’s start with the obvious: there has been a meteoric rise in front-end UIs built with HTML/CSS. This is not a fad, it’s a real thing, and it’s everything from desktop apps like Slack, Twitter and Facebook, to mobile apps on Android and iOS, game UIs, business applications, consumer applications; the list is basically endless. Even the tools that developers use to create such UIs are themselves written in HTML/CSS and deployed using things like Electron. Visual Studio Code, Atom and many other dev tools fall into this category.

If you take a tool like Visual Studio Code, it is essentially a browser (Chromium) and an embedded Node.js runtime, with the HTML/CSS/JavaScript files all packaged into a deployable desktop binary. Because it is written in HTML/CSS targeting a browser, and because that browser (or variations of the same) generally targets all notable platforms (Windows, Mac, Linux, Android, iOS, etc.), desktop apps written this way are pretty much 100% cross-platform for free. What is not to like…

In answer to the opening question, it really has bugger all to do with speed. In fact, for UI work in particular, JavaScript in a browser will outperform your average native .NET-developed UI of comparable complexity by a very long way, simply because the browser implementations have been so highly optimised over long and sustained periods of time (the notable exception to that claim being IE, of course).

The problem with developing UIs using HTML/CSS is that it is complicated, really complicated, to do well. The root of the problem is that browsers, and the HTML/CSS specifications, were never conceived as desktop UI tools; it is simply demand that has driven the browsers to evolve that way.

The browser standards have recognised this and have made good headway in providing different layout modes optimised for different layout types. The classical natural document layout (normal flow) is what HTML was designed for, but through continued evolution things like flexbox and grid have started to pave the way for far better separation of concerns between layout and design, and facilitate layout schemes that are far more suited to modern desktop-app style layouts.

A perennial problem with desktop-style UIs in a browser has been the issue of content height, because of the way HTML rendering/layout works in normal flow. Developers, because they can, have found many workarounds for this using JavaScript to fudge the layouts, but this leads to poor quality, inflexible and unmaintainable code, and a lot of it too. Then, to make things easier, dev teams buy into a framework like AngularJS or Vue.js. These are impressive and helpful, but they are even worse when it comes to obsolescence: build your application in Angular 1.x and the only realistic way of moving to Angular 2.x is to rewrite (at least substantially) your current codebase. Not because any framework is bad, they are all very good, but because the browser and the thinking change more quickly than most commercial application projects can accommodate.

And this leads to the next big problem: these large and expensive-to-create codebases are locked into the design/implementation approach of the day. So even though the browsers have moved on significantly and the newer versions of the frameworks have much better things that would help, most codebases are essentially stuck at a point in time, and the only way out is a complete rewrite. For any significant commercial product that is simply not going to be an option; it is just too complicated to bring your customers on that journey.

That leaves evolution, that is, taking advantage of new browser features as you add new stuff. That works, but you very quickly end up with code that “no one wants to touch”. This is painful, expensive and basically unavoidable, because today at least, any initiative to modernise and get back on track will take longer than it takes for the browser to evolve again.

Going back to native desktop apps then, perhaps that is a better long-term investment. Ten years ago I would have agreed with that sentiment; after all, when it comes to desktops there are really only two players, Microsoft Windows and Apple OSX, so why not just target those, ideally with some shared code, and be done with it.

Just for completeness I should mention Linux. It is the best computing platform to happen to the world and should be admired in every way, except one: the desktop/UI sucks. It is awful in almost every way; you are essentially targeting X11, which dates back to the mid-1980s, and it shows. It works and it brings a couple of quirky features to the table which are quite nice, but in essence, if you love Linux, just forget targeting GUI desktop stuff; you need a graphics screen, a browser (for a real UI) and a terminal window, and that’s it. There is the Wayland stuff, but it’s far too early, it’s also disjointed, barely supported and is grappling with compatibility with legacy stuff…

Ok, so back to Windows and OSX. I am probably more scathing here than I ought to be, but it is easy to be, to be honest. The bottom line is this: Microsoft want desktop supremacy, so they have, over a sustained period of time, deployed an “enterprise lock-in” strategy. Having underpinned their stronghold with Visual Basic in the “enterprise in-house dev” arena back in the day, they embarked on their .NET journey. The .NET technology itself is not half bad, but the real tragedy is the way they closed off the desktop UI developer community to all but their own .NET tech stack, ensuring that every new OS/shell feature added to Windows was only materially exposed to developers through their proprietary .NET stack. So if you were working in Win32 (the API of the day at the time), or if you wanted to write cross-platform desktop apps, you were basically plain out of luck. Microsoft paid marketing lip service of course: they supported multiple programming languages, but they favoured and ultimately championed C#, and that was essentially your lock-in. You want to make a slick, modern Windows desktop application? You can, but it’s a closed shop; you need C#/.NET. And for native Windows, that’s still largely true, even today in 2022.

I have experience with a largish-scale .NET application, and it is a horrible thing to maintain: it’s slow, clunky, buggy as hell, many components are now out of date, and we are locked into that point in time. That’s what happens, so it is really no better than the situation with HTML/JavaScript at all; in fact I would argue it’s a lot worse in the longer term.

What about OSX? Absolutely great desktop UI, the best in the business IMHO from a user perspective (at least until very recently), so you can choose to be an “elite” and just build desktop apps for OSX, right? What you find here, though, is that it is even worse than Microsoft, if that’s even possible. Apple built their UI on a framework that is essentially only accessible (for all practical purposes) via their own Objective-C language, and my goodness, what a terrible thing that is. If Apple ever wonder why, at the height of their iMac/Intel popularity, their desktop never took more than a few percent of the global market, they should ask the developers who looked at their development environment and said “no thanks”. I would put that down entirely to their very own version of an “elite developer tie-in strategy”; looking at Microsoft you would think no one could do worse than them in this regard, but Apple did, and continue to do so today. For OSX there is a more modern choice today, called Swift, another language developed by Apple that is following Microsoft’s .NET blueprint. Swift in use is really just a more friendly version of Objective-C that dispenses with C-style pointers and looks a bit more like a JavaScript/C# mash-up. I expect the technology behind it is good, but it’s too proprietary, and no one apart from Apple die-hards is interested or cares, as best I can tell. Most desktop applications that target the Mac today tend to be HTML/CSS/JavaScript Electron-type apps for that very reason.

Developing desktop apps in 2022 is essentially an HTML/CSS/JavaScript endeavour, and at the rate the open browser standards are evolving, I find it difficult to see why this would change any time soon. The performance of a well-written UI in a modern browser out-performs a native desktop application in every category of speed, usability and presentation for most usual use cases. Menus and forms *could* have a slight advantage in native apps over their browser counterparts, but there is not much in it at all, and browsers are getting much better in this regard. And when you start to want to mix in rich media like sound, video and 2D/3D graphics, suddenly the native UIs of the day, even from Microsoft and Apple, are no match at all for what a modern browser has to offer.

When it comes to developing modern UIs in HTML/CSS I am both excited for the future and frustrated by the sheer complexity involved today. The complexity itself is not a problem, but the codebases it ultimately leads to are a big and very expensive long-term problem. If I could wave a magic wand, I would create an open working group, with the influence of the W3C behind me, to create a mandatory web standard for browsers that defines both a subset (to simplify and create an *appropriate* desktop security model) and an extension of CSS/HTML specifically optimised for marking up and implementing desktop applications, and have that all built into modern browsers as standard. The goal would be to simplify the creation of desktop application UIs and open the web platform up to people who currently won’t go near HTML/CSS with a bargepole. This would move personal computing on a long way IMHO.

If this were done well, I would expect to see a serious migration of desktop development over to browser development, much faster than we are seeing now, and tooling/runtimes/sandboxes like Electron and others would be the new kids on the block.

Even better would be to see Microsoft, Apple (desktop and mobile), Android and Linux adopt such a standard and get behind it, moving away from the proprietary tie-in developer environments they currently impose on their users. This could be a serious desktop strategy for Linux too, and that right there is probably the truth of why this last part will never happen. Could you imagine a future where desktop applications were built like this and were, in every sense of the word, “portable between operating systems”, with the experience being identical on all platforms? I expect Microsoft and Apple would see this as a very bad idea… that is, for as long as selling their operating system/hardware is an integral part of their go-to-market strategy.

What do you think?

Microsoft says – Don’t Use IE!

So I read this post and all the comments and think to myself – no one seems to be saying it!  So I will… (obviously)

For more years than I can remember, Microsoft has had a habit of pushing “the enterprise” down its own path, with an obvious (to me, anyway) lock-in strategy, and over the years IE has shown this up time and time again. I thought Chris Jackson’s comments were a little dismissive of Microsoft’s responsibilities here, and I wanted to state the reason why IE/Edge is still required by many customers: Microsoft’s platform, and especially Microsoft’s development environments, made use of IE-specific quirks, leaving vast swathes of enterprise-developed apps depending on IE.

Even worse, so many ISVs jumped on the “easier to develop” enterprise software platforms (starting with VB back in the day, right through to .NET and its kin today), building software for sale that organizations have purchased and gotten tied into. Be it ASP.NET, ActiveX or Silverlight (what a mess that was), these platforms leaned on the numerous browser quirks and non-standard, undocumented, esoteric behaviors of the Microsoft browsers. I think there was a time when Microsoft was trying to be the standard browser of choice, but it failed miserably at it. I do like Chris’s advice though, and as someone who develops web software, I wish I did not have customers DEMAND we support IE11 because it is their standard browser; it’s annoying and frustrating and not of our own making.

Three years ago we relegated development for IE to “best endeavours” only; that means we will put reasonable effort into fixing anything obvious, but we have drawn the line at doing IE/Edge-specific workarounds/hacks for our software. That has sadly left some of our customers stuck with different browsers for different applications, but we do not accept that is a problem of our making. We used to feel bad when our customers would tell us “well, you are not Microsoft, so fall in line” – not anymore!

Now before I start to sound like I am hating on Microsoft, I must make clear that in recent years I think Microsoft has done a remarkable job, a remarkable turn-around even.  Windows 10 is orders of magnitude better than any Microsoft OS before it, Edge is not terrible and mostly works, although it’s still quirky. And hats off, O365 is a winner – very nicely done team Microsoft. 

Dear Microsoft, if it were up to me…

  • You have the capability, the developers and the financial resources, probably more than most other software companies in the world. Go and build a world-class standards-based browser; do for your browser what you already did for C++.
  • Or, hurry up and develop your Chromium-based browser and get shot of IE and Edge as soon as you can.
  • Go and help your customers remove their technical debt in relation to IE. It is not entirely their fault; you created the environment, so help your customers fix it.

Why does C still exist, when C++ can do everything C can?

This was a question asked on Quora, and the top-voted answer erred on the side of it being cultural or personal preference. I don’t think the answer is culture or preference; there is an excellent reason why both C and C++ exist today. C++ is simply not a good alternative to C in certain circumstances.

Many people suggest that C++ generates less efficient code; that’s not true unless you use the advanced features of the language. C++ is generally less popular for embedded systems such as microcontrollers because its code generation is far less predictable at the machine-code level, primarily due to compiler optimizations. The smaller and more resource-limited the target system, the more likely it is that C is the better and more comfortable choice for the developer, and this is often the reason people suggest that C++ cannot replace C. That is a very good reason indeed.

However, there’s another, even more fundamental, reason that C remains a critical tool in our armory. Imagine you create your very own new CPU or computing architecture for which there is no C or C++ compiler available – what do you do? The only option you have is to write code in some form of assembly language, just as we did in the early ’80s when programming IBM PCs and DOS, before even C became mainstream (yes, there was a time when C was more obscure than x86 assembly!). Now imagine trying to implement a C++17 standards-compliant compiler and STL library in assembly language; that would be a daunting, almost unimaginable task for an organization of any size, right?

On the other hand, a C compiler and a standard C runtime library, while still not an insignificant effort, are a hell of a lot more achievable, even by a single developer. In truth, you would almost certainly want to write some form of assembler/linker first to make writing the C compiler simpler. Once you have a standards-compliant C compiler working well enough, a vast array of libraries and code written in C becomes available, and you build out from there. If your target platform did require a C++17 standards-compliant C++ compiler, you would write that in C.

The C language holds quite a unique position (possibly only comparable to early Pascal implementations) in our computer engineering toolbox. It is so low-level that you can almost visualize the assembly code it generates just by reading your C code, which is why it lends itself so well to embedded software development. Despite this, C remains high-level enough to facilitate building higher-level application logic and quite complex applications. Brand new C++ compilers would most likely get written in C, at least for early iterations; you can think of C as an ideal bootstrap for more significant and more comprehensive programs like an operating system or a C++ compiler/library.

In summary, C has its place, and it’s hard to see any good reason to create an alternative to it. It has stood the test of time, and its syntax is the founding father of the top modern-day languages (C++, C#, Java, JavaScript and numerous others, even Verilog). The C language is not a problem that needs to be solved, and it does not need to be replaced either. Like oxygen, C may be old hat now, but it works well, and in the world of software development we still need it.

Fully Programmable Modular Bench Power Supply – Part 7

Now that I have the requirements for the control ranges, it’s time to get down to the nitty-gritty and get a DAC up and running so we can make some measurements. The maximum dynamic range I need for this project is 6000 individual steps; this was identified in the calculations for the voltage control for the 0-6V range in Part 5, so let us start there.

There are many options for DACs to choose from. I want to keep the cost and component count down, so my starting point is a low-cost, single-component solution from Microchip, part number MCP4822. This is a dual-channel 12-bit DAC with a built-in voltage reference, so I get independent voltage and current control from one 8-pin chip – wow! However, there is a problem with this part: its 12-bit resolution only gives me 4096 individual steps, and my design calls for 6000. The problem with choosing components with a higher number of bits is that they start to get expensive, so I want to see if it’s possible to extend the range of the DAC in software using a technique called “dithering” or “modulation”.

The idea here is pretty simple: to increase the effective resolution of the DAC, you continuously switch the output between two or more codes, feed the result into a low-pass filter and take the average voltage. If you switch between two adjacent codes with a variable mark/space ratio, as you would with PWM, it should be possible to extend the range of the DAC without creating large ripple on the output (because the codes are adjacent, the ripple before filtering is at most one LSB), making the low-pass filter easy to construct. That is the theory at least; I need to try it and see what the results are in practice.

Before getting too complicated though, I thought it would be good to run the DAC in static mode, set some codes and measure what we get. The DAC has 4096 steps, the internal reference voltage is 2.048V and the chip has a x2 gain option, so I should be able to program any voltage between 0V and 4.095V in 1mV steps simply by writing a digital code between 0 and 4095 into the DAC channel. To get this up and running I hooked up a PIC microcontroller, the DAC chip and an RS232 serial interface. The firmware in the PIC will allow me to interact with the DAC through a simple serial terminal on my computer.
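
Just to make the code-to-voltage relationship concrete, here is a minimal sketch of a helper routine such firmware might use to set a channel. The 16-bit command-word layout (channel-select, gain and shutdown bits followed by the 12-bit code) is my reading of the MCP4822 datasheet, and spi_write16() is a hypothetical stand-in for the real SPI peripheral code:

    /* Sketch only: assumes the MCP4822 command word is
       [A/B select][x][GA][SHDN][12-bit data], and that spi_write16()
       (hypothetical, provided elsewhere) clocks one 16-bit word out
       to the DAC with chip-select handled for us. */

    #include <stdint.h>

    extern void spi_write16(uint16_t word);   /* hypothetical SPI helper */

    /* With the 2.048V internal reference and the x2 gain option, one code
       is nominally 1mV, so codes 0..4095 give roughly 0V..4.095V. */
    void dac_write_mv(uint8_t channel, uint16_t millivolts)
    {
        uint16_t code = (millivolts > 4095u) ? 4095u : millivolts;

        uint16_t word = (uint16_t)((channel & 1u) << 15)  /* DAC A or B       */
                      | (1u << 12)                        /* SHDN=1: active   */
                      | code;                             /* GA=0 selects x2  */
        spi_write16(word);
    }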

Before any MSP430, Atmel or Arduino die-hard fans start giving me advice on microcontroller choice – forget it. They are all good parts; I just happen to like PICs because I know them, and I have the tools and a whole bunch of them sitting here to play with. If you are not happy with my choice of microcontroller, that’s tough… I am not going to change it or enter into any debate over the pros and cons of other devices. I am sticking with PICs for this one, and if you try to change my mind I will ignore you – sorry.

Here is the schematic diagram for the prototype I am using.

For the dithering I have decided to extend the DAC by 2 bits. Extending by two bits means I have to write a sequence of four codes continuously, in succession, to the DAC. I am writing approximately 1000 codes per second within a timer-driven, high-priority interrupt routine, which ensures that the timing remains constant; timing errors will introduce additional DNL errors, so the code stream needs to be constant and accurate. The codes written are the base code value followed by the base code value + 1, and the two least significant bits of the now 14-bit word control how many times each of the two values is written (a sketch of the interrupt routine follows the table below). For example, to get four steps between code 100 and code 101 we would write the following codes:

100.00       100   100   100   100
100.25       100   100   100   101
100.50       100   100   101   101
100.75       100   101   101   101
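
If you want to try the same trick, here is a minimal sketch of how the interrupt-driven dither could be structured. This is an illustration rather than my actual PIC firmware: dac_write_raw() is a hypothetical function that clocks one 12-bit code out to the DAC, and the timer setup itself is omitted:

    #include <stdint.h>

    extern void dac_write_raw(uint16_t code);  /* hypothetical 12-bit DAC write */

    static volatile uint16_t base_code;  /* upper 12 bits of the 14-bit value   */
    static volatile uint8_t  fraction;   /* lower 2 bits: number of "+1" slots  */

    /* Called from the main loop with the 14-bit value (0..16383). */
    void dither_set(uint16_t value14)
    {
        base_code = value14 >> 2;        /* 12-bit code actually sent           */
        fraction  = value14 & 0x03u;     /* 0..3 of the 4 slots get base + 1    */
    }

    /* Called from the timer interrupt at a constant ~1kHz rate. Steps through
       a fixed 4-slot pattern, so after the low-pass filter the average output
       sits fraction/4 of an LSB above the base code, as in the table above.
       (Assumes base_code < 4095 whenever fraction is non-zero.) */
    void dither_tick(void)
    {
        static uint8_t slot = 0;

        uint16_t code = base_code;
        if (slot >= (uint8_t)(4u - fraction))  /* "+1" codes at the end of the pattern */
            code++;

        dac_write_raw(code);
        slot = (slot + 1u) & 0x03u;
    }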

Extending by 3 bits is also an option and would mean writing a sequence of eight codes. Following the same scheme as above, here are the codes that would be written:

100.00        100   100   100   100   100   100   100   100
100.125       100   100   100   100   100   100   100   101
100.250       100   100   100   100   100   100   101   101
100.375       100   100   100   100   100   101   101   101
100.500       100   100   100   100   101   101   101   101
100.625       100   100   100   101   101   101   101   101
100.750       100   100   101   101   101   101   101   101
100.875       100   101   101   101   101   101   101   101

I tried three bits as an academic exercise, but I have decided not to go that far because of the noise, ripple and integral errors generated. The cost of the filter circuitry and the extra line items in the bill of materials would probably outweigh the cost of upgrading to a higher-resolution DAC.

I have selected a number of spot voltages in the range to benchmark what I get from the DAC. The following table sets out the results I measured. (I am using a calibrated HP 34401A meter for all measurements).

                 MCP4822 (12-bit static)                MCP4822 (14-bit dithered)
Spot Voltage     DAC Code   Measured    Error           DAC Code   Measured    Error
0V               0          0.0015V     +0.0015V        0          0.0014V     +0.0014V
0.001V           1          0.0015V     +0.0015V        4          0.0016V     +0.0002V
0.002V           2          0.0024V     +0.0014V        8          0.0026V     +0.0006V
0.003V           3          0.0034V     +0.0004V        12         0.0036V     +0.0006V
0.004V           4          0.0044V     +0.0004V        16         0.0046V     +0.0006V
100mV            100        0.1024V     +0.0024V        400        0.1026V     +0.0026V
500mV            500        0.5024V     +0.0024V        2000       0.5026V     +0.0026V
501mV            501        0.5034V     +0.0024V        2004       0.5036V     +0.0026V
1V               1000       1.0004V     +0.0004V        4000       1.0006V     +0.0006V
1.5V             1500       1.4975V     -0.0025V        6000       1.4977V     -0.0023V
2.5V             2500       2.5014V     +0.0014V        10000      2.5016V     +0.0016V
3V               3000       2.9993V     -0.0007V        12000      2.9995V     -0.0005V
3.001V           3001       3.0002V     +0.0002V        12004      3.0005V     +0.0005V
3.002V           3002       3.0012V     -0.0008V        12008      3.0014V     +0.0006V
3.9V             3900       3.8968V     -0.0032V        15600      3.8969V     -0.0031V
4.095V           4095       4.0916V     -0.0034V        16379      4.0916V     -0.0034V

Well, that is disappointing given I am aiming for a precision of 1mV, and to get a control voltage of 0-6V I need accurate 500µV steps. So what’s wrong here? Having read the datasheet, there are some gotchas that you might naively ignore, as I did. Every DAC has two really important parameters: Integral Non-Linearity (INL) and Differential Non-Linearity (DNL). DNL defines how far each individual step can deviate from the ideal one-LSB step size, and INL is the accumulation of those step errors across the whole range, i.e. how far any given code can sit from the ideal straight-line output, both expressed in LSBs. Fundamentally, the DAC is based on a resistor-string network, and it’s not easy to make highly accurate resistors; as soon as you start needing more accuracy the cost of the part rises very steeply, and even with the best part money can buy there will still be errors. The more bits you extend the DAC by using dithering, the more error and the more noise you introduce. While extending by two bits is probably acceptable with a decent low-pass filter, extending by three bits and beyond is not really practical. As an aside, the noise figures for the MCP4x22 parts are not that great, something one must consider when the reference voltage generated is going to be amplified: the noise will be amplified too.
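
As a quick sanity check on the table above (plain arithmetic, nothing hardware-specific): with the 2.048V reference and x2 gain, one 12-bit LSB is nominally 1mV, so a measured error in millivolts reads directly as an error in LSBs.

    #include <stdio.h>

    int main(void)
    {
        const double full_scale_mv = 4096.0;  /* 2.048V reference with x2 gain   */
        const double codes         = 4096.0;  /* 12-bit DAC                      */
        const double lsb_mv        = full_scale_mv / codes;   /* = 1mV per code  */

        /* e.g. the spot measurement at code 4095: ideal 4.095V, measured 4.0916V */
        double error_mv  = (4.0916 - 4.095) * 1000.0;
        double error_lsb = error_mv / lsb_mv;

        printf("error = %.1f mV (about %.1f LSB of integral error)\n",
               error_mv, error_lsb);
        return 0;
    }

A few millivolts of error at the top of the range is therefore a few LSBs of integral error, which is exactly the behaviour INL describes, and dithering cannot average it away because the underlying codes carry the error.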

In summary then, I want accuracy and precision, but I also want reasonable cost, and even if I spend a lot of money I will still have errors. The lesson learned for me is that I no longer think of a DAC as an accurate programmable voltage source – it’s not, it is a close approximation only. The MCP4922 (MCP4822) is a nice part for the $$$ and useful for many things, I have no doubt, but it’s not good enough for what I want to achieve in this project; even with the resolution extension to 14 bits it falls short. To be fair, even with its errors this DAC would give a pretty good degree of control, probably more accurate than many of the lower-end bench PSUs out there, but my benchmark is the Agilent E3631A, so I need to achieve much better than this. The MCP4x22 is the highest-resolution DAC Microchip do, so I must now search for other parts instead – Linear Technology and Analog Devices are the logical starting points for my search.

There is one further possibility which I have yet to try: combining both 12-bit DAC outputs to create a much higher resolution DAC; the block diagram for such a solution is shown in the datasheet for the part. This is well worth a look because, if it works well enough, the cost of the two chips may well still be cheaper than an upgraded DAC part. I will build this out at some point and give it a try.

This project also needs to implement metering to monitor the output voltage and the current drawn by the load connected to the PSU, and this needs to be reasonably accurate too, to 1mV; that’s five digits I need, which in itself is a tall order. However, it occurred to me that if I could get an ADC that was accurate enough, and a DAC with enough resolution to provide headroom for trimming, it might be possible to build a self-calibrating system that trims the DAC output to match the desired programmed voltage each time you set a new voltage. That is what I will look at in Part 8.
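
To give a rough idea of what I mean by that closed-loop trim, here is a sketch of the concept only: dac_set() and adc_read_uv() are hypothetical placeholders for whichever parts end up being chosen, and the real thing would need settling time, limits and sanity checks.

    #include <stdint.h>

    extern void    dac_set(uint32_t code);    /* hypothetical DAC write           */
    extern int32_t adc_read_uv(void);         /* hypothetical calibrated readback */

    /* Nudge the DAC code until the measured output is within tolerance_uv of the
       requested voltage, or give up after a fixed number of iterations. */
    void set_output_uv(int32_t target_uv, uint32_t start_code, int32_t tolerance_uv)
    {
        uint32_t code = start_code;

        for (int i = 0; i < 32; i++) {
            dac_set(code);
            /* (allow the output and filter to settle here before measuring) */
            int32_t error_uv = adc_read_uv() - target_uv;

            if (error_uv >= -tolerance_uv && error_uv <= tolerance_uv)
                return;                       /* close enough - done              */

            if (error_uv > 0)                 /* output too high: step down one   */
                code--;
            else                              /* output too low: step up one      */
                code++;
        }
    }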