Why do we create modern Desktop GUI apps using HTML/CSS/JavaScript?

This was a question on Quora, and I thought I had more to say than would fit in a reply, so here goes: a blog article instead!

I have been involved with software development for nearly 30 years now. I have done front-end and back-end work, and I am responsible for the overall architecture and product strategy of a very large-scale application which spans both. C++ is my background and, as you might imagine, after 30 years I have an opinion here.

Let’s start with the obvious: there has been a meteoric rise in developing front-end UIs using HTML/CSS. This is not a fad, it’s a real thing, and it’s everywhere, from desktop apps like Slack, Twitter, Facebook and many others, to mobile apps on Android and iOS, game UIs, business applications, consumer applications; the list is basically endless. Even the tools developers use to create such UIs are themselves written in HTML/CSS and deployed using things like Electron. Visual Studio Code, Atom and many other dev tools fall into this category.

If you take a tool like Visual Studio Code, it is essentially a browser (Chromium) plus an embedded Node.js runtime, with the HTML/CSS files all bundled into a deployable desktop binary. Because it is written in HTML/CSS targeting a browser, and because that browser (or variations of the same) generally targets all notable platforms (Windows, Mac, Linux, Android, iOS etc.), desktop apps written this way are pretty much 100% cross-platform for free. What is not to like…

In answer to the opening question, it really has bugger all to do with speed. In fact, for UI work in particular, JavaScript in a browser will outperform your average native .NET-developed UI of comparable complexity, by a very long way, simply because the browser implementations have been so highly optimised over long and sustained periods of time (the notable exception to that claim being IE, of course).

The problem with developing UIs using HTML/CSS is that it is complicated, really complicated, to do well. The root of the problem is that browsers, and the HTML/CSS specifications, were never conceived as a desktop UI toolkit; it’s just that demand has driven the browsers to evolve that way.

The browser standards have recognised this and have made good headway with different layout modes (via the CSS display property) optimised for different layout types. The classic natural document layout (normal flow) is what HTML was designed for, but through continued evolution, things like flexbox and grid have started to pave the way for far better separation of concerns between layout and design, and facilitate layout schemes far more suited to modern desktop-app-style layouts.

A perennial problem with desktop-style UIs in a browser has been content height, because of the way HTML rendering/layout works in normal flow. Developers, because they can, have found many workarounds for this, using JavaScript to fudge the layouts, but that leads to poor-quality, inflexible and unmaintainable code, and a lot of it too. Then, to make things easier, dev teams buy into a framework, like Angular, Vue.js and those sorts of things. They are impressive and helpful, but they are even worse when it comes to obsolescence: build your application in Angular 1.x, and the only realistic way of moving to Angular 2.x is to rewrite (at least substantially) your current codebase. Not because any framework is bad, they are all very good, but because the browser and the thinking change more quickly than most commercial application projects can accommodate.

And this leads to the next big problem: these now large and expensive-to-create codebases are locked into the design/implementation approach of the day. So even though the browsers have moved on significantly, and the newer versions of the frameworks have much better things that would help, most codebases are essentially stuck at a point in time, and the only way out is a complete rewrite. For any significant commercial product that is simply not going to be an option; it is just too complicated to bring your customers on that journey.

That leaves evolution, that is, taking advantage of new browser features as you add new stuff. That works, but you very quickly end up with code that “no one wants to touch”. This is painful, expensive and basically unavoidable, because today at least, any initiative to modernise and get back on track will take longer than it will take for the browser to evolve again.

Going back to native desktop apps then: perhaps that is a better long-term investment. Ten years ago, I would have agreed with that sentiment. After all, when it comes to desktops there are really only two players, Microsoft Windows and Apple OSX, so why not just target those, ideally with some shared code, and be done with it?

Just for completeness, I should mention Linux. It is the best computing platform to happen to the world and should be admired in every way, except one: the desktop/UI sucks. It’s awful in almost every way; you are essentially targeting X11, which dates back to the 1980s, and it shows. It works, and it brings a couple of quirky features to the table which are quite nice, but in essence, if you love Linux, just forget targeting GUI desktop stuff; you need a graphics screen, a browser (for a real UI) and a terminal window, and that’s it. There is the Wayland stuff, but it’s far too early; it’s also disjointed, barely supported, and still grappling with compatibility with legacy stuff…

OK, so back to Windows and OSX. I am probably more scathing here than I ought to be, but it is easy to be, to be honest. The bottom line is this: Microsoft want desktop supremacy, so they have, over a sustained period of time, deployed an “enterprise lock-in” strategy. Having underpinned their stronghold with Visual Basic in the “enterprise in-house dev” arena back in the day, they embarked on their .NET journey. The .NET technology itself is not half bad, but the real tragedy is the way they closed off the desktop UI developer community to all but their own .NET tech stack, ensuring that every new OS/shell feature added to Windows was only materially exposed to developers through the proprietary .NET stack. So if you were working in Win32 (the API of the day at the time), or if you wanted to write cross-platform desktop apps, you were basically plain out of luck. Microsoft paid marketing lip service, of course; they supported multiple programming languages, but they favoured and ultimately championed C#, and that was essentially your lock-in. You want to make a slick, modern Windows desktop application? You can, but it’s a closed shop: you need C#/.NET. And for native Windows, that’s still largely true, even today in 2022.

I have experience with a largish-scale .NET application, and it is a horrible thing to maintain: it’s slow, clunky, buggy as hell, many components are now out of date, and we are locked into that point in time. That’s what happens. So really no better than the situation with HTML/JavaScript at all; in fact, I would argue it’s a lot worse in the longer term.

What about OSX? Absolutely great desktop UI, the best in the business IMHO from a user perspective (at least until very recently). So you can choose to be an “elite” and just build desktop apps for OSX, right? What you find here, though, is that it is even worse than Microsoft, if that’s even possible. Apple built their UI on a framework that is essentially only accessible (for all practical purposes) via their own Objective-C language, and my goodness, what a terrible thing that is. If Apple ever wonder why, at the height of their iMac/Intel popularity, their desktop never took more than a few percent of the global market, they should ask the developers who looked at their development environment and said, “no thanks”. I would put that down entirely to their very own version of an “elite developer tie-in strategy”; looking at Microsoft, you would think no one could do worse in this regard, but Apple did, and continue to do so today. For OSX, there is a more modern choice today, called Swift. It is another language developed by Apple and follows Microsoft’s .NET blueprint. Swift, in use, is really just a friendlier version of Objective-C that dispenses with C-style pointers and looks a bit like a JavaScript/C# mash-up. I expect the technology behind it is good, but it’s too tied to Apple, and no one apart from Apple die-hards is interested, or cares, as best I can tell. Most desktop applications that target the Mac today tend to be HTML/CSS/JavaScript Electron-type apps for that very reason.

Developing desktop apps in 2022 is essentially an HTML/CSS/JavaScript endeavour, and at the rate the open browser standards are evolving, I find it difficult to see why that would change any time soon. The performance of a well-written UI in a modern browser matches or beats a native desktop application in speed, usability and presentation for most usual use cases. Menus and forms *could* have a slight advantage in native apps over their browser counterparts, but there is not much in it at all, and browsers are getting much better in this regard. And when you start to mix in rich media like sound, video and 2D/3D graphics, suddenly the native UIs of the day, even from Microsoft and Apple, are no match at all for what a modern browser has to offer.

When it comes to developing modern UIs in HTML/CSS, I am both excited for the future and frustrated by the sheer complexity involved today. The complexity itself is not the problem, but the codebases it ultimately leads to are a big and very expensive long-term problem. If I could wave a magic wand, I would create an open working group, with the influence of the W3C behind me, to create a mandatory web standard for browsers that defines both a subset (to simplify, and to create an *appropriate* desktop security model) and an extension of CSS/HTML specifically optimised for marking up and implementing desktop applications, and have that built into modern browsers as standard. The goal would be to simplify the creation of desktop application UIs and open the web platform up to people who currently won’t go near HTML/CSS with a bargepole. This would move personal computing on a long way, IMHO.

If this were done well, I would expect to see a serious migration of desktop development over to browser development, much faster than we are seeing now, with tooling/runtimes/sandboxes like Electron and others as the new kids on the block.

Even better would be to see Microsoft, Apple (desktop and mobile), Android and Linux adopt such a standard and get behind it, moving away from the proprietary tie-in developer environments they currently impose on their users. This could be a serious desktop strategy for Linux too, and that right there is probably the truth of why this last part will never happen. Could you imagine a future where desktop applications were built like this and were, in every sense of the word, portable between operating systems, with the experience being identical on all platforms? I expect Microsoft and Apple would see this as a very bad idea… that is, for as long as selling their operating system/hardware is an integral part of their go-to-market strategy.

What do you think?

This content is published under the Attribution-Noncommercial-Share Alike 3.0 Unported license.

C++ Multiple Thread Concurrency with data in Classes 2

Following on from my previous article, here is the second approach, which uses a mutex to protect the data within the class from multiple threads. One of the main things you want to do when dealing with threading and concurrency is to encapsulate and localise the locking in one place, so the details of locking are completely hidden from the user of the class.

The rules to follow for this pattern are very simple…

  • The data variables must be private within the class
  • The mutex that protects the data must also be private to the class
  • You should never return references (even const ones) to your data from member functions, which means you must return copies of the data you need to access. This is fine for basic types, but for strings and more complex types it will mean copying data, or using shared pointers. I will cover both examples.

So, let’s start by making a basic class like the one in the previous article. The goal of the class is to contain a list of config params (key/value pairs) and to provide a 100% thread-safe interface to read/set values in this data set.

class config_data_t
{
   std::map<std::string, std::string> _config_vals;
   mutable std::mutex _config_vals_mtx; // mutable so the const getter can lock it

public:
   std::string get_val(const std::string& name) const
   { 
      // Lock our data, prevent any other thread while we are here
      std::lock_guard<std::mutex> locker(_config_vals_mtx);

      auto v = _config_vals.find(name);
      if(v != _config_vals.end())
         return v->second;
      return ""; 
   }

   void set_val(const std::string& name, const std::string& val)
   {
      // Lock our data, prevent any other thread while we are here
      std::lock_guard<std::mutex> locker(_config_vals_mtx);

      _config_vals[name] = val; 
   }
};

This should be self-explanatory: the getter and setter functions acquire an exclusive lock before touching the _config_vals private member. The implementation is 100% thread-safe because only one thread can operate on the _config_vals data at any one time.

You will note that the get_val() function returns a std::string by value; the string copy happens at the return statement.

Now let’s take a look at what happens if the above class were working with larger values. For this example, I have replaced the std::string value with a blob_t, which could, for example, be any data size from 60K to 10MB. The following modified class uses this new type.

class config_data_t
{
   using blob_t = std::vector<char>;
   std::map<std::string, blob_t> _config_vals;
   mutable std::mutex _config_vals_mtx;

public:
   blob_t get_val(const std::string& name) const
   { 
      // Lock our data, prevent any other thread while we are here
      std::lock_guard<std::mutex> locker(_config_vals_mtx);

      auto v = _config_vals.find(name);
      if(v != _config_vals.end())
         return v->second;
      return {}; // an empty blob
   }

   void set_val(const std::string& name, const blob_t& val)
   {
      // Lock our data, prevent any other thread while we are here
      std::lock_guard<std::mutex> locker(_config_vals_mtx);

      _config_vals[name] = val; 
   }
};

Both the setter and the getter are going to make copies of this data, and that would be a horrible implementation if the data sizes are large. In this case, I would want to implement this differently, and this is where the magic of std::shared_ptr comes into play. So let us rework the above and discuss it below.

class config_data_t
{
   using blob_t = std::vector<char>;
   using blob_ptr_t = std::shared_ptr<const blob_t>;
   std::map<std::string, blob_ptr_t> _config_vals;
   mutable std::mutex _config_vals_mtx;

public:
   blob_ptr_t get_val(const std::string& name) const
   { 
      // Lock our data, prevent any other thread while we are here
      std::lock_guard<std::mutex> locker(_config_vals_mtx);

      auto v = _config_vals.find(name);
      if(v != _config_vals.end())
         return v->second;
      return nullptr; // Or throw an exception if you prefer
   }

   void set_val(const std::string& name, blob_ptr_t&& val)
   {
      // Lock our data, prevent any other thread while we are here
      std::lock_guard<std::mutex> locker(_config_vals_mtx);

      // operator[] rather than emplace(), so setting an existing
      // key replaces the old pointer instead of silently doing nothing
      _config_vals[name] = std::move(val); 
   }
};

The change is quite subtle, but the overall performance will be much better, and it is still 100% thread-safe; essentially, through the interface, we are aiming to eliminate data copies. This uses features introduced in C++11, so please be aware of that.

OK, let’s start with the data storage: we now have a map of names to shared pointers, where each shared pointer points to (and owns) a std::vector containing some arbitrarily large amount of data. First, look at get_val(), which now returns a shared pointer to one of the data items in the map. We are still doing a copy at the return statement, but all we are copying is the pointer, not the data it points to. So once you call get_val(), the same block of data is pointed to by both the shared pointer returned to you AND the pointer held in the _config_vals map. The data returned to you must be treated as a const value; in other words, you must not modify the data in the vector that your pointer points to, and this is critical. If you need to change a value, you must call set_val() instead.

When you call set_val(), you pass in a pointer to a data block that you created. We pass the shared_ptr in by rvalue reference, which allows the class to take ownership of the data block you have already created, removing the need for a data copy.

Now let us suppose the value we are setting already has another value set, and that other value was previously obtained by you, so you are holding a shared_ptr to it. When you set a new value, the pointer in the map to the old data is destroyed, but because there is another pointer to that same data, the data itself remains until the last shared_ptr to it is destroyed.

So the principle here, like the class in the first article, is that you never change data that someone else can hold a reference to; you simply replace it each time you make a change. This is what the shared_ptr is for, and this pattern makes for a very thread-safe implementation.

This is very basic stuff, but you would be surprised how many times I have seen it done badly, even by me… It’s simple if you keep the rules simple, and it is unlikely to go wrong if your user can use the class without having to think about threading/concurrency issues at all.


C++ Multiple Thread Concurrency with data in Classes

I was recently looking at my Quora feed and someone asked the question:

If a class in C++ has a member function that executes on a separate thread, can that member function read from and write to data members of the class?

which I thought would be a good question to answer. However, it’s not likely to be a short answer, so I thought I would write a quick blog article on the subject.

I will start by saying that when dealing with multiple threads (often just referred to as concurrency), it’s actually a lot simpler than you might think, but it does take a certain mindset before it feels natural. Anyway, back to the answer to the question.

The short answer is: no, not safely, not without adding some form of concurrency protection. Things are a lot easier since C++11, but as with anything to do with concurrency, it all depends on your implementation details and the nature of the data members, their purpose, and the reason for needing multiple threads to read/write that data.

As a general (but not exclusively true) rule, one instance of a class containing dynamic data at runtime should not be operated on by more than one thread. Many threads can read without issue, but modifying is generally a bad idea.

If you need to modify the data of a class, give the thread that modifies the data exclusive access to that class instance. While this is not enforced or required by C++, adopting this strategy will most certainly help the humans reading your code understand the intent.

One of my most frequently used patterns is incredibly useful: it is (almost) lock-free, fast, basically foolproof from a usability point of view, and really simple to understand in a concurrent system. It is not the only solution, nor is it perfect for every situation, nor is it anything special, but it really does help in multi-threaded systems (servers especially).

Let’s say we have a class that contains some configuration information, for this purpose I will use a simple map of name/values

class config_data_t
{
   std::map<std::string, std::string> _config_vals;

public:
   const std::string& get_val(const std::string& name) const;
};

And this configuration is global: we load it from a file and hold it in memory so that all threads can access the configuration information quickly, without having to worry about concurrency. We also have a thread that watches the config file, and if it changes, we need to reload our configuration information safely, without locking or disrupting any other thread.

So the first thing we are going to need to do is to create a global instance of our config object and load it from the config file. To do this we are going to create a shared_ptr which is a global variable, make a new instance of our config class and load it with its data.

using config_data_ptr_t = std::shared_ptr<config_data_t>;
config_data_ptr_t _global_config_info;

void load_config()
{
   config_data_ptr_t config_info = std::make_shared<config_data_t>();

   // Do whatever you need to do in order to load config_info with data

   // Update our global variable; atomic_store makes replacing the global
   // pointer safe against concurrent readers (pre-C++20 shared_ptr API)
   std::atomic_store(&_global_config_info, config_info);
}

OK, in the above code we have a global shared pointer, _global_config_info. In load_config() we make a new instance of a config_data_t object held by a local shared pointer (config_info), populate it with data, and finally assign our config_info pointer to the global _global_config_info pointer. At that point, within the scope of load_config(), both config_info and _global_config_info point to the same instance of the config class; when load_config() ends, config_info goes out of scope, leaving the config info pointed to by the _global_config_info shared pointer.

Within our program we want any thread to be able to access our configuration information, so we provide a global getter function to make this easy and intuitive.

config_data_ptr_t config() 
{
    // atomic_load pairs with the atomic_store in load_config()
    return std::atomic_load(&_global_config_info);
}

This is a very simple function, but this is where half of the magic happens. We return a new shared pointer pointing at the same instance of the config_data_t class as _global_config_info. The shared pointer copy is very fast, and the reference count it bumps is thread-safe, typically implemented under the hood using the CPU’s interlocked atomic operations. (One caveat: the reference count is atomic, but reading the global pointer variable while another thread replaces it is not, which is why both sides should go through std::atomic_load/std::atomic_store, or std::atomic<std::shared_ptr> in C++20.) So, once you have the pointer, you may access configuration values as required, from as many threads as you like, without fear of any concurrency issues.

const config_data_ptr_t cfg = config();

const std::string& val = cfg->get_val("some.config.val");

Now, I said above that this is “half” the magic, because there is a rule you need to follow: you must never, under any circumstances, modify the values held in the global config object; if you do, your thread(s) will run into concurrency issues. So if you load it and use it, no problem. But we said we also need to detect changes to the config file and load those changes when detected! Well, that’s the other half of the magic, because we are going to take advantage of shared_ptr behaviour and follow one simple rule…

We will never change the config values in a config object, if we detect a change in the configuration file we will simply re-load it.

if(has_config_file_changed())
{
   load_config();
}

At the point we call load_config(), other threads might hold a pointer to the current config. When we load the config into a new instance of the config_data_t class and assign its pointer to the global pointer, the second half of the magic occurs. All existing threads that have a config pointer still point at the old (pre-change) instance of the config object, while the global pointer is updated to point at the freshly loaded one. As each thread finishes with the config, its shared_ptr goes out of scope, and when the last shared_ptr to the old config object goes, the old object is freed from memory. Subsequent threads that call config() get a pointer to the freshly loaded config object.

So this is the basic principle, and it works really well. I bet there is even a known “pattern” name for it (it closely resembles copy-on-write, in the spirit of read-copy-update). Also, there are two unanswered questions that the above raises.

What if I want my program to be able to modify configuration changes on the fly?

Absolutely; this is a common requirement, and it’s easy. You simply adopt the same principle: you never write changes to the config object pointed to by the global variable. Instead, you take a copy of the global object, make your changes, and replace the global object with your changed copy.


void set_config_values(const std::map<std::string, std::string>& vals)
{
  // Make a new empty config object
  config_data_ptr_t config_info = std::make_shared<config_data_t>();

  // Make a copy of the global config
  *config_info = *config();

  // Modify config items as required (this assumes config_data_t exposes
  // a set_val() member; _config_vals itself is private to the class)
  for(const auto& v : vals)
    config_info->set_val(v.first, v.second);

  // Update our global variable; as in load_config(), atomic_store keeps
  // the pointer swap safe against concurrent readers
  std::atomic_store(&_global_config_info, config_info);
}

Surely copying a whole config object is really expensive if you just want to change a single value?

Yes and no, it depends on the nature of your data. If the configuration data is huge, then yes, this would not be the right pattern, but in my experience, most config data has some fundamental characteristics.

a) the configuration does not change much.
b) the configuration is generally quite small from a size and a number of items point of view.

As I said, this pattern is not for all situations; generally, the copy pattern is a trade-off between simplicity and efficiency. What I hope this example highlights, though, is that class members are not in any way protected from concurrency issues; you need to adopt a pattern/strategy that gives you the best results for the job you are trying to do.

I will follow up with a second example, where the same class is used with locking protection, for cases where the copy-on-write pattern is not efficient and/or you make a lot of config changes dynamically, which is the other common pattern for this type of data.


Galaxy Dimension G3-520 Intruder Alarm Upgrade Reduces Wasted Energy

I recently replaced my current intruder alarm system with a Galaxy Dimension G3-520. These are quite advanced, professional intruder alarm systems made by Honeywell, and in hardware design and overall architecture they are most definitely at the professional end of the scale. On the second-hand market (aka eBay) they are not very expensive, and they are still relatively modern systems.

This is also a very “hackable” system, making it friendly for DIYers, IoT systems, experiments and the like. I plan to write some other, possibly more interesting articles about this system and its overall hackability, but I thought I would start with a simple one: the mains transformer, which actually sucks! Here is why…

One of my first observations when running the panel on my desk was how hot the mains transformer was running; I measured it at around 55 degrees Celsius. With the lid on, and with the panel installed in an enclosed space, this is not ideal. It was getting hot enough that I thought there was something wrong with the transformer. I bought a second panel and transformer, and lo and behold, that transformer was just as bad.

Looks and feels good quality, but performs quite badly

The transformer is an iron-core transformer of the typical construction most people are familiar with, and these types of transformers are known to have some inherent inefficiencies. The quality of the design and construction of the transformer plays a big part in how efficiently it performs.

There are two sources of loss in a transformer. The first, known as load loss, is a direct result of resistance in the copper wire that makes up the transformer windings; cheaper windings have higher resistance, and as you draw current through the wire, heat is generated by that resistance. The second is core loss, where the transformer consumes power itself, even when no load is present; here the heat is generated primarily by eddy currents and hysteresis in the iron core. The transformer in this panel suffers from both problems quite severely; it’s a poor-quality transformer.

I did not want to install the panel with this heat-generating, cheap-ass transformer, so I decided to fix it. I had two options: replace it with a switch-mode supply, or replace it with a better-quality passive transformer. Switch-mode PSUs are excellent for high power levels and small spaces, but they suffer power losses too. Moreover, there is the question of long-term reliability, which I feel would be a problem for this application; there is nothing so simple and reliable as a transformer.

So to fix this in my panel, I replaced the transformer with a toroidal type. Toroidal transformers are a better design and inherently have much lower core losses than a more traditional transformer. A good-quality toroidal transformer should also not suffer significant load losses if it is correctly rated.

VTX-146-080-218 Transformer 80VA 18VAC x 2

The panel is rated for 1.2A and 2.5A on its power and aux circuits, and needs to charge a lead-acid battery too; there are three 1A fuses and one 1.5A fuse on the board. So the power consumption of the panel and the accessories it can power from Aux 1 and Aux 2 totals about 44 watts (3.7A × 12V), and then you need at least another 18W to charge the standby battery, bringing us to roughly 62 watts of power. To give me about 20% headroom, I chose a transformer rated at 80VA: an 18-0-18VAC secondary transformer made by a company called Vigortronix, part number VTX-146-080-218. You can get this from Farnell or, as I did, from an eBay seller called Spiratronics. It cost me £23.50 shipped and included the mounting kit needed for a simple installation.

Drill a 5mm hole and deburr. Make sure no metal filings are left inside

Fitting the new transformer was as simple as drilling a hole in the right place, making sure the transformer does not hit the two studs that fixed the old transformer to the chassis, and wiring it up. The wiring is dead simple; see the image below.

Wiring up before bolting down

On the primary side, you need the two windings in series for UK/EU mains and in parallel for the US. For the UK, connect the grey and purple wires together and insulate them, then connect the brown and blue wires to live (L) and neutral (N) respectively. On the secondary side, you need the two windings in parallel, so connect the black and orange wires into one of the AC terminals on the PCB, and the red and yellow wires into the other AC terminal.

Transformer installed and wired ready to go

The result: a transformer that does not get hot at all; it’s cold to the touch after a week of running the panel and some attached peripherals. I have not measured it precisely (I may well do that at some point as an academic exercise), but basic measurements suggest there was around 3W of power loss in the old transformer with barely any load. That would be around £5/year in wasted energy; not much, but not trivial either. The significant benefit for me is that the panel is no longer generating the kind of heat that can build up in a closed space. It’s comforting to know that energy is not being wasted, and overall it just feels better; the transformer is no longer “just good enough”, it’s now correctly specified and feels more professional.

This content is published under the Attribution-Noncommercial-Share Alike 3.0 Unported license.

Huawei 5G Security Ban – I mean the US-China Trade War!

Being someone who has a reasonably good grasp of technology, I found it very strange that Huawei is banned from selling 5G networking technology into the US market. The official line from the US government was “China could have back doors into these systems which would create a national security risk” – that’s a great way of scaring people in the general population, but it did not make sense to me from a technical standpoint.

In practice, it’s quite challenging to sneak stuff like that into a system and hide it away. Or, to look at it another way, it’s quite easy for the buyers of the equipment to apply more diligence in their inspection and security assessments to avoid such a risk. So I have always had my suspicions about the real reason.

So, turning on the cynical part of my brain, I started to think about the world stage for big Chinese tech companies, because they are beginning to gain ground globally, or at least are getting big enough to challenge US supremacy in the tech business. The USA is accustomed to being the only home of tech giants, but with the rise of companies like Huawei, Alibaba, and a few other rising stars, it is probably reasonable to think that the US might be getting a bit concerned about its global market stronghold. One aspect of US economic domination comes from its ability to stay ahead of the game in tech, with the glitz and glamour of the Silicon Valley showboat (the Hollywood of the tech industry) being the shining source of all innovation in tech – at least, that is what the marketing suggests.

Anyway, I read today that Donald Trump suggested in recent talks with China that the Huawei 5G issue could be part of a deal concerning the escalating US-China trade talks! And this comes after the US declared a severe national security risk if Huawei were allowed to sell their technology for use in 5G networks, and strongly encouraged its allies to follow suit and not use their equipment.

So what happened to the security risks then?

If you think about it, imagine Donald Trump had instead said: don’t buy Huawei 5G networking tech because Huawei is getting too big; they hold essential patents on some 5G technologies, which means they will get a bigger slice of the pie from the global 5G roll-out; and we don’t like the fact that Huawei is not an American company. With that positioning, I don’t think Donald Trump would have gained support, either domestically or internationally. So it seems the threat to national security is a useful device to gain popular political support for something that would otherwise look somewhat underhanded.

I am generally in agreement with the idea that the US (and other countries) should fight for their rights and position in the global market; there is nothing intrinsically wrong with that, and after all, I thought we were all aspiring to a free and fair global market. But there seems to be something sinister about misleading the world stage in pursuit of this agenda, and using your power and influence to force other countries to follow suit.

Now, in the midst of this, you have the likes of Google crippling the Android OS capabilities on Huawei mobile phones. Is this an opportunity for Google to enhance the market opportunity for its range of new Pixel smartphones, using the (ahem, security threat) directive from Donald Trump? There is that cynicism again! It is a dangerous game to be playing, and I would not underestimate China’s ability to go it alone if they wanted to. There is an unhealthy belief in the US psyche that China needs US tech – I am not sure that’s true, but I am pretty sure that Western tech needs China’s manufacturing capabilities and labor force.

The quality of tech coming out of China has been steadily getting better, and I am sure China is more than capable of building its own tech, every bit as good as what’s already available, if its commercial companies are supported and encouraged by a highly motivated government with a political point to make.

I don’t fully understand the politics or culture in China, but one thing is for sure: over the last 50 or so years, the West has benefitted hugely from the low labor and manufacturing costs in China; huge companies like Apple have flourished because of this very enabling economic model. But as a result of this Western consumption, a lot of the wealth created has been flowing towards the Far East, because we cannot get enough of the bargain-basement stuff that we all love to have.

China is a closed shop; it seems both complicated and prohibitive for the West to do business in China easily, at least in the way we are used to at home, so it does feel like very one-way traffic. But you can’t help thinking that we must have anticipated this time would come. I do believe it is a good thing that China is challenged on this point, but I am not sure an outright and aggressively instigated trade war is the best way to go about it; trade should always be mutually beneficial.

Having a challenger to US dominance in the tech area is a good thing as far as I am concerned; competition is essential, and I expect the US could quickly lose its stronghold here. A country with the size and economic scale of China, willing to make big bets on rising global tech companies, will create competition in a market where, to date, the US has enjoyed almost worldwide exclusivity.

It will be interesting to see how the US-China trade war plays out; I expect it will start to dominate the headlines and will have a real impact on a lot of people globally. But the new headlines seem a welcome change, giving us in the UK at least a much-needed distraction from Brexit!

Are you a Software Developer or Software Engineer?

I recently answered this question on Quora and thought this would make a good blog article, I think the question is quite interesting for anyone who writes software for a living.

Over time, software developers have become specialist language developers; in other words, you get .NET developers, C++ developers, Java developers, etc., and there are several reasons for this. It seems like every week there is a new language that promises to solve all the problems of the languages that came before, or a new design pattern emerges because of a new capability in a given language. On top of that, languages have become so complex that a fair degree of specialization is required to make good use of what is available.

The problem with this programming-language specialization is that the focus tends to shift from the business to the language and technology, and this is what has happened to a lot of traditional software developers. For example, .NET developers want to use the latest .NET feature, or C++ developers feel they must use the newest coroutine capability in C++20. Suppose I need to implement a function that takes a file and transforms it into another file. Using std::filesystem instead of the POSIX filesystem functions may well be exciting for me as a C++ developer getting to use some of the new C++17 features for the first time, but the business value of my output is no different. In truth, the business value I deliver would be lower, because it will take more of my time to figure out that new language feature.
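To make that file-transforming example concrete, here is a minimal sketch of the kind of function described. The function name and the transform itself (upper-casing the text) are my own invention for illustration; the point is that the C++17 std::filesystem calls replace what a pre-C++17 version would have done with POSIX stat() and mkdir():

```cpp
#include <cctype>
#include <filesystem>
#include <fstream>
#include <string>

namespace fs = std::filesystem;

// Transform one file into another: read the input line by line,
// upper-case it, and write the result to the output path.
bool transform_file(const fs::path& in, const fs::path& out) {
    if (!fs::exists(in))                            // POSIX: stat()
        return false;
    if (!out.parent_path().empty())
        fs::create_directories(out.parent_path());  // POSIX: mkdir() loop
    std::ifstream src(in);
    std::ofstream dst(out);
    std::string line;
    while (std::getline(src, line)) {
        for (char& c : line)
            c = static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
        dst << line << '\n';
    }
    return true;
}
```

Whether written this way or with the POSIX calls, the function does the same job for the business, which is exactly the point being made above.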

Software developers tend to focus on their specialization, “I am a C++ programmer” or “I am a .NET C# developer” is how someone might describe what they do.

Software engineers are a different type of developer. The language is really just a tool; an engineer might have tools of choice, but cares more (a lot more) about the needs of the business and the problems being solved than about using the latest language feature and the newest compiler to solve them. The output of a Software Engineer is always more aligned with business value delivered than that of a Software Developer (using the above definitions).

In my experience, quite a large portion of software developers get bored of creating the same old business-value stuff, and do not feel “fulfilled”, either personally or from a resume/career-progression point of view, if the focus is not on using the latest compiler features, patterns or new things that will define their skill set. This is because they are more engaged with the technology and their specialization than with the value they are supposed to be creating. The second most common reason for software developers to change jobs (the first being pay, of course) is the opportunity to work on that cool new thing that excites them. If you fall into this category, then you are a software developer, and you will ultimately be a commodity to the organizations you work for.

Being an engineer is much more about being engaged and aligned with your company’s strategic aspirations and goals. If you find yourself believing in your company’s mission; you naturally interact with customers and immerse yourself in the problems your customers face daily; you understand the overall system architecture; and you are employed primarily to write software – then you are almost certainly a Software Engineer, the type that all good software companies aspire to hire.

A good way to think about it is this: if the product you work on were “your product” and you had to not only write it but support it, sell it and evangelize about it, these are the things you would do as a software engineer within a company. OK, you might not go out and do the selling directly, but you would certainly have a stake in the success of the sales activity.

Now, you cannot just change your title from Software Developer to Software Engineer and expect to get more job opportunities or better pay! If you are going to call yourself a Software Engineer, then you need to exhibit the characteristics of one; people who look for software engineers are experienced enough to recognize them, and to see through those who are only pretending/hoping to be engineers.

More and more companies are looking to hire Software Engineers and not Software Developers, so be honest with yourself and ask yourself the question:

Are you a Software Developer or Software Engineer?
