C++ Multiple Thread Concurrency with data in Classes 2

Following on from my previous article, here is the second approach which uses a mutex to protect the data within the class from multiple threads. One of the main things you want to do when dealing with threading and concurrency with data is to encapsulate and localize the locking in one place so the details of locking are completely hidden from the user of the class.

The rules to follow for this pattern are very simple…

  • The data variables must be private within the class
  • The mutex that protects the data must also be private to the class
  • Member functions must never return references (const or otherwise) to the protected data, which means you must return copies of the data you need to access. This is fine for basic types, but for strings and more complex types it means copying data or using shared pointers. I will cover both examples.

So, let’s start by making a basic class like the one in the previous article. The goal of the class is to hold a set of config params (key/value pairs) and to provide a 100% thread-safe interface to read and set values in this data set.

#include <map>
#include <mutex>
#include <string>

class config_data_t
{
   std::map<std::string, std::string> _config_vals;
   // mutable so the mutex can be locked inside const member functions like get_val()
   mutable std::mutex _config_vals_mtx;

public:
   std::string get_val(const std::string& name) const
   { 
      // Lock our data, prevent any other thread while we are here
      std::lock_guard<std::mutex> locker(_config_vals_mtx);

      auto v = _config_vals.find(name);
      if(v != _config_vals.end())
         return v->second;
      return ""; 
   }

   void set_val(const std::string& name, const std::string& val)
   {
      // Lock our data, prevent any other thread while we are here
      std::lock_guard<std::mutex> locker(_config_vals_mtx);

      _config_vals[name] = val; 
   }
};

This should be self-explanatory: the getter and setter functions each acquire an exclusive lock before touching the _config_vals private member. The implementation is 100% thread-safe because only one thread can operate on the _config_vals data at any one time. Note that the mutex is declared mutable so that it can be locked inside the const get_val() function.

You will note that the get_val() function returns a std::string by value; the string copy happens on the return statement inside get_val().
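
To make the usage concrete, here is a minimal sketch (my own example, not from the original article) of several threads hammering the same instance; note that the callers never see or think about the mutex, which is the whole point of the pattern.

#include <string>
#include <thread>
#include <vector>

// assumes the config_data_t class defined above is in scope

int main()
{
   config_data_t cfg;
   std::vector<std::thread> workers;

   // Four threads read and write the same instance concurrently; the class
   // serializes every access internally via its private mutex.
   for(int i = 0; i < 4; ++i)
   {
      workers.emplace_back([&cfg, i]
      {
         const std::string key = "worker." + std::to_string(i);
         for(int n = 0; n < 1000; ++n)
         {
            cfg.set_val(key, std::to_string(n));
            std::string v = cfg.get_val(key); // a copy is returned, safe to use freely
         }
      });
   }

   for(auto& t : workers)
      t.join();
}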

Now let’s take a look at what happens if the above class were working with larger data values. For this example I have replaced the std::string value with a blob_t, which could be anything from 60 KB to 10 MB of data. The following modified class uses this new type:

#include <map>
#include <mutex>
#include <string>
#include <vector>

class config_data_t
{
   using blob_t = std::vector<char>;
   std::map<std::string, blob_t> _config_vals;
   mutable std::mutex _config_vals_mtx;

public:
   blob_t get_val(const std::string& name) const
   { 
      // Lock our data, prevent any other thread while we are here
      std::lock_guard<std::mutex> locker(_config_vals_mtx);

      auto v = _config_vals.find(name);
      if(v != _config_vals.end())
         return v->second;
      return blob_t{}; // not found - return an empty blob
   }

   void set_val(const std::string& name, const blob_t& val)
   {
      // Lock our data, prevent any other thread while we are here
      std::lock_guard<std::mutex> locker(_config_vals_mtx);

      _config_vals[name] = val; 
   }
};

Both the setter and getter are going to make copies of this data, and that would be a horrible implementation if the data sizes are large. In this case, I would want to implement this differently, and this is where the magic of std::shared_ptr comes into play, so let us re-work the above and discuss it below.

#include <map>
#include <memory>
#include <mutex>
#include <string>
#include <vector>

class config_data_t
{
   using blob_t = std::vector<char>;
   using blob_ptr_t = std::shared_ptr<const blob_t>;
   std::map<std::string, blob_ptr_t> _config_vals;
   mutable std::mutex _config_vals_mtx;

public:
   blob_ptr_t get_val(const std::string& name) const
   { 
      // Lock our data, prevent any other thread while we are here
      std::lock_guard<std::mutex> locker(_config_vals_mtx);

      auto v = _config_vals.find(name);
      if(v != _config_vals.end())
         return v->second;
      return nullptr; // Or throw an exception if you prefer
   }

   void set_val(const std::string& name, blob_ptr_t&& val)
   {
      // Lock our data, prevent any other thread while we are here
      std::lock_guard<std::mutex> locker(_config_vals_mtx);

      // operator[] (rather than emplace) so an existing value is replaced
      _config_vals[name] = std::move(val); 
   }
};

Now the change is quite subtle, but the overall performance will be much better while remaining 100% thread-safe; essentially, through this interface we are aiming to eliminate data copies. Note that this uses features introduced in C++11.

Ok, let’s start with the data storage: we now have a map of names to shared pointers, where each shared pointer points to (and owns) a std::vector containing some arbitrarily large block of data. First, look at get_val(), which now returns a shared pointer to one of the data items in the map. We are still doing a copy on the return statement, but all we are copying is the pointer, not the data it points to. So once you call get_val(), the same block of data is pointed to by both the shared pointer that was returned to you AND the pointer held in the _config_vals map. The data that is returned to you must be treated as const; in other words, you must not modify the data in the vector that your pointer points to – this is critical (which is why blob_ptr_t points to a const blob_t). If you need to change a value you must call set_val() instead.

When you call set_val() you pass in a pointer to a data block that you created; the shared_ptr is passed by rvalue reference, which allows the class to take ownership of the data block you have already created, removing the need for a data copy.

Now let us suppose the key we are setting already has a value, and that the old value was previously obtained by you, so you are holding a shared_ptr to it. When you set a new value, the pointer in the map to the old data is replaced, but because there is still another pointer to that same data, the data itself remains alive until the last shared_ptr to it is destroyed.
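
To make the ownership behaviour concrete, here is a small sketch (my own example, assuming the shared_ptr version of config_data_t above): a reader keeps its old blob alive even after a writer has replaced the value in the map.

#include <memory>
#include <string>
#include <vector>

// assumes the shared_ptr version of config_data_t above is in scope

int main()
{
   config_data_t cfg;

   // Writer creates a ~1MB blob and hands ownership to the class (no data copy).
   cfg.set_val("big.value", std::make_shared<const std::vector<char>>(1000000, 'a'));

   // Reader grabs a pointer to the current blob - only the pointer is copied.
   auto reader_copy = cfg.get_val("big.value");

   // Writer replaces the value with a new ~2MB blob.
   cfg.set_val("big.value", std::make_shared<const std::vector<char>>(2000000, 'b'));

   // reader_copy still points at the original blob; that memory is freed only
   // when the last shared_ptr to it (reader_copy here) goes out of scope.
   // A fresh call to cfg.get_val("big.value") now returns the new blob.
}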

So the principle here, like the class in the first article, is that you never change data that someone else may hold a reference to; you simply replace it each time you make a change. That is what the shared_ptr is for, and this pattern makes for a very thread-safe implementation.

This is very basic stuff, but you will be surprised how many times I have seen this done badly, even by me… it’s simple if you keep the rules simple and it is unlikely to go wrong if your user can use the class without having to think about threading/concurrency issues at all.

C++ Multiple Thread Concurrency with data in Classes

I was recently looking at my Quora feed and someone asked the question:

If a class in C++ has a member function that executes on a separate thread, can that member function read from and write to data members of the class?

which I thought would be a good question to answer. However, it’s not likely to be a short answer so I thought I would write a quick blog article on the subject.

I will start by saying that when dealing with multiple threads (often just referred to as concurrency), it’s actually a lot simpler than you might think, but it does take a certain mindset before it feels natural. Anyway, back to the answer to the question.

The short answer to the question is: no, not safely, not without adding some form of concurrency protection. Things are a lot easier since C++11, but like anything to do with concurrency, it all depends on your implementation details and the nature of the data members, their purpose, and the reason for needing multiple threads to read/write that data.

As a general (but not absolute) rule, one instance of a class that contains dynamic data at runtime should not be operated on by more than one thread at a time. Many threads can read without issue, but modifying the data while other threads are using it is generally a bad idea.

If you need to modify the data of a class, give the thread that is modifying the data exclusive access to that class instance. While this is not enforced or required by C++, adopting this strategy will most certainly help humans reading your code understand the intent.

One of my most frequently used patterns is incredibly useful: it is (almost) lock-free, fast, basically foolproof from a usability point of view, and really simple to understand in a concurrent system. It is not the only solution, nor is it perfect for every situation, nor is it anything special, but it really does help in multi-threaded systems (servers especially).

Let’s say we have a class that contains some configuration information; for this purpose I will use a simple map of names to values:

#include <map>
#include <string>

class config_data_t
{
   std::map<std::string, std::string> _config_vals;

public:
   const std::string& get_val(const std::string& name) const;
};

This configuration is global: we load it from a file and hold it in memory so that all threads can access the configuration information quickly, without having to worry about concurrency. AND we have a thread that watches the config file; if the file changes, we need to reload our configuration information safely, without locking or disrupting any other thread.

So the first thing we need to do is create a global instance of our config object and load it from the config file. To do this we are going to create a global shared_ptr variable, make a new instance of our config class, and load it with its data.

#include <memory>

using config_data_ptr_t = std::shared_ptr<config_data_t>;
config_data_ptr_t _global_config_info;

void load_config()
{
   config_data_ptr_t config_info = std::make_shared<config_data_t>();

   // Do whatever you need to do in order to load config_info with its data

   // Update our global variable
   _global_config_info = config_info;
}

Ok, in the above code we have a global shared pointer variable, _global_config_info. In load_config() we make a new instance of a config_data_t object owned by a local shared pointer (config_info), populate it with data, and finally assign config_info to our global _global_config_info pointer. At this point, within the scope of load_config(), both config_info and _global_config_info point to the same instance of the config class; when load_config() ends, config_info goes out of scope, leaving the config data pointed to by the _global_config_info shared pointer.

Within our program we want any thread to be able to access our configuration information, so we provide a global getter function to make this easy and intuitive.

const config_data_ptr_t config() 
{
    return _global_config_info;
}

This is a very simple function, but this is where half of the magic happens. What we are doing is returning a new shared pointer that points to the same instance of the config_data_t class as the _global_config_info pointer. The shared pointer copy is very fast and thread-safe: the reference count is updated atomically, often implemented under the hood in a lock-free way using the CPU’s interlocked atomic operations. (Strictly speaking, because one thread may reassign _global_config_info while other threads are copying it, the reads and writes of the global pointer itself should go through std::atomic_load/std::atomic_store, or C++20’s std::atomic<std::shared_ptr>, to be formally free of data races.) So, once you have the pointer you may access configuration values as required, and this can be done safely from as many threads as you like, without fear of any concurrency issues.

const config_data_ptr_t cfg = config();

const std::string& val = cfg->get_val("some.config.val");

Now I said above that this is “half” the magic, because there is a rule you need to follow… you must never, under any circumstances, modify the values held in the global config object; if you do, your thread(s) will run into concurrency issues. So if you load it and use it, no problem. But we said that we also need to be able to detect changes to the config file and load those changes when detected! Well, that’s the other half of the magic, because we are going to take advantage of the shared_ptr behavior and we are going to follow one simple rule…

We will never change the config values in a config object; if we detect a change in the configuration file, we will simply re-load it.

if(has_config_file_changed())
{
   load_config();
}

At the point we call load_config(), other threads might hold a pointer to the current config. When we load the config into a new instance of the config_data_t class and then assign this pointer to the global pointer, the second half of the magic occurs. All existing threads that hold a config pointer are still pointing at the old (pre-change) instance of the config object, while the global pointer now points to the freshly loaded config object. As each thread finishes with the config, its shared_ptr goes out of scope; when the last shared_ptr to the old config object goes out of scope, the old config object is freed from memory. Subsequent threads that call config() get a pointer to the freshly loaded config object.
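
In practice the reload is usually driven by a small background thread. The sketch below is only illustrative and assumes the has_config_file_changed() and load_config() functions from above; the stop flag is my own addition.

#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> _keep_watching{true};

// Background watcher: re-loads the whole config whenever the file changes.
// Threads that already hold a config pointer keep using the old object; new
// calls to config() pick up the freshly loaded one.
void config_watcher_thread()
{
   while(_keep_watching)
   {
      if(has_config_file_changed())
         load_config();

      std::this_thread::sleep_for(std::chrono::seconds(1));
   }
}

// During start-up:   std::thread watcher(config_watcher_thread);
// During shutdown:   _keep_watching = false; watcher.join();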

So this is the basic principle and it works really well; I bet there is even a known “pattern” for it, but if there is, I have no idea what it’s called. There are, however, two unanswered questions that the above raises.

What if I want my program to be able to make configuration changes on the fly?

Absolutely, this is a common requirement and it’s easy: you adopt the same principle. You never write changes to the config object that the global variable points to; instead, you take a copy of the global object, make your changes, and replace the global object with your changed copy.


void set_config_values(const std::map<std::string, std::string>& vals)
{
   // Make a new empty config object
   config_data_ptr_t config_info = std::make_shared<config_data_t>();

   // Make a copy of the global config
   *config_info = *config();

   // Modify config items as required
   // (note: this touches the private _config_vals member, so set_config_values
   //  must be a friend of config_data_t, or config_data_t needs a set_val() member)
   for(const auto& v : vals)
      config_info->_config_vals[v.first] = v.second;

   // Update our global variable
   _global_config_info = config_info;
}
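
Using it is then trivial from any thread; the config keys below are just made-up examples. Threads that already hold a config pointer carry on with the old object, and anyone who calls config() afterwards sees the updated values.

// Apply some changes (hypothetical key names, purely for illustration)
set_config_values({{"log.level", "debug"},
                   {"cache.size", "256"}});

// New readers now see the updated values
const config_data_ptr_t cfg = config();
const std::string& level = cfg->get_val("log.level");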

Surely copying a whole config object is really expensive if you just want to change a single value?

Yes and no, it depends on the nature of your data. If the configuration data is huge, then yes, this would not be the right pattern, but in my experience most config data has two fundamental characteristics:

a) the configuration does not change much.
b) the configuration is generally quite small from a size and a number of items point of view.

As I said, this pattern is not for all situations; generally, the copy pattern is a trade-off between simplicity and efficiency. What I hope this example highlights, though, is that class members are not in any way protected from concurrency issues; you need to adopt a pattern/strategy that gives you the best results for the job you are trying to do.

I will follow up with a second example in which the same class is used with locking protection, for cases where the copy-on-write pattern is not efficient and/or you make a lot of config changes dynamically; that is the other common pattern for this type of data.

Are you a Software Developer or Software Engineer?

I recently answered this question on Quora and thought it would make a good blog article; I think the question is quite interesting for anyone who writes software for a living.

Over time, software developers have become specialist language developers; in other words, you get .NET developers, C++ developers, Java developers, etc., and there are several reasons for this. It seems like every week there is a new language that promises to solve all the problems of the languages that came before, or a new design pattern emerges because of a new capability in a given language. On top of that, languages have become so complex that a fair degree of specialization is required to make good use of what is available.

The problem with this programming-language specialization is that the focus tends to shift from the business to the language/technology, and this is what a lot of traditional software developers have become. For example, .NET developers want to use the latest .NET feature, or C++ developers feel they must use the new coroutine capability in C++20. Suppose I need to implement a function that takes a file and transforms it into another file: using std::filesystem instead of the POSIX filesystem functions may well be exciting for me as a C++ developer getting to use some of the new C++17 features for the first time, but the business value of my output is no different. In truth, the business value I deliver would be lower, because it will take more of my time while I figure out that new language feature.

Software developers tend to focus on their specialization, “I am a C++ programmer” or “I am a .NET C# developer” is how someone might describe what they do.

Software engineers are a different type of developer. The language is really just a tool; an engineer might have tools of choice, but cares more (a lot more) about the needs of the business and the problems being solved than about using the latest language feature and the newest compiler to solve the problem. The output of a Software Engineer is always more aligned with business value delivered than that of a Software Developer (using the above definitions).

In my experience, quite a large portion of software developers get bored of creating the same old business-value stuff, and are not “fulfilled” either personally or from a resume/career-progression point of view if the focus is not on using the latest compiler features, patterns, or new things that will define their skill set. This is because they are more engaged with the technology and the specialization domain than with the value they are supposed to be creating. The second most common reason for software developers to change jobs (the first being the pay, of course) is to take the opportunity to work on that new cool thing that will excite them. If you fall into this category, then you are a software developer, and you will ultimately be a commodity to the organizations you work for.

Being an engineer is much more about being engaged and aligned with your company’s strategic aspirations and goals. If you find yourself believing in your company’s mission, you naturally interact with customers and immerse yourself in the problems your customers face daily, you understand the overall system architecture, and you are employed primarily to write software, then you are almost certainly a Software Engineer – the type that all good software companies aspire to hire.

A good way to think about it is this. If the product you work on is “your product” and you have to not only write it but support it, sell it and evangelize about it, these are the things you would do as a software engineer within a company. Ok, you might not go out and do the selling directly, but you would certainly have a stake in the success of sales activity.

Now you cannot just change your title from Developer to Software Engineer and expect to get more job opportunities or better pay! If you are going to call yourself a Software Engineer, then you need to exhibit the characteristics of one; people who look for software engineers are experienced enough to recognize them, and to see through the ones that are only pretending/hoping to be an engineer.

More and more companies are looking to hire Software Engineers and not Software Developers. Be honest with yourself and ask the question:

Are you a Software Developer or Software Engineer?

Microsoft says – Don’t Use IE!

So I read this post and all the comments and think to myself – no one seems to be saying it!  So I will… (obviously)

For more years than I can remember, Microsoft has had a habit of pushing “the enterprise” down its own path, with an obvious (to me anyway) lock-in strategy. IE over the years has shown this up time and time again. I thought Chris Jackson’s comments were a little bit dismissive of Microsoft’s responsibilities here, and I wanted to state that the reason IE/Edge is still required by many customers is that Microsoft’s platform, especially Microsoft’s development environments, made use of IE-specific quirks, leaving vast swathes of enterprise-developed apps dependent on IE.

Even worse, so many ISVs have jumped on the “easier to develop” enterprise software platforms (starting with VB back in the day, right through to .NET and its kin today), building software for sale that organizations have purchased and gotten tied into. Be it ASP.NET, ActiveX or Silverlight (what a mess that was), these all leaned on the numerous browser quirks and non-standard, undocumented, esoteric behaviors in the Microsoft browsers. I think there was a time when Microsoft was trying to be the standard browser of choice, but it failed miserably at it. I do like Chris’s advice though, and as someone who does responsive web software development, I wish I did not have customers DEMAND we support IE11 because that’s their standard browser; it’s annoying and frustrating and not of our own making.

Three years ago we relegated development for IE to “best-endeavour” only; that means we will put reasonable effort into fixing anything obvious, but we have drawn the line at doing IE/Edge-specific workarounds/hacks for our software. That has sadly left some of our customers stuck with different browsers for different applications, but we do not accept that this is a problem of our making. We used to feel bad when our customers would tell us “well, you are not Microsoft, so fall in line” – not anymore!

Now before I start to sound like I am hating on Microsoft, I must make clear that in recent years I think Microsoft has done a remarkable job, a remarkable turn-around even.  Windows 10 is orders of magnitude better than any Microsoft OS before it, Edge is not terrible and mostly works, although it’s still quirky. And hats off, O365 is a winner – very nicely done team Microsoft. 

Dear Microsoft, if it were up to me…

  • You have the capability, the developers, and the financial resources (probably more than most other software companies in the world). Go and build a world-class standards-based browser; do for your browser what you already did for C++.
  • Or, hurry up and finish your Chromium-based browser and get shot of IE and Edge as soon as you can.
  • Go and help your customers remove their technical debt in relation to IE; it’s not entirely their fault, you created the environment, so help your customers fix it.

Why does C still exist, when C++ can do everything C can?

This was a question asked on Quora, and the top-voted answer was erring on the side of it being culture or personal preference. I don’t think the answer is culture or preference; there is an excellent reason why both C and C++ exist today. C++ is not a good alternative to C in some particular circumstances.

Many people suggest that C++ generates less efficient code; that’s not true unless you use the advanced features of the language. C++ is generally less popular for embedded systems such as microcontrollers because its code generation is far less predictable at the machine-code level, primarily due to compiler optimizations. The smaller and more resource-limited the target system, the more likely it is that C is the better and more comfortable choice for the developer. This is often the reason people suggest that C++ cannot replace C, and it is a very good reason indeed.

However, there’s another, even more fundamental reason that C remains a critical tool in our armory. Imagine you create your very own new CPU or computing architecture for which there is no C or C++ compiler available – what do you do? The only option you have is to write code in some form of assembly language, just as we did in the early ’80s when programming IBM PCs and DOS, before even C became mainstream (yes, there was a time when C was more obscure than x86 assembly!). Now imagine trying to implement a C++17 standards-compliant compiler and STL library in assembly language; that would be a daunting, almost unimaginable task for an organization of any size, right?

On the other hand, a C compiler and a standard C runtime library, while still not an insignificant effort, are a hell of a lot more achievable, even by a single developer. In truth, you would almost certainly want to write some form of assembler/linker first to make writing the C compiler simpler. Once you have a standards-compliant C compiler working well enough, a vast array of libraries and code written in C becomes available, and you build out from there. If your target platform did require a C++17 standards-compliant C++ compiler, you would write it in C.

The C language holds quite a unique position (possibly only comparable to early Pascal implementations) in our computer engineering toolbox: it is so low-level that you can almost visualize the assembly code it generates just by reading your C code, which is why it lends itself so well to embedded software development. Despite this, C remains high-level enough to facilitate building higher-level application logic and quite complex applications. Brand new C++ compilers would most likely be written in C, at least for early iterations; you can think of C as an ideal bootstrap for more significant and more comprehensive programs like an operating system or a C++ compiler/library.

In summary, C has its place, and it’s hard to see any good reason to create an alternative to C. It has stood the test of time, and its syntax is the founding father of the top modern-day languages (C++, C#, Java, JavaScript, and numerous others, even Verilog). The C language is not a problem that needs to be solved, and it does not need to be replaced either. C may be old hat now, but like oxygen it works quietly in the background, and in the world of software development we still need it.

So tell me team…. “How Long Will That Take?”

I was inspired to write this blog post in response to a post I came across today on LinkedIn about sizing software projects (link below). Sizing software projects is the thing that almost everyone gets wrong; it’s hard, almost impossible, to get an accurate estimate. Why is that?

Well, apart from scope creep, budget changes, and all the usual business variables mentioned by Patrick in his blog post, developers and product teams will never be consistent in their output, not even if you average it out over a long time, which is what Scrum and other agile methodologies suggest when trying to establish velocity. That simply does not work; it is a fallacy. Writing software is an iterative and creative process, so “how someone feels” will change output. I am not talking about how someone feels about the project or the work; I am talking about how someone feels about their cat dying, or their wife being pregnant, or political changes, or the weather, or the office banter, or how well or unwell they feel today. In fact, “life” guarantees this. So I am going to be a bit outrageous here and suggest an alternative way of thinking about this. Let us start by asking the most important question: “what is the point of estimating?” There are only two possible answers to that question…

1. You are going to undertake some work which you will charge your client for, so you need to know what to charge them.

The only possible way you can give your client a fixed price for work that is essentially a creative process is by substantially over-pricing the estimate, giving yourself lots of fat in the deal and the best opportunity of making a profit at the end of it. If you think you can ask a team of developers to tell you how long it is going to take so you can make a “fair” markup, you’re deluded. The best option you have in this scenario is to work backwards: you need to understand the client’s need at a high level, then establish the value your customer gets from the thing you would deliver, then put a price on it. You are looking for the best possible price the customer is willing to pay; you should not at this point be trying to establish “how much will it cost”, you should be asking the customer “how much are you prepared to pay”. Once you have a number, you can work with your developers, but instead of asking “how long will it take” you are asking “can it be done in this timeframe?”. That may seem a subtle difference, but it is actually huge, because in answering it your developers take “ownership” of the delivery commitment, and that is what you need to stand any chance of being successful. The risk you are taking is on your team, not the project; if your team succeeds then you and your business do, and if they fail, so do you.

2. Your organisation wants to know how long this new software thing is going to take and how much it will cost, so they can “budget” and control any cost overrun.

The reason to budget is that managers and finance people (and the people who own the actual money that gets spent) generally need to *know* how much it costs in order to feel like they are doing their job. This really comes from an age where output was quantifiable (manufacturing, for example), but creative IP output is much harder to quantify because there are so many variables outside of your control.

Think about this: you wake up one day with a great idea for a piece of software that will change the world; you believe it is going to make your fortune. You are so confident that you leave your day job, set up your business and, on your first day, sit down to start changing the world. What is the first thing you are going to do?

I am going to bet that you will NOT crack open Excel and start on a time estimate and a budget; instead, you will most likely start making your software. So you get on with it. Now project forwards: you have created your software and you start to sell it. Things go well, in fact so well that you have to hire a manager or two, and then you go and raise some funding to go global with your idea. Now something important happens. Instead of you spending your time making the thing you believe in, the people who invested money (which may well include yourself) want to protect that investment, so they put management controls in place. When the question “To get this thing to the next level, what do we need to do?” is asked of you, and you answer “We need to build X, Y and Z”, the dreaded question comes: “How long will that take?”, which roughly translates to “how much will that cost?”. This is because the people asking that question are trying to protect cash and de-risk the investment; they don’t believe in the thing you are building in the same way that you do. The thing is just a means to an end: profit (which, by the way, is a good thing).

Going back to that first day: suppose you had tried to budget and determined it was going to take you six months before you could sell your first one, and after six months you realise you are not even halfway there and have another 12 months to go. Would you stop? You would make sure that you could pay the bills and survive, of course, but if you did decide to stop, it would not be because of your budget; you would stop because, with hindsight, you no longer believed the idea was as good as you first thought. Otherwise you would plough on regardless, right?

So back to the boardroom then, and the “How long will it take?” question. Well, the answer to that question should be, “I have no idea – how important is it?” Because either it’s important and it has to get done, or it’s not that important.

You would be a little more committal than that, but you get the idea. Assume that an acceptable level of estimating effort is going to be 25% of the overall development effort (which has been my experience in software development). If you have a thing that needs to get done because it is strategically important for the business to flourish, then how long it takes is irrelevant: either it is strategically important and it gets done, or it is not. So if it “just has to get done”, why on earth are you trying to estimate how long it will take? Just get on with it and use that 25% for making the thing you are trying to make, just like you did in the first six months of your enterprise.

You need to ask the same questions about how important it is and what it is worth to the business. These are the questions that the people trying to de-risk do not want you to ask, because they will find them as difficult to answer as you will find the “how long will it take?” question. Of course, for trivial stuff like defects and general maintenance or tactical incremental development work this does not really apply, but for big projects of strategic importance the “how long will it take?” question is a nonsense question, because any answer you get will be either fictitious or grossly overestimated, and both of these are bad.

If you want to get something strategically important created, hire a great team and empower them to get on with it. If you are making them spend their time telling you how much it will cost to develop instead of developing it, then you are failing, not them. As a manager, entrepreneur, director or investor, hire software developers to do what they do best: make software. It is your job to take and manage the investment risk. If the team fails, then you either hired the wrong team or you did not manage the business well enough to sustain the effort required to make it happen; asking them for an estimate is just a way of getting a stick to beat and blame them with when things are not going well.

I have been managing (arguably very loosely) software development projects and a software business for the best part of 20 years, and I have learned a few things along the way. Perhaps more importantly, I have been doing this largely with my own money invested, so I think I know both sides of the “How long will it take?” question very well.

The article I responded to
https://www.linkedin.com/pulse/software-project-estimation-broken-patrick-austin