“Multi-Coring” and “Non-Blocking” instead of “Multi-Threading” – with a Script

 
Author: ‘No Bugs’ Hare
Job Title: Sarcastic Architect
Hobbies: Thinking Aloud, Arguing with Managers, Annoying HRs,
Calling a Spade a Spade, Keeping Tongue in Cheek
 
 



slide 40

Of course, as with ANY technology, (Re)Actors (and (Re)Actor-fest architectures) have their own limitations.

slide 41

Let’s expand our performance table (the one shown on a slide in Part II) with two new columns:

  • the chances of handling the respective approach correctly in a typical business-level app, and
  • the types of apps where the respective approach is usable for writing app-level code

Now, let’s take a closer look at it. As discussed above, BLOCKING Shared-Memory (the kind which uses mutexes at app-level to synchronize between threads) tends to fail badly BOTH in terms of performance AND in terms of correctness. As a result, I DO NOT see ANY room for mutexes in modern app-level programming – except when they’re VERY accidental to the task at hand.

Non-blocking shared memory (including rather exotic stuff such as non-blocking algorithms, memory fences, and RCU) is known to perform really well (at least within one NUMA node) – but the complexity of these things for any non-trivial task (especially IF we’re not going to accept non-fixable bugs in production) is usually SO high that it is not really feasible for the vast majority of business programs out there (nor is it really necessary). Still, it remains a very reasonable choice for absolutely-latency-critical apps such as High-Frequency Trading (HFT).
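To give a taste of what “non-blocking” means in practice, below is a minimal Python sketch of the classic CAS (compare-and-swap) retry loop. Caveat: Python has no hardware CAS for plain integers, so a tiny lock emulates the atomicity a real CAS instruction provides – the point here is the retry pattern (and how easy it is to get subtly wrong), not genuine lock-freedom:

```python
import threading

class CASCell:
    """A cell supporting compare-and-set. NOTE: Python has no hardware CAS
    for plain ints, so a lock emulates the atomicity of the real instruction;
    the interesting part is the retry loop below, not this class."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()  # stands in for hardware atomicity only

    def load(self):
        return self._value

    def compare_and_set(self, expected, new):
        # Atomically: if value == expected, install new and report success.
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def lock_free_increment(cell):
    # The classic CAS retry loop: read, compute, try to publish; on conflict
    # (somebody else got there first), start over.
    while True:
        old = cell.load()
        if cell.compare_and_set(old, old + 1):
            return

cell = CASCell()
threads = [
    threading.Thread(target=lambda: [lock_free_increment(cell) for _ in range(1000)])
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(cell.load())  # 4000 - no increment is ever lost
```

Even this toy shows where the traps hide: forget the retry loop (or re-read the value between `load()` and `compare_and_set()`) and increments get silently lost under contention.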

The “classical” flavor of Message Passing (including (Re)Actors) tends to have very good performance – and excellent chances of being handled well in a typical business app. No wonder it is my personal preference (except where noted otherwise in this table).
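As a purely illustrative sketch of this “classical” message-passing flavor, here is a minimal Python (Re)Actor (all names below are made up for illustration): all of its state is touched by exactly one thread, so the app-level react() logic needs no mutexes whatsoever:

```python
import queue
import threading

class Reactor:
    """Minimal (Re)Actor sketch: app-level state (self.balance) is touched by
    exactly one thread, so react() needs no mutexes at all."""
    def __init__(self):
        self._inbox = queue.Queue()   # the only cross-thread object
        self.balance = 0              # owned by the reactor thread alone
        self._thread = threading.Thread(target=self._event_loop)
        self._thread.start()

    def post(self, msg):
        # May be called from any thread; merely enqueues the message.
        self._inbox.put(msg)

    def _event_loop(self):
        # Messages are processed strictly one at a time, in FIFO order.
        while True:
            msg = self._inbox.get()
            if msg is None:           # sentinel: shut down
                break
            self.react(msg)

    def react(self, msg):
        # Purely single-threaded app-level logic - easy to reason about.
        kind, amount = msg
        if kind == "deposit":
            self.balance += amount

    def stop(self):
        self._inbox.put(None)
        self._thread.join()

r = Reactor()
for _ in range(1000):
    r.post(("deposit", 1))
r.stop()
print(r.balance)  # 1000
```

All the synchronization lives in the queue (infrastructure code); the business logic in react() is strictly sequential, which is exactly what makes it testable and debuggable.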

The second flavor of Message-Passing – Message-Driven (a.k.a. Data-Driven) programming – tends to improve calculation performance even further, but this performance gain is mostly HPC-oriented and won’t make much difference for a typical business app (though IF you REALLY need heavy calculations, HPX would be my first recommendation).

As for the no-contention stuff, it is widely used for web apps (in fact, it merely assumes no contention – the contention is actually pushed down to the database level, with transaction isolation coming into play). Moreover, for LOW-WRITE-LOAD web apps I don’t object to this kind of architecture; it DOES work – as long as we have no more than a few writing transactions per second (or, more formally, as long as we can keep the “serializable” isolation level for our DB connections). However, as we discussed above, in spite of all the theory saying that going for stateless no-sync apps is the only way to scale, such systems-taken-as-a-whole are NOT really scalable. As a result, I am AGAINST trying to scale this kind of architecture beyond its limits (which happen to be around 100M writing transactions/year – but this still covers LOTS of websites, including such monsters as BBC, CNN, etc., which have LOTS of reads but only VERY FEW writes).
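To illustrate how the contention gets pushed down to the database, here is a toy sketch using Python’s built-in sqlite3 (where writing transactions are fully serialized): the “web handler” itself is stateless and does no synchronization whatsoever – correctness relies entirely on the DB serializing the writers:

```python
import sqlite3

# A stateless "web handler": it keeps no state of its own and does no locking;
# SQLite fully serializes writing transactions, so correctness is the DB's job.
conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE counters(name TEXT PRIMARY KEY, value INTEGER)")
conn.execute("INSERT INTO counters VALUES('hits', 0)")

def handle_request(db):
    # Each request is its own short transaction; BEGIN IMMEDIATE takes the
    # write lock up front, so concurrent writers queue up behind each other.
    db.execute("BEGIN IMMEDIATE")
    db.execute("UPDATE counters SET value = value + 1 WHERE name = 'hits'")
    db.execute("COMMIT")

for _ in range(100):
    handle_request(conn)

value, = conn.execute(
    "SELECT value FROM counters WHERE name = 'hits'").fetchone()
print(value)  # 100
```

The app-level code looks blissfully synchronization-free – but every write still queues behind every other write inside the DB, which is precisely why this style stops scaling once the write load grows.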

One important thing to note is that this table is based only on anecdotal evidence (a.k.a. Real-World Experience), so Your Mileage May Vary.

slide 42

To provide a slightly different perspective on the table from the previous slide, it can be rewritten as follows:

  • the ONLY field I know of where (Re)Actors won’t fly is High-Frequency Trading, a.k.a. HFT. HFT guys-and-gals are dealing with latencies on the order of hundreds of nanoseconds, and traditional (Re)Actors are not AS good latency-wise. On the other hand, I have to mention “CAS-size (Re)Actors” [NoBugs17a] in this context, which MIGHT simplify development of non-blocking algorithms (CAS (Re)Actors are a very interesting subject per se, but unfortunately, we don’t have time to discuss them).
  • For HPC, Message-Passing MPI has been used successfully for ages, but I do agree that the HPX-like Message-Driven approach is more promising (still, mutexes are outlawed there).
  • For low-write-load web apps, the classical approach of essentially ignoring all synchronization problems DOES work – but ONLY as long as the write load is relatively low. As soon as the write load goes higher, this approach doesn’t really scale (causing ALL kinds of trouble, from impossible-to-find data races to poor scalability).
  • And for everything-else-out-there, IMNSHO (Re)Actors are THE way to go. In particular, real-world experience has shown (Re)Actor-based systems performing 30x better and 5x more reliably than the competition, while processing billions of messages per day and tens of billions of DB transactions per year.

WHAT ARE YOU WAITING FOR? ARCHITECT YOUR NEXT SYSTEM AS A (RE)ACTOR-FEST!

slide 43

To wrap it up, I also have to note that the subject of (Re)Actors and related architectures (especially when speaking about the fine details of implementing them) is a huuuge one. As a result, it isn’t possible to fit all of this discussion into one 90-minute presentation, so pretty much inevitably there will be remaining questions. For further information, you can either…

contact ‘No Bugs’ over e-mail (or via his website), or wait for Vol. II/Vol. III of his upcoming book “Development & Deployment of Multiplayer Online Games”. In Vol. II, he discusses (Re)Actors as such and Client-Side (Re)Actor-fest architectures, and in Vol. III he speaks about the Server Side.

slide 44

Questions?


Acknowledgement

Cartoons by Sergey Gordeev from Gordeev Animation Graphics, Prague.


Comments

  1. Jesper Nielsen says

    I’m a little skeptical when it comes to business processes that typically go like:
    1: Read from DB
    2: Perform some business logic – perhaps reading more from DB as required
    3: Write to DB.

    If the flow 1-3 must be encapsulated in a transaction then it will block the entire application for the full duration if it’s implemented with a single writing connection. Synchronizing business logic and storage seems to be inevitable here?

    Multiple connections don’t have to wait for each other – except when they do, due to row/column/table locks (including false sharing from page locks), even escalating to full-on deadlocks, etc…

    Basically my point is – the “old” way of doing things is a mess, but how to avoid it if business logic is part of a transaction?

    • "No Bugs" Hare says

      > I’m a little skeptical when it comes to business processes that typically go like:

      This is _exactly_ the kind of business processes I’m speaking about :-). Keep in mind though that, thanks to having one single writing point, “read from DB” can be 99.9% served from an in-app 100%-coherent cache(!!).

      > it will block the entire application for the full duration if it’s implemented with a single writing connection.

      Yes, but OLTP apps tend to have transactions which can be made _very_ fast (I’ve seen 500us or so on average for a very serious real-world app). Elaborating on it a bit more, it tends to go as follows: all the reads within transactions are EITHER “current state” reads (these are Damn Fast, and will fit into that 500us average), OR “historical” ones. “Historical” reads (which, BTW, from my experience are fairly rare for OLTP systems – I’d say that less than 5% of overall transactions involve them) can be made over a special read-only connection (they’re historical, hence immutable(!)), processed, and then the result can be passed to the write connection for writing (and as there are no “long” reads involved, it will fit under that 500us limit too).
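The single-writing-connection pattern with an in-app 100%-coherent cache can be sketched as follows (hypothetical Python, with a plain dict standing in for the real DB – purely to show why the cache can never go stale when one thread applies every write):

```python
class SingleWriterDB:
    """Sketch: one single writing connection plus an in-app coherent cache.
    A plain dict stands in for the real DB; all names are illustrative."""
    def __init__(self):
        self._db = {}         # stands in for the actual database
        self.cache = {}       # in-app 100%-coherent cache of "current state"

    def transaction(self, account, delta):
        # "Current state" read: served from the cache, no DB round-trip.
        balance = self.cache.get(account, self._db.get(account, 0))
        new_balance = balance + delta
        # The single writing point: DB write and cache update happen in one
        # place, on one thread - so the cache can never drift from the DB.
        self._db[account] = new_balance
        self.cache[account] = new_balance
        return new_balance

db = SingleWriterDB()
for _ in range(10):
    db.transaction("alice", 5)
print(db.cache["alice"], db._db["alice"])  # always in sync
```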

      > how to avoid it if business logic is part of a transaction?

      It depends on the specific business logic we’re talking about – but up to now I have not seen a real-world case where it isn’t possible (see also above re. “current but fast” reads vs “historical and immutable” reads). Also, FWIW, I had a long conversation on the whole thing with Hubert Matthews (BTW, you SHOULD see his ACCU2018 talk when it becomes available – he’s speaking about EXACTLY THE SAME things), and we were in agreement on pretty much everything; what was clear is that _one monolithic DB doesn’t scale; you have to split it along the lines of the transactions involved, AND should use ASYNC mechanisms to communicate between different sub-DBs_. Given that Hubert is one of the top consultants out there and deals with LOTS of various real-world systems (I have to admit his experience is significantly wider than mine), it should count for something ;-). The rest is indeed app-specific – but is certainly doable (my addition: and as soon as you get your DBs small enough, you can process each of them over one single connection ;-)).

      • Jesper Nielsen says

        There would also be the added latency between the business server and the storage server, unless they are the same (not very scalable I would think?) so we’re easily talking single or double digit milliseconds here if multiple reads and writes must be issued, which is a typical case.

        Not a problem in itself. Even ~100ms latencies are perfectly acceptable for many business processes but if the business processes are becoming serialized then it becomes problematic when scaling to many clients.

        On the other hand I just had a mental rundown of business processes I’ve been working with through the years and in fact I’ve rarely been in situations where Serializable isolation level was used. Typically we were talking Read Committed, which means that reads prior to writes might just as well be outside the transaction – and typically were. (In fact in a lot of cases prior to where I’m working now even writes for a business process weren’t batched in transactions even though they probably should have been…)

        So I guess in many cases designing processes to postpone all writing until the very end of the task, then issuing a set of writes as a transaction to the DB reactor should make it possible to interleave business tasks, with only a small writing transaction being serialized.
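Such a “postpone-all-writes-to-the-end” flow can be sketched like this (hypothetical Python, illustrative names only): business logic merely records its intended writes, and only the final small batch needs to be serialized:

```python
class DeferredWrites:
    """Business logic records intended writes; only the final small batch
    (the 'writing transaction') needs to be applied serially."""
    def __init__(self, store):
        self.store = store    # stands in for the DB behind a write connection
        self.pending = []     # writes accumulated during the business task

    def write(self, key, value):
        self.pending.append((key, value))   # nothing touches the DB yet

    def commit(self):
        # The only serialized part: a short transaction applying the batch.
        for key, value in self.pending:
            self.store[key] = value
        self.pending.clear()

store = {}
task = DeferredWrites(store)
# Long-ish business logic: its reads can run in parallel with other tasks...
total = sum(range(10))
task.write("result", total)
# ...and only this tiny write batch needs to be serialized at the end.
task.commit()
print(store["result"])  # 45
```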

        • "No Bugs" Hare says

          > so we’re easily talking single or double digit milliseconds here if multiple reads and writes must be issued, which is a typical case.

          Yes, but I have yet to see apps where this is not enough for DB writes.

          > if the business processes are becoming serialized then it becomes problematic when scaling to many clients.

          Nope :-). As I said – I can handle the whole Twitter (well, coherent part of it) on one single box, and I have my doubts that your app has more than that :-).

          > I’ve rarely been in situations where Serializable isolation level was used…

          Sure – and it is akin to writing to the same memory location without a mutex :-(. Most of the time it will work, but when it doesn’t, figuring out where the client’s money went becomes a horrible problem. I have to say that in that (Re)Actor-based system which moves billions of dollars a year in very small chunks, in 10 years there were NO situations when the money was impossible to trace (there were some bugs, but they were trivially identifiable, so counter-transactions could be issued easily).

          > even writes for a business process weren’t batched in transactions even though they probably should have been

          An atrocity, but indeed a very common one. I remember a horror story I was told at IBM 20 years ago. Guys from some big company (let’s name it eB**) came to IBM and asked for help optimizing their DB. And the IBM guys were like, “What transaction isolation are you guys using?” And the eB** guys were like, “Sorry, but what is a transaction?” Curtain falls.

          And FWIW, things haven’t improved much across the industry since :-(.

          > should make it possible to interleave business tasks, with only a small writing transaction being serialized.

          I’d say “parallelize read-only parts of the business tasks” – it is better than interleaving, it is real parallelization (somehow reminiscent of (Re)Actor-with-Extractors approach for in-memory Client-Side (Re)Actors).

          • Jesper Nielsen says

            >I’d say “parallelize read-only parts of the business tasks” – it is better than interleaving

            Yup that’s a more precise explanation of what I meant:)
            Still there will be instances where “bad stuff” can happen since this is pretty much equivalent to “read committed” with multiple writing connections.
            A restaurant table could easily get 2 overlapping bookings when business constraint validations are dealt with in parallelized reads, and unfortunately optimistic locking is a bit more complex than checking row versions in this case.

            “double digit ms” latencies easily become “single digit seconds” when as few as 100 clients enqueue work simultaneously.

  2. Paul says

    Reading your posts was like déjà vu. A lot of the concepts you describe here are actually facts for a production project I’m working on with millions of active users per day. Re-Actors (or Pro-Actors) are indeed such a magnificent thing when it comes to scaling or designing an interconnected web-like architecture.

    • "No Bugs" Hare says

      > A lot of concepts you describe here are actually facts for a production project that I’m working on with millions of active users per day.

      Sure; a quote from my book on Development and Deployment of Multiplayer Online Games:
      “Quite a few times, when speaking to a senior dev/architect/CTO of some gamedev company (or more generally, any company that develops highly interactive distributed systems), I’ve been included in a dialogue along the following lines:
      — How are you guys doing this?
      — Psssst! I am ashamed to admit that we’re doing it against each and every book out there, and doing this, this, and this…
      – [pause]
      — Well, we’re doing it exactly the same way.”

      🙂 🙂 I feel that most of my writings represent a “hidden knowledge” which is known in the industry but is not widely shared, so I am taking an effort to popularize it :-).
