OLTP DB Optimizations 102 – 100% Coherent App-Level Cache for Single-writing-DB-connection

 
Job Title: Sarcastic Architect
Hobbies: Thinking Aloud, Arguing with Managers, Annoying HRs, Calling a Spade a Spade, Keeping Tongue in Cheek
 
 


There are only two hard things in Computer Science: cache invalidation and naming things.

— Phil Karlton —

As we have already discussed in Vol. VI’s chapter on Databases – one thing which can help a Damn Lot™ performance-wise is an application-level cache.

First of all, let’s make sure to define what we’re speaking about:

As opposed to the common-for-web-world practice of caching query results, we’ll be speaking about caching OBJECTS1 (~=“table rows”).

The reason for it is very simple – under typical OLTP loads (and unlike typical website DB loads) queries2 do NOT repeat often enough to benefit from caching.

A word of caution – as was already discussed in Vol. VI’s chapter on Databases –

We’ll be speaking about very specific kind of app-level cache which implies that we have ONE SINGLE MODIFYING DB CONNECTION.

Actually – this is by far the simplest way to have a cache which is guaranteed to be coherent. Strictly speaking, in theory we may have coherent app-level caches with multiple writers, but it would require rather complicated cache coherency protocols; moreover, the whole question of multiple writers is closely related to the question of transaction isolation – which under significant load becomes a really bad can of worms <sad-face />. We will discuss multiple-writers-to-the-same-DB-tables (answering both “how to ensure cache coherence” and “how to guarantee ACID properties without forcing transaction isolation down the throats of app-level developers” questions) in Vol. IX’s chapter on Scaling, Take 2 – but TBH, even when implementing a system with 10B+ writing transactions/year, I have never seen a need to use multiple-writers over the same piece of data; moreover, in the only case when I did implement multiple-writers for a serious system – it still benefited (both performance-wise and maintainability-wise) from rewriting into a pure Shared-Nothing architecture along the lines discussed in Vol. VI’s chapter on Databases.

As a result – for the purposes of this chapter we’ll assume that whenever you have a performance problem – you’ll just separate your DB into several ones in a true Shared-Nothing fashion – and will handle inter-DB interactions via an Inter-DB Asynchronous Transfer protocol, with ALL the DBs handled via single-writing-DB-connections, as discussed at length in Vol. VI’s chapter on Databases. Even the most-heavy-DB-wise games I know of are perfectly doable under this architecture; the only real-world system which I am not sure I know how to handle in this manner is post-2005-or-so NASDAQ – everything else is really peanuts…

Last but not least –

We’ll be speaking about caching which is guaranteed to be 100% coherent

In other words, we’re guaranteed to get exactly the same results as if we were querying the underlying database. This stands in stark contrast with quite a few app-level caching solutions such as memcached or Redis, which tend to provide rather weak (if any) guarantees on cache coherency with the underlying database (at least if we don’t guarantee coherency at app-level ourselves).3


1 and somewhat-similar to the concepts behind “Entity Beans” but MUCH better 😉
2 Well, at least those queries which handle more than one row in DB <wink />
3 while Redis does have an “appendfsync always” option, it is usually not really usable in seriously-loaded environments

 

Implementation – Manual

Within the single-writing-DB-connection model, implementing a 100%-coherent cache is as easy as pie. Let’s say that we want to cache the USERS table – which BTW usually happens to be the most useful table to cache. What we need to do to have our cache is the following:

  • Have some kind of map (hash map, tree map, whatever-else), mapping USER_ID into the user-data-we-want-to-cache (for a minimal code sketch of the whole thing, see right after this list)
    • we’re not required to cache the whole USER, any frequently-used-part-of-it-will-do.
    • On the other hand, we MAY cache some of the data which belongs to the same USER (and is stored in DB via a 1:N relation); one example of such data-which-we-may-want-to-cache-together-with-USER may be something like USER_INVENTORY.
  • Have all writing requests to USERS table modify a respective entry in the cache
    • 99% of the time it will be modifying USERS by USER_ID
      • Any other updates are very unusual for OLTP. Moreover – even when such massive-updates are necessary, they’re usually split into lots of per-USER_ID-requests (to ensure that we DON’T have a heavy request which runs over 100M USER rows and stalls everything while it is running). We’ll discuss more of such “background processing” in Chapter 28.
    • Strictly speaking, to keep our cache coherent, a simple invalidation of the appropriate entry will do – but why settle for invalidation when we can modify the cache accordingly, saving on the extra DB request when we’re dealing with the same USER again?
    • Let’s keep in mind that we MUST update the cache entry ONLY when the underlying DB has successfully committed the transaction. Well, strictly speaking, we’d have to update the cache as the statement is issued, and roll it back in case of DB transaction rollback for whatever reason; but in practice – the way single-writing-connection DB apps are written – it doesn’t make any difference (in particular, if we’re following the usual VALIDATE-CALCULATE-MODIFY model discussed in Vol. II’s chapter on (Re)Actors, we won’t get to modifications until we have already decided on which-statements-we’re-going-to-issue), so the strict update-cache-right-away-with-potential-rollback doesn’t have any advantages over the much simpler update-cache-only-on-successful-commit.
  • Have a mechanism for keeping the size of your cache in check. If you’re ready to allocate 10G of RAM to your USERS cache – then with a rather common 1K/USER, you’ll be able to fit about 10M of your USERS into the cache; but what if you have more than that in your DB?
    • The simplest way to keep the size of your cache in check is to:
      • have each of your cache entries carry a “last access time”. Then, we can do either of the following:
        • from time to time – enumerate all the entries in the whole cache (maybe in increments), calculating the distribution of the “last access times”. Then – you’ll be able to find out what the “cut-off time” should be to keep the cache size in check – and then enumerate the whole cache once again, dropping all the entries which are older than the cut-off time.
          • In practice, the distribution statistics can be kept up-to-date as your system runs (for example, as a histogram of “how many cache entries have a ‘last access time’ between 9:00 and 9:01, between 9:01 and 9:02, and so on”). This way – the cut-off time can be calculated based on these always-maintained stats, and only one enumeration over the whole cache will be necessary to perform the cleanup (see the second sketch after this list).
        • keep an ordered set of the “last access times” of all the cached entries (which effectively becomes a “Least Recently Used” list, a.k.a. LRU list), and whenever we’re about to put a new entry into our cache which would exceed the maximum cache size – simply drop the oldest entry from the ordered set. This method, while being more precise memory-wise than the “occasionally-enumerate-over-whole-cache” one, is not that good performance-wise (because of the costs of maintaining such an ordered set).
        • Overall – it is currently unclear to me which method – “occasionally-enumerate-over-whole-cache” or “keep an LRU list at all times” – is better (but both can be made to work, this is for sure).
  • Have some (better – all <wink />) of the read-only requests to the USERS table replaced with searches in our map-of-users.
    • Again, most of such read-only requests will go via USER_ID; those requests which read USERS not via USER_ID are usually infrequent enough to be completely ignored.
    • Performance gains at this point can be enormous. For a lookup within an in-memory hash map, we’re speaking about 100-200 CPU cycles or so. And for a usual DB access – it is more like “hey, let’s bind this prepared statement, then let’s marshal it, then let’s transfer the marshalled request over some kind of IPC, then wait for a DB thread to wake up, then unmarshal the data on the DB process side, then match it with the prepared statement, then execute an appropriate execution plan, then parse the relevant page and find the data-we’re-looking-for within the page, then marshal the data, then pass the data over IPC in the opposite direction, then wait for our-app thread to wake up, then unmarshal the data on the side of our app, and then return it to our app code”. Overall, in the case of SQL-going-to-RDBMS – we’re speaking about at least tens of thousands of CPU cycles4 <ouch! />, which means that our in-app cache is at least 100x faster (!!).
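
To make the above a bit more tangible, here is a minimal sketch of such a cache – a hash map with write-through updates and the LRU flavor of eviction. It is merely an illustration of the technique (and NOT a production implementation): all the names (UserCache, CachedUser, etc.) are mine, and a real cache will certainly store more than just money:

//C++, illustrative sketch only:
#include <cstddef>
#include <list>
#include <optional>
#include <string>
#include <unordered_map>

struct CachedUser {
  int money;
  //...whatever-else we want to cache, maybe including USER_INVENTORY
};

class UserCache {
  using LruList = std::list<std::pair<std::string, CachedUser>>;
  size_t maxEntries_;
  LruList lru_;  //most-recently-used entries at the front
  std::unordered_map<std::string, LruList::iterator> byId_;  //USER_ID -> LRU position

public:
  explicit UserCache(size_t maxEntries) : maxEntries_(maxEntries) {}

  //read path: a hundred-or-two CPU cycles instead of a full DB round-trip
  std::optional<CachedUser> get(const std::string& userId) {
    auto found = byId_.find(userId);
    if (found == byId_.end())
      return std::nullopt;  //cache miss: caller reads from DB, then calls put()
    lru_.splice(lru_.begin(), lru_, found->second);  //mark as recently used
    return found->second->second;
  }

  //write path: called on cache misses, AND whenever a DB transaction
  //  modifying this USER has successfully committed
  void put(const std::string& userId, const CachedUser& user) {
    auto found = byId_.find(userId);
    if (found != byId_.end()) {
      found->second->second = user;                    //modify in place...
      lru_.splice(lru_.begin(), lru_, found->second);  //...and refresh LRU
      return;
    }
    lru_.emplace_front(userId, user);
    byId_.emplace(userId, lru_.begin());
    if (byId_.size() > maxEntries_) {  //evict the least-recently-used entry
      byId_.erase(lru_.back().first);
      lru_.pop_back();
    }
  }
};

Note that under the single-writing-DB-connection model above, all the reads and writes of this cache normally happen within one single DB-handling thread/(Re)Actor, so no mutexes are necessary here whatsoever.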

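As for the “occasionally-enumerate-over-whole-cache” flavor, the always-maintained histogram mentioned in the list above might look along the following lines (again, just a sketch under my own assumptions – in particular, one-minute buckets covering the last 24 hours, and all the names are hypothetical):

//C++, illustrative sketch only:
#include <array>
#include <cstddef>
#include <cstdint>

constexpr size_t NUM_BUCKETS = 24 * 60;  //one bucket per minute of the day

//histogram of "how many cache entries have their last-access-time
//  within each one-minute bucket"; maintained as the system runs
std::array<uint32_t, NUM_BUCKETS> lastAccessHisto = {};

//on each cache access: the entry's count moves from its old bucket
//  into the current-minute bucket
void onCacheAccess(size_t oldBucket, size_t nowBucket) {
  --lastAccessHisto[oldBucket];
  ++lastAccessHisto[nowBucket];
}

//when the cache grows too big: walk the buckets oldest-first, accumulating
//  counts until enough entries are covered; the resulting bucket gives the
//  cut-off time, and one single pass over the whole cache (dropping all the
//  entries older than the cut-off) performs the cleanup
size_t findCutoffBucket(size_t nowBucket, size_t entriesToDrop) {
  size_t accumulated = 0;
  for (size_t i = 1; i <= NUM_BUCKETS; ++i) {
    size_t bucket = (nowBucket + i) % NUM_BUCKETS;  //oldest bucket first
    accumulated += lastAccessHisto[bucket];
    if (accumulated >= entriesToDrop)
      return bucket;
  }
  return nowBucket;  //can happen only if entriesToDrop exceeds cache size
}
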
Of course, the 100x+ improvement above applies only to the single operation of getting-some-data-about-USER; however, in practice it occurs soooo often (especially within the VALIDATE part of the VALIDATE-CALCULATE-MODIFY request life cycle) – that I’ve seen the system as a whole become 5x-10x faster when an app-level cache is deployed. And believe me, a 5x-10x DB performance improvement is something which can really make your day.5

Also, let’s keep in mind that while the USERS table is usually the most obvious candidate for caching – it is not the only thing which can be cached, so keep looking for good cache candidates – and give them this 100x improvement pretty much for free <smile />.

Implementation – SQL Bindings Compiler

Overall – writing one single cache manually along the lines above is perfectly doable and is not that big a deal. However, the maintenance costs (=“keeping in mind that we MUST update the cache entry whenever we’re adding a new statement working with any of the cached tables”) – and the complexities of caching more tables – tend to be pretty bad if we’re doing our caching manually <sad-face />.

However, if we’re using an SQL Bindings Compiler (as discussed in [Hare]) – we can easily add code generation for app-level caching into the compiler, hiding the whole caching thing from app-level code entirely!

In [Hare], we had the following greatly-simplified example of “SQL description file”:

#SQL DESCRIPTION FILE FOR WHATEVER-OUR-PROJECT-IS
TABLE {
  SQL "CREATE TABLE users (userId VARCHAR(30),
    money INTEGER,
    PRIMARY KEY(userId))"
};

STMT {
  SQL "SELECT MONEY FROM USERS WHERE USERID=?" UNIQUE,
  GENERATE FUNCTION NAME moneyFromUsersByUserId
};

Out of it, our SQL Bindings Compiler would generate the following function:

//C:
int moneyFromUsersByUserId(const char* userId) {
  //...
}

or

//Java
public int moneyFromUsersByUserId(String userId) {
  //...
}

or

#Python
def moneyFromUsersByUserId(userId):
  #...

Whatever the language is – we can see that the semantics of our generated function allows us to hide all the app-level caching behind it entirely. In other words – if we’re using an SQL Bindings Compiler – we’ll be able to add caching without any changes to the app-level code, just by adding some description next to our TABLE specification:

#SQL DESCRIPTION FILE FOR WHATEVER-OUR-PROJECT-IS
TABLE {
  SQL "CREATE TABLE users (userId VARCHAR(30),
    money INTEGER,
    PRIMARY KEY(userId))"
};

CACHE {
  users (userId, money), max_entries=config
  //as a rule of thumb, max_entries should be read
  //  from a config file
};

Bingo! If our SQL Bindings Compiler is smart enough – it will be able to change all the generated functions so that they’re both using and updating our app-level USERS cache.
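
Just to illustrate the idea, a cache-aware generated function might look along the following lines. This is merely a sketch of hypothetical generated code (reusing the UserCache class from the manual-implementation sketch above); in particular, g_usersCache and execPreparedSelectMoneyByUserId() are names I made up for the compiler-emitted cache instance and the usual compiler-emitted DB access code:

//C++, illustrative sketch of generated code:
static UserCache g_usersCache(10 * 1000 * 1000);  //max_entries: read from config in real life

int execPreparedSelectMoneyByUserId(const char* userId);  //stands for the usual
  //generated prepared-statement DB access code

int moneyFromUsersByUserId(const char* userId) {
  std::string id(userId);
  if (auto cached = g_usersCache.get(id))  //generated: try the cache first
    return cached->money;                  //cache hit: no DB round-trip at all
  //generated: cache miss - execute the prepared SELECT as before
  CachedUser user{ execPreparedSelectMoneyByUserId(userId) };
  g_usersCache.put(id, user);  //populate the cache for the next time
  return user.money;
}

Symmetrically, every generated function which modifies the USERS table would get a matching g_usersCache.put() (or invalidation) – applied only after the successful COMMIT, exactly as discussed in the “Implementation – Manual” section above.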

Moreover, as long as all the accesses to the DB are made via the SQL Bindings Compiler (i.e. there are no DB requests – at least no writing ones – which go directly to the DB without going through the SQL Bindings Compiler) – we can stop worrying about forgetting to update our cache in some new statement. The SQL Bindings Compiler will do for us what computers are really good at – not forgetting to do mundane things.

I rest my case for using SQL Bindings Compiler to generate code for app-level caches.

[[To Be Continued…

This concludes beta Chapter 27(d) from the upcoming book “Development and Deployment of Multiplayer Online Games (from social games to MMOFPS, with stock exchanges in between)”.

Stay tuned for further parts of Chapter 27, where we’ll continue our discussion on DB optimizations 102 (discussing group commits and compound SQL statements)]]


References

Acknowledgement

Cartoons by Sergey Gordeev from Gordeev Animation Graphics, Prague.
