Comments on this post from Reddit

TiltedPlacitan:

I was involved with writing and qualifying the RNG for a couple of well-known online poker sites. NOT Planet Poker.

Even a correct RNG and correct shuffling algorithm, audited by a qualified, independent third party will have its detractors.

You would not believe some of the email to our customer support department, and a couple of voice mails that I had played back to me.

Effnote:

Some people will always accuse pure randomness of being rigged when they get bad luck, whether it's in real life or online.

doihaveto:

Hah, yeah, I wish the article was split into two sections, one about the implementation of proper randomness, and then a separate one about the perception of randomness (as opposed to perception currently bookending the implementation from both sides).

There are some awesome psych studies about human perception of randomness. One I remember (I wish I had a link to it!) had participants judge the randomness of various sequences of coin tosses produced by a bunch of different algorithms. And it turned out that the sequence judged by humans as "most random" was one produced by the algorithm that looked like "look at the previous value of the coin, and 60% of the time flip it, and 40% of the time leave it as is".

...which doesn't make any sense for a stationary process like coin flips, but you know, humans. We're sensitive to the frequency of change, not just frequency of occurrence!
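For illustration, the "60% flip" rule described above is easy to sketch (this is my own toy reconstruction of the kind of generator the study reportedly used, not code from the study itself):

```python
import random

def humanly_random_flips(n, p_flip=0.6, seed=None):
    """Generate n coin values (0/1) where each value flips the
    previous one with probability p_flip, rather than being drawn
    independently like a fair coin."""
    rng = random.Random(seed)
    seq = [rng.randint(0, 1)]
    for _ in range(n - 1):
        if rng.random() < p_flip:
            seq.append(1 - seq[-1])  # flip the previous coin
        else:
            seq.append(seq[-1])      # keep it as-is
    return seq

seq = humanly_random_flips(10_000, seed=42)
# With p_flip > 0.5, adjacent values differ more often than a fair
# coin's 50%, which is what people tend to judge as "more random".
changes = sum(a != b for a, b in zip(seq, seq[1:]))
print(changes / (len(seq) - 1))  # hovers around 0.6
```

A truly fair coin would give a change frequency near 0.5; this process trades away some run-length realism to "look" more random.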

TiltedPlacitan:

Once upon a time, someone asked me to write a version of binglaha. So, I wrote a non-biased dice roller.

Your example... less than 2M dice rolls a day. ...but mechanical and analog. I like that.

CHEERS

ZMeson:

As always, a great article. I wish there was more coverage on existing PRNG algorithms and why someone might want to use them. True RNGs are often very slow and for many applications can't be used except to periodically seed PRNGs.

Here is a great table put together by the creator of the PCG-family of PRNGs. (I haven't heard much about PCG since about a year ago, though it looked very promising at the time. I wonder how it's fared test-wise and practically too since then.)

ProfONeill:

Hi, author of PCG here. The table on the front page of the PCG website is meant as an “at a glance” quick (and slightly simplified) guide for the impatient. For people interested in random number generation, I’d recommend reading

and finally, there’s

There are other good RNGs besides PCG, including ones currently omitted from the site (SplitMix is simple and fast, for example, and recently adopted by Java, although I seem to recall that it’s trivially predictable, but I might be misremembering).

For those who’ve not seen much new on the PCG website lately, I do have some blog posts in the queue which I hope to push out soon with some interesting new things and updates. (I’d hoped to do more work on PCG over the summer, but then I got elected department chair starting July 1 and I thus have various other things on my plate to deal with — such is the life of an academic.)

But in the meantime there is something new, randutils for C++ became a C++17 proposal.

TiltedPlacitan:

I am only interested in secure RNGs. I would be interested in your take on the following other algorithms:

1) OpenSSL PRNG
2) ISAAC
3) AES-CTR DRBG

I'm having a very hard time trying to ever justify using anything else but AES-CTR DRBG in 256-bit, prediction-resistance-enabled mode going forward. i.e. for every generate, there is a sample to an entropy source, getting sponged into the internal state.
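The "sample entropy on every generate" pattern described above can be sketched as follows. This is only an illustration of the reseed-per-generate idea, not a real DRBG: the Python stdlib has no AES, so HMAC-SHA256 stands in for AES-CTR, and `os.urandom` stands in for a dedicated entropy source (a real deployment would use a NIST SP 800-90A AES-CTR DRBG implementation):

```python
import hmac, hashlib, os

class PredictionResistantDRBG:
    """Toy DRBG illustrating prediction resistance: before every
    generate, fresh entropy is folded ("sponged") into the internal
    state, so compromising one output doesn't predict the next."""

    def __init__(self, entropy_source=os.urandom):
        self.entropy_source = entropy_source
        self.state = self.entropy_source(32)

    def generate(self, nbytes):
        # Prediction resistance: reseed from the entropy source
        # on every single generate call.
        self.state = hmac.new(self.state, self.entropy_source(32),
                              hashlib.sha256).digest()
        out, counter = b"", 0
        while len(out) < nbytes:
            # Counter mode over the (secret) state, one block at a time.
            out += hmac.new(self.state, counter.to_bytes(8, "big"),
                            hashlib.sha256).digest()
            counter += 1
        return out[:nbytes]

drbg = PredictionResistantDRBG()
print(drbg.generate(16).hex())
```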

ProfONeill:

If you want cryptographically secure RNGs, you should look to the cryptography community and see what they’re using for popular stream ciphers. I’d say you’d want something old enough to have had a lot of scrutiny and new enough to be modern. The wikipedia summary table tells you more than I can. If you want more, ask a cryptographer.

But cryptographic uses are just one use of RNGs. For example, suppose you use random-pivot quicksort. There what matters most is that your RNG is really fast, because it's going to be called at each call to partition (and industrial-strength cryptographically secure generators are not fast at producing single outputs). You also want to protect against algorithmic complexity attacks, so in an ideal world that fast RNG should be hard to predict. It's a secondary concern though, because probably no one is going to do such an attack on your piece of code.
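The random-pivot quicksort example above, sketched minimally (names and structure are mine, for illustration):

```python
import random

def quicksort(a, rng=random.Random(12345)):
    """Random-pivot quicksort. The RNG is hit once per partition
    step, so RNG speed matters here, and a hard-to-predict (though
    not necessarily cryptographic) RNG frustrates algorithmic-
    complexity attacks that feed the sort worst-case inputs."""
    if len(a) <= 1:
        return a
    pivot = a[rng.randrange(len(a))]   # one RNG call per partition
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less, rng) + equal + quicksort(greater, rng)

print(quicksort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```

With a predictable pivot source, an adversary can construct inputs that force O(n^2) behavior; with an unpredictable one, expected O(n log n) holds regardless of input.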

pixelrebel:

I like my random bits to be radioactive. I personally get mine from HotBits, which derives them from the random decay of caesium-137.

ProfONeill:

I like my random bits to be radioactive.

Obviously you’re free to like whatever you want, but when I hear people talk about going to special websites to download “real randomness” (when there is ample entropy close by for the taking), it makes me think of the people who insist on gold speaker wire (or gold USB cables) because they think it makes their music sound better.

If you want actual randomness, take a photograph (ideally RAW, but JPEG is fine too) and calculate its cryptographic hash (e.g., SHA-512). That way you don’t have to trust a third-party website and the results are verifiable and beyond reproach. (And because CCDs are photo detectors, you even have quantum randomness in there too.)
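The hash-a-photo recipe above is a one-liner in practice. A sketch (the "photo" here is synthetic stand-in bytes; any file of raw sensor data works the same way):

```python
import hashlib

def entropy_from_photo(photo_bytes):
    """Hash raw image bytes down to 512 bits of seed material.
    Sensor noise in the photo supplies the entropy; SHA-512
    concentrates it into a short, uniformly distributed seed."""
    return hashlib.sha512(photo_bytes).digest()

# Stand-in for a real photo; in practice you'd read the RAW/JPEG
# file bytes, e.g. open("shot.jpg", "rb").read().
fake_photo = bytes(range(256)) * 1000
seed = entropy_from_photo(fake_photo)
print(len(seed) * 8)  # 512 bits of seed material
```

The nice property ProfONeill points out is verifiability: you hold the input, so no third party has to be trusted about where the bits came from.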

But for just about everyone /dev/random is fine. When you create an https:// session, that's where your system looks. If it's good enough for people who care about Internet security, it's probably good enough for you.

pixelrebel:

Yeah, it's just for fun and for personal use. I agree, you shouldn't trust a closed system for RNGs. I like your idea of hashing a photo. One thing to note, depending on the CMOS chip, the noise recorded is likely pseudo random. I've seen some pretty bad cameras with uniform noise.

ProfONeill:

If you take the SHA-512 sum from an 8 MP sensor, you have 24 million samples; I’d argue that even if you degrade it and compress it you still have way more entropy than you need.

But take a picture of a freshly rolled die in the foreground and a tree blowing in the wind in the background if you like. ;-)

pixelrebel:

Yeah, good point. I guess the flaw I'm considering only applies if you take a photo with the lens cap on!

YeOldeDog:

Well, there are always flaws; even at a root level, sensors have hot/dead pixels that are mapped out at production. Some cameras let a user run a diagnostic program to subsequently map out others that fail over time. Inferred values for hot/dead pixels are calculated from the values of adjacent pixels. Even a RAW image is not a pure sensor data dump.

sinyesdo:

But for just about everyone /dev/random is fine.

I think you mean /dev/urandom since /dev/random is blocking (on Linux) for... stupid reasons. On Linux, these days you'll want to use getrandom() with flags == 0 to get the desired behavior for all cases, including cryptographic applications.
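From Python, the portable way to reach the OS CSPRNG is `os.urandom`, which on modern Linux is implemented via the `getrandom(2)` call with `flags=0` that sinyesdo describes (a small sketch, nothing more):

```python
import os

# os.urandom() never blocks once the kernel pool is initialized,
# and is suitable for cryptographic keys -- the behavior getrandom(2)
# with flags=0 provides on Linux.
key = os.urandom(32)
print(len(key))  # 32 bytes = a 256-bit key
```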

no-bugs:

If it’s good enough for people who care about Internet security, it’s probably good enough for you.

As we've learned from the Debian RNG disaster, this statement is clearly NOT to be taken for granted.

TiltedPlacitan:

I like avalanche noise across zener diodes, myself.

I am curious as to the methodology used in the Raspberry Pi's onboard HWRNG.

NasenSpray:

Couldn't find any papers, but it's probably using a CSPRNG that is repeatedly seeded with the output of a bunch of free-running ring oscillators.

TiltedPlacitan:

Puts out about 1MB/s of seemingly-nice entropy. Wish they were available when I had my last big project involving random number infrastructure. We ended up using 1U machines that had VIA C3s in them when we needed to scale up.

ZMeson:

Thanks for the reply. I'm glad to see randutils as a C++ proposal. I do think you simplified things a lot with randutils. Hopefully it gets standardized.

Veedrac:

This is a bit off-topic, but I've been curious about this for a while now.

In your paper you have a graph of state size versus failures of a given RNG (fig 15). There is a theoretical optimum, and several actual measurements.

You find that the real-world measurements can sometimes do better than the theoretical optimum because the tests are imperfect, and will miss some failures.

What I've realized is that it's possible to quantify this more explicitly by taking a 2^n vector of incrementing values (divided uniformly among 32-bit outputs) and shuffling it with a perfect random source (probably approximated through use of some CSPRNG). Then you can test this stream, read cyclically, as your random source.

You might find storing 2^36 32-bit values (~270 GB) problematic, but note that you can simply generate these values live through use of a Fisher–Yates shuffle, with the unshuffled values stored efficiently in a 2^32-length vector of 4-bit counters using a run-length encoding scheme in only ~2 GB, which fits in memory. Note that 2^36 is the upper bound because you know LCG + FashHash get a perfect score with a 36-bit state space, and a perfect shuffle can't do worse.

This would give you a "true" lower bound, which IMO would do much better at quantifying how effective the test is, as well as how close the algorithms you give are to the actual ideal.
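At toy scale, the shuffled-counter reference stream Veedrac describes looks like this (my own miniature: 8-bit outputs and 2^12 samples instead of 32-bit outputs and 2^36, so everything fits trivially in memory):

```python
import random

def perfect_reference_stream(n_bits, bit_width=8):
    """Build 2**n_bits incrementing values divided uniformly across
    the output range, shuffle them with a CSPRNG-backed Fisher-Yates
    (approximating a 'perfect' random source), and return the result
    to be read cyclically as a reference random stream."""
    size = 2 ** n_bits
    # Each possible output value appears exactly equally often.
    values = [i % (2 ** bit_width) for i in range(size)]
    rng = random.SystemRandom()           # CSPRNG approximation
    for i in range(size - 1, 0, -1):      # Fisher-Yates shuffle
        j = rng.randint(0, i)
        values[i], values[j] = values[j], values[i]
    return values

stream = perfect_reference_stream(12)     # 4096 samples, 8-bit outputs
print(len(stream), min(stream), max(stream))  # 4096 0 255
```

Because the multiset of outputs is exactly uniform by construction, any statistical test failure on this stream measures the test (or the finite state size), not the generator.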

ProfONeill:

I like it! It’s completely do-able!

I suspect that if I sat down to implement it I’d spend a few minutes making sure that the implementation technique was as simple/efficient as possible while achieving the same effect — in other words, I’d probably change some details from the implementation you suggest.

Actually though, 32 bits × 2^36 is only 256 GB, which is big but not infeasibly so for doing it the dumb way.

Also, it turns out that SmallCrush is a good proxy for BigCrush, and for that you only need 2^32, so for an initial go-around, doing it the dumbest way possible with SmallCrush is probably the way to go.

Veedrac:

I'm glad you like it.

My main concern with the naïve technique is that a dumb Fisher–Yates shuffle of a massive array is going to do a random disk write for every value produced, which means you're capped at ~0.5 MB/s even on a recent SSD. Google tells me that means you're in for week-long write times for the longest array length!

My proposed optimisation was very hastily made (at minimum you're going to need some way to make indexing faster, like putting it in a binary tree), so I agree it's worth thinking about the specifics longer than I have.

ProfONeill:

Ah, I think you misunderstood. I wasn’t talking about using virtual memory, I was talking about using actual memory.

I’m lucky because I have (shared) access to a machine with 512 GB of RAM and more-or-less exclusive access to machines with 256 GB and 128 GB, so I can be wasteful if I want (e.g., when writing this blog post).

Veedrac:

Well, that does simplify things :).

Veedrac:

I ended up referencing the PCG paper elsewhere on Reddit and that reminded me about this conversation.

Has there been an update on this? If you'd rather offload the work, I'd be happy to try it out myself.

bames53:

Thanks for pointing out that C++17 proposal. I'd seen the earlier one based on your recommendations, but hadn't noticed this one. Next we need a proposal to add some of the newer PRNG algorithms.

sacundim:

I've looked at that PCG web site and it always leaves the flavor of snake oil in my mouth. On the one hand, it does seem like a decent PRNG. But on the other it's very, very oversold, and through deceptive tactics.

For example the "prediction difficulty" column of the table you link is bullshit, as are the comments on this page about the predictability of other RNGs. The author is trying to pass the PCG generators as occupying some middle ground between "secure" and "predictable," but:

  1. There is no value in having such a middle ground. If you need a secure RNG, you need a secure RNG. If you don't need a secure RNG, you don't care whether it's predictable, only that it doesn't bias your application.
  2. The claim that PCG is hard to predict boils down to nothing more than the author doesn't know how to predict it. See Schneier's Law.

By all means try it if you're interested—if it passes the statistical tests it can't be bad—but don't buy the hype.

ProfONeill:

I understand the criticism. You're not the only one who thinks there should be no middle ground: either go with something absolutely 100% secure, or accept something 100% trivially predictable.

But that extremely high security comes at a cost. If you park your bike, you wouldn’t want to only be able to choose between a secure concrete bunker or leaving it unlocked on the street. So I believe that there is value in a middle ground, having something that requires considerable effort to predict but doesn’t go to the same level of obfuscation as typical secure generators.

And I’m right there with you on Schneier's Law. No one should take my word for it.

FWIW, one of my plans for this summer (originally for the spring, actually) is some PCG challenges with $5000 in prize money. Setting up the contest took some time because my institution wasn’t sure of the legalities (what if they end up paying someone in a “bad” country!) but we think we have it sorted out, so there should be a contest “soon”. If the contest gets publicized and some time goes by and no one successfully predicts the output it adds a little more weight to the claim and diminishes the Schneier's Law issue (but I still don’t think you should rely on PCG for actual cryptography).

Finally, no one has to use PCG. There are other modern RNGs that are good too. If you don’t like it or don’t trust it, that’s cool with me. (If you think I’m wrong about some other RNG that I’ve talked about, feel free to email me. Likewise, if you think there’s some snake-oil there, I want to know why and address the issue.)

ZMeson:

but don't buy the hype

Haven't bought, but am trying to stay informed.

For example the "prediction difficulty" column of the table you link is bullshit

OK, but the table (without that column) is still interesting. And I'm not suggesting that IT Hare lift the table, just that he (she?) include a table like it highlighting different algorithms.

SrbijaJeRusija:

I am really confused about that table. I have experience with both PCG and the xorshift family, and the latest xorshift generators are the fastest by far, bar none, so I am confused as to why PCG would list itself as faster...

Pre-post edit: as it does not look at the latest xorshift algos... of course.

ProfONeill:

One of my projects for the summer is updating the PCG website, and doing so will include covering all of Vigna’s latest generators, some of which are indeed very speedy.

FWIW, Vigna’s website used to have a similar issue. For a long time, it didn’t list PCG or SplitMix. I’m pleased he’s updated it recently.

johan_de_alverman:

PCG looked very promising indeed, and it still does. I'm however a bit concerned that the paper has been under review for quite some time now at "ACM Transactions on Mathematical Software". I wonder whether the author of PCG could shed some light on the review progress?

ProfONeill:

Providing an update about that is very much on my to-do list! I’ll have a blog post about all of that aspect soon.

sandwich_today:

V8 (Chrome) recently switched to xorshift128+ for random number generation and the comments on the post have some interesting discussion.

ProfONeill:

I saw that page myself, but alas not until months after the discussion in the comments had taken place.

FWIW, Vigna and I had a long conversation over email back in February 2015. He's read the paper and thought about PCG in depth. We also talked about his generators vs mine, and so he knows how they fare in comparison (each has its strengths). Our conversation seemed very cordial.

So I have to say that I was a bit perplexed by his comments. For example, he complains that I don’t list some of his more recent generators on the PCG website (on pages that haven’t been updated since early 2015), but for a very long time he didn’t list PCG on his PRNG shootout page. When he didn’t list mine, I didn’t try to claim to anyone that there was anything suspicious about that.

moschles:

This spectacular failure is all about the RNG which was used by Planet Poker, the company which used to run real-money(!) games at the time. They were using a pseudo-RNG with a mere 32-bit internal state to produce 52-card decks of cards.

http://i.imgur.com/T7GqVPi.gif

jms_nh:

cultural reference = ?

clintp:

Planet of the Apes, sometime in the first few minutes after they abandon ship but before their clothes are stolen.

_dno_:

Only 32 bits?! Hahaha!

log2(52!) ≈ 225.58 bits
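The arithmetic checks out; a one-liner verifies it:

```python
import math

# Entropy needed to index one of the 52! possible orderings of a deck:
# log2(52!) = log2(1) + log2(2) + ... + log2(52).
bits = sum(math.log2(k) for k in range(1, 53))
print(round(bits, 2))  # 225.58
```

So a 32-bit state can reach at most 2^32 of the ~2^225.58 possible decks: a vanishingly small fraction.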

sacundim:

Pretty good article. If you enjoyed reading this, I recommend watching this video on the TSA "randomizer" app and, instead of believing the presenter, spotting his mistakes.

I think the most interesting recommendation in the article is the recommendation to run statistical tests on your game's events. That definitely does sound like a good idea!

It's very interesting however that the article is so negative on non-cryptographic-strength PRNGs like the Mersenne Twister:

I won’t say anything bad about Mersenne Twister as long as it is used for Monte-Carlo simulations and any other similar applications, but I certainly do not really like it for games; in particular, as a non-crypto generator, it it may easily be reversible. And as soon as your RNG is reversible, one may reconstruct the state of your RNG from its outputs, with potentially devastating-for-your-game results. Even if you use your RNG “just” to calculate a chance of critical hit, Mersenne Twister is potentially dangerous :-( .

I mean, certainly in a slow-paced poker game where there's money at stake, you need to assume that malicious adversaries will try to predict the RNG to their advantage. And I guess that you could argue that, strictly speaking, all games are an "adversarial setting," the sort of situation that cryptography is designed for.

But I still find it rather hard to accept that something like Super Mario Bros. or an FPS needs AES-CTR.

First, let’s note that this construct [AES-CTR] is pretty fast. On modern x86 CPUs, single core can generate 150M+ random bytes/second this way (and this is a Damn Lot).

This strikes me as the wrong metric. I'd think one would rather profile the app and see whether the RNG is a hot spot that's bottlenecking the program. I.e., the correct metric (I think) wouldn't be how many bytes it produces per second, nor even how much CPU time it's using, but rather whether the RNG is consuming resources that would otherwise have been used by something more important.

Also from my own observations, 150MB/s is a very slow RNG.

Oh, and of course, you can use your favorite block cipher (such as Chacha20) instead of AES128 too.

Nitpick: ChaCha20 isn't a block cipher, it's a stream cipher. It does generate its key stream a block at a time, by running a pseudo-random function (not a permutation, like AES) in CTR mode.

For crypto-purposes, such an AES-CTR-based thing can generate up to 2^64 random bits before it requires re-seeding (the mechanics of this limitation are rather complicated, but are related to the birthday paradox). With the 150M+ random bytes/second I’ve mentioned above, it means that we could run our PRNG continuously on one single core for 2^64 bits / 8 bits/byte / 150 Mbytes/second / 86400 seconds/day / 365 days/year ≈ 487 years. Good enough for a vast majority of the games out there even without reseeding.

Nitpick: this should be 2^64 cipher blocks, not bits. An AES block is 128 bits (16 bytes), so yeah, the math here is significantly underestimating the cycle. (Well, "cycle" is the wrong word; the failure mode isn't that the stream repeats, it's that an attacker can tell that the stream isn't random.)
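Redoing the arithmetic with blocks instead of bits, per the nitpick:

```python
# 2**64 AES blocks of 16 bytes each, consumed at 150 MB/s on one core.
total_bytes = 2**64 * 16
seconds = total_bytes / 150e6
years = seconds / (86400 * 365)
print(f"{years:,.0f} years")  # on the order of 62,000 years
```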

CaptainAdjective:

I mean, certainly in a slow-paced poker game where there's money at stake, you need to assume that malicious adversaries will try to predict the RNG to their advantage. And I guess that you could argue that, strictly speaking, all games are an "adversarial setting," the sort of situation that cryptography is designed for. But I still find it rather hard to accept that something like Super Mario Bros. or an FPS needs AES-CTR.

I don't know... Nearly any game can theoretically be elevated to the level of professional play. And even if not, there's still the cheating angle. Maybe there isn't money being won or lost there, but people will go to great lengths to get their imaginary numbers to go up, and that comes at the expense of someone else's experience of a fun, fair game. Which surely has a cumulative effect.

sacundim:

Nearly any game can theoretically be elevated to the level of professional play. And even if not, there's still the cheating angle.

Let me try to spell out the scenario I'm picturing in my mind here. Some gamer dude is going to learn how to do this:

I don't rule it out completely, but I'm having a really hard time picturing it.

On the other hand card counting in blackjack is a real thing and looked down upon.

tending:

You're not thinking about bots.

sacundim:

You're right, I'm not.

no-bugs:

THANKS for nitpicks (of course...; though I should note that 62000 years is even better than 487 years for our purposes :-P). I hope I fixed them now.

but rather whether the RNG is consuming resources that would otherwise have been used by something more important.

Yes, but this is a statement which applies to everything out there, and all universal statements are pretty much useless. In practice, though, chances of RNG becoming a bottleneck are extremely low; moreover, if it ever happens, we can be 99.(9)% sure that it happens due to TONS (like 1e4+) of RNG calls within one "network tick". Ok, in such a (IMO still rather theoretical, but well - I cannot say I've seen everything) case we could play a game which combines crypto-RNG between ticks and non-crypto-RNG within the tick, getting all the performance we want, and preventing predictability from propagating between ticks (essentially eliminating pretty much all the problems due to predictability/reversibility). I've added it to the article (but I still DO NOT recommend using non-crypto RNGs across the ticks).
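The crypto-between-ticks / non-crypto-within-tick scheme described above can be sketched like this (my own illustration of the idea; class and method names are made up):

```python
import secrets, random

class TickHybridRNG:
    """A crypto source (secrets, i.e. the OS CSPRNG) provides a fresh
    seed at every network tick; a fast non-crypto PRNG serves the many
    calls within the tick. Predictability cannot propagate across
    ticks because each tick starts from unpredictable seed material."""

    def __init__(self):
        self.fast = random.Random()
        self.new_tick()

    def new_tick(self):
        # Crypto-strength reseed once per tick (the expensive part).
        self.fast.seed(secrets.randbits(256))

    def roll(self, sides=6):
        # Cheap non-crypto call; thousands of these per tick are fine.
        return self.fast.randint(1, sides)

rng = TickHybridRNG()
rolls = [rng.roll() for _ in range(10)]
rng.new_tick()   # next tick: fresh unpredictable seed
print(rolls)
```

The cost of the crypto RNG is thus amortized over an entire tick's worth of rolls.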

Cyb3rWaste:

Here is a foolproof way:

return 3; // decided by fair dice roll

davelupt:

The needful

rabbitlion:

It's worth noting that the SecureRandom issue was only a problem in the Android version of Java, it wasn't present in the reference implementation OpenJDK.

no-bugs:

Yep, I've mentioned it now, THANKS!

maxc01:

the rabbits are great

ZMeson:

Another thing to mention is that one should use standard functions (if available) or a widely used and tested library for the transformation of uniform deviates to other distributions. This would help developers avoid "modulo bias" and generate correct Gaussian distributions (when needed), etc.... (I don't know how much Gaussian, Poisson, and other distributions are needed in games, but avoiding modulo bias is important. And C++ at least has a standard way to avoid the bias using std::uniform_int_distribution.)
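Modulo bias is easy to make visible by exhaustively enumerating a tiny RNG. Here a uniform 3-bit source (values 0..7) is reduced mod 6, and the rejection-sampling fix (the approach `std::uniform_int_distribution` takes in spirit; the helper name is mine) is shown alongside:

```python
# A uniform 3-bit source reduced mod 6 over-represents 0 and 1,
# because 8 values don't divide evenly into 6 buckets.
biased = [v % 6 for v in range(8)]
counts = [biased.count(r) for r in range(6)]
print(counts)  # [2, 2, 1, 1, 1, 1] -- 0 and 1 are twice as likely

def unbiased_mod6(raw_values):
    """Reject raw values falling in the uneven tail, then reduce."""
    limit = (8 // 6) * 6         # largest multiple of 6 <= 8
    return [v % 6 for v in raw_values if v < limit]

fixed = unbiased_mod6(range(8))
print([fixed.count(r) for r in range(6)])  # [1, 1, 1, 1, 1, 1]
```

With real 32-bit sources the skew is tiny per call but systematic, which is exactly the kind of thing a chi-square test over game events will catch.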

no-bugs:

See the passage about somebody producing an "optimised" (and Badly Wrong) version of std:: which has made it into production (ok, it wasn't random, it was a hash, but the very same thing can happen with ANY library which doesn't have its focus on maths, and apparently at least SOME std:: implementations don't). If I am running a Really-Critical Thing, I am deathly afraid of this kind of stuff (an "optimised" version of a 3rd-party library, modified by a "smarty pants" developer who doesn't have a clue what he's doing, and which ruins the whole thing)...

ZMeson:

Sure, but most gaming programmers won't be familiar with dealing with different biases nor where they originate from. For them, standard libraries (or a well-respected math or random library) are more likely to do things correctly than they are on their own. In the case of a library having an incorrect implementation, that is the reason why people should be putting their results through testing (as you mention).

no-bugs:

In a sense - yes, however IMO the following logic applies. (a) IMNSHO, proper chi-square testing requires qualification which is at least comparable to writing a reasonably good implementation of near-trivial shuffling stuff. (b) If so, why take chances tracking updates? Libraries change all the time, and making sure that all the stats tests are re-run after EACH change to the library is yet another thing to remember about (and easy to forget about too)... If we could freeze 3rd-party stuff - yes, testing would fly, but we usually cannot...

ZMeson:

IMNSHO, proper chi-square testing requires qualification which is at least comparable to writing a reasonably good implementation of near-trivial shuffling stuff.

Math qualifications, sure. Some places though may have SQA people or statisticians on contract who can look at data but are unable to program. I understand where you are coming from, but I think many small software houses may fall into this category of not having many "mathy" developers but still able to have an SQA person be able to run chi-square testing.

no-bugs:

Well, I'd rather not recommend dev shops who have NO math-aware devs, to work with Really RNG-Critical stuff (such as lotteries/casinos/...). There are LOTS of things to do wrong in this regard outside of 3rd-party libraries...

For not-really critical RNGs, it will fly though, and risks are not drastically high.

ZMeson:

I'd rather not recommend dev shops who have NO math-aware devs, to work with Really RNG-Critical stuff (such as lotteries/casinos/...)

Absolutely! :)

patlefort:

What about the rdrand instruction? It uses hardware RNG from intel chip.

Zarutian:

Are you sure that your code will only run on Intel x86-64 processors?

Why shackle it to a processor that is slowly being phased out?

shooshx:

This guy sure likes exclamation marks(!!!)

YeOldeDog:

It's an interesting field. The perception of what is a 'fair' outcome, to a human mind, varies with sequence context. For example, if person A has sat for three days watching me flip a coin, the fact that I call out 'heads' six times in a row at one point will not trouble them at all. But if person B enters the room at the start of that particular sequence, their judgement would be that I was cheating.

doihaveto:

Similarly to /u/sacundim, I feel like the article would benefit from being a little more nuanced about when and where to use RNGs of different strengths. Because a claim like this:

Especially as there are much better and easily available alternatives – DON’T use Mersenne Twister for games

... is only true in some specific cases. Such as: if your game involves people playing poker for money, and you generate one new deck of cards every 5 minutes, then yes, definitely use strong crypto RNG! You'd be foolish not to. :)

But on the other hand, if you're playing an RTS game with friends, and on each frame the AI has to roll the dice a bunch of times for each unit to determine what they should do, that crypto RNG would be expensive overkill. Mersenne Twister or even TinyMT should be more than enough. And if predictability is really a concern, one could use a crypto RNG to re-seed the MT on an interval.

I think part of the art in conveying "best practices" to novice readers also means explaining how experienced developers evaluate the various engineering tradeoffs in a given situation (such as problem fit, runtime cost, or dev cost). And just telling novices to use crypto RNGs everywhere does not do that... Just my $0.02 :)

no-bugs:

crypto RNG would be expensive overkill.

How expensive though? With 20 network ticks/second and 10 "game worlds" running per core, to get 1% of load spent on AES-CTR RNG, you'll need about 20000 dice rolls per network tick. Well, in my books it is still a "damn lot".

EDIT: actually, there is an easy solution to this problem (which I'm still not convinced of arising in practice, but well... ;-)). It's using AES-CTR across the ticks, and non-crypto-RNG within the tick; this way we'll have the best of both worlds, i.e. both "guaranteed" unpredictability and speed. I've added it to the article. THANKS! :-)

doihaveto:

How expensive though? With 20 network ticks/second and 10 "game worlds" running per core, to get 1% of load spent on AES-CTR RNG, you'll need about 20000 dice rolls per network tick.

Not saying not to use crypto RNGs, just that - horses for courses. Why pay for something when you don't need it? :)

(EDITED for clarity! sry, had a wall of text before :) )

EDIT: actually, there is an easy solution to this problem ... It's using AES-CTR across the ticks, and non-crypto-RNG within the tick

Like any good compromise, it seems to work well enough, and makes nobody happy! ;)

no-bugs:

Why pay for something when you don't need it? :)

Well, because understanding whether you need it or not, is much more expensive than paying those two cents ;-(. Potential abuses are Really Sneaky, and stuff such as lifting fog-of-war (due to AI RNG being reconstructible) may be Really Difficult to notice (and abusable too, especially if you're using 3rd-party library=well-known code for it).

notfancy:

and you generate one new deck of cards every 5 minutes

If you do, never ever start a shuffle from a sorted deck every time. Reshuffle the last shuffle, so that your state is the PRNG state + the deck.
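notfancy's advice, sketched out (my own illustration: the effective state becomes PRNG state plus current deck order, so even a weak PRNG can't collapse the deck back to a small set of reachable permutations):

```python
import random

class Deck:
    """Keeps the deck between deals and reshuffles the previous
    order, rather than restarting from a sorted deck each time."""

    def __init__(self, rng=None):
        self.rng = rng or random.SystemRandom()
        self.cards = list(range(52))  # sorted only once, at startup

    def reshuffle(self):
        # Fisher-Yates over the *previous* order, in place.
        for i in range(51, 0, -1):
            j = self.rng.randint(0, i)
            self.cards[i], self.cards[j] = self.cards[j], self.cards[i]
        return list(self.cards)

deck = Deck()
deal1 = deck.reshuffle()
deal2 = deck.reshuffle()   # starts from deal1's order, not from sorted
print(sorted(deal2) == list(range(52)))  # still a full deck: True
```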

Osmanthus:

I think that using an algorithm for gambling games is invalid. One principle of gambling is that "the dice have no memory": the likelihood of any particular outcome does not change because of a previous roll. However, this does not hold for any algorithm, because the likelihood of outcomes does change as rolls are made. You can see this by realizing that the list of numbers generated maps to a finite ordered set, and each roll removes an option from the list, so the ratio of possibilities changes. This should be exploitable.

CanYouDigItHombre:

A friend once told me, if I want a simple RNG just pick a large number that's much bigger than int32 and keep adding it.

        for(int i=0; i<100; ++i)
        {
            Console.WriteLine((548784213254848423L * i)& 0x7FFFFFFF);
        }

Results:

0
426098599
852197198
1278295797
1704394396
2130492995
409107946
835206545
1261305144
1687403743
2113502342
392117293
818215892
1244314491
1670413090
2096511689
375126640
801225239
1227323838
1653422437
2079521036
358135987
784234586
1210333185

Pretty shitty - the last digit is counting down, but the rest of it looks random.

krokodil2000:

Check out the second to last digit:

9
9
9
9
9
4
4
4
4
4
9
9
9
9
8
4
3
3
3
3
8
8
8

Looks so random.

CanYouDigItHombre:

Ah shit lol. I wonder if my number was shitty or if he didn't stress the shitty part enough

no-bugs:

In fact, it looks like a home-grown LCG (more precisely, a Lehmer RNG): see https://en.wikipedia.org/wiki/Linear_congruential_generator . Well, I've seen RNGs worse than that ;-).
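For contrast with the snippet above, a textbook Lehmer generator (MINSTD, the parameters behind C++'s `std::minstd_rand`) has the same multiply-and-reduce shape, but a prime modulus and a carefully chosen multiplier, so the low digits don't fall into the obvious countdown pattern:

```python
M = 2**31 - 1   # Mersenne prime modulus
A = 48271       # full-period MINSTD multiplier

def minstd(seed, n):
    """Generate n MINSTD (Lehmer) outputs from the given seed."""
    out, state = [], seed
    for _ in range(n):
        state = (state * A) % M
        out.append(state)
    return out

print(minstd(1, 3))
```

Still trivially reversible and statistically weak by modern standards, but at least the per-digit patterns of the home-grown version disappear.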

systembreaker:

That's all well and good, but for the sake of argument: why are you writing your own RNG and spending all that effort testing it, as in the article? Use a mature one.

Grimy_:

As expected from ithare, the writing is extremely poor. That’s too bad, the subject matter was interesting!

no-bugs:

Well, if the largest complaint about such an article is about quality of writing - I think I've done my job pretty well :-).

ahminus:

As someone who has worked directly in the industry on exactly the kind of games your writing is intended for, the content is excellent, and I don't even find the writing (well, translation) to be seriously lacking.

no-bugs:

Well, I already have one. Finding another one to proofread on such a short notice will be a pain in the ahem, neck... :-(. EDIT: asked another supposedly English speaker, and hopefully fixed this one.

no-bugs:

Could be, but good proofreaders are damn difficult to find, especially when they need to proofread 5K words over a weekend :-(.

ZMeson:

There's plenty of good proofreaders on reddit! ;-)

I'm joking ... a bit. Shoot, you present good topics in interesting ways. I'm OK with a bit of bad grammar and am willing to point things out (and even sometimes correct them) free of charge. You're overall doing a service to the community and I appreciate your work! :)

On the other hand, it IS important to review your RNG algorithms, and MAYBE even to publish them. However, before pushing the button “upload code to your website”, you need to make sure that your RNG is really And you won’t be able to check it yourself (even if you’re a security pro) – just because it is YOUR code, it always feels better than it should (you “know” how it works instead of scrutinizing it). That’s why it is paramount to have a THIRD-party review (ideally – by a different company specializing in security, though you MIGHT be able to get away with a completely different team within your own company).

This is really the only paragraph that stood out to me in terms of grammar. You have an incomplete sentence ending with "you need to make sure that your RNG is really". I'm also just having trouble following it in general -- too many parenthetical statements. If you can rewrite it to use only one or two parenthetical statements, I think it will be easier to read.

no-bugs:

Thanks! Hope that I've fixed it now...

ZMeson:

Much, much better.

jms_nh:

If I were retired I'd do it for you. I have a natural eye for proofreading + keep telling myself I'll open a proofreading business. (I find typos in the newspaper all the time + it bugs the crap out of me.)

but right now I have a real job + a family

no-bugs:

but right now I have a real job + a family

Yep, that's the problem :-(. Whoever is able to do it - doesn't have time to do it, and whoever has time - well, is not ideal for the job :-(

Fiennes:

I wish I had the time to help - I love reading your stuff and honestly the quality of the writing - knowing that English is not your native language - doesn't really get in the way of the material.

no-bugs:

Thanks for your support! :-) BTW, if you see something specific which feels outrageously wrong (like the thing p_ql has pointed out above, though please don't report missing articles and "it's"/"its" misuses, these are two things which I will leave to the editor for sure) - feel free to give me a shout, I will be happy to fix these things.

no-bugs:

I'm writing one 5000-word post a week, and keeping more than two (upcoming and next one) within my head would be too much for me.

no-bugs:

this "poor writing" reputation will spread and stick

I hope not. These writings are officially positioned as "beta" (with a Big Fat disclaimer at the very top of each post), and there will be a completely different editor (="more than proofreader") to work on it before it gets published as a book. So, whoever doesn't want to read half-baked stuff - can always read my articles in Overload (where they have their own editor), or wait until the book is ready...

TillWinter:

I'm confused - what age group is the target audience? The information density and quality seem to target children around age 10, but the topic is so specific. So confused. Can someone explain what IT Hare is?

no-bugs:

I'm confused, what age group is the target audience?

Judging from the quotes on the KS page

https://www.kickstarter.com/projects/341701686/the-most-comprehensive-book-on-multiplayer-game-de/description

target audience is mostly about Managing Directors, Chief Architects, Core Tech Programmers, Technical Directors, CTOs, and so on :-). My (somewhat) educated guess is that on average they should be a little bit older than 10yo ;-).
