
None of the recent changes to the Bayes storage backend were made lightly. Most updates were run through various benchmarks to make sure they did not slow down the system.

I like to think that this benchmark represents the operations most often performed against the Bayes subsystem. As always, feedback is appreciated, and if you can think of a way to improve the benchmark, please speak up.

I would also like to publish BayesBenchmarkResults from various systems, setups, and backends.

For this benchmark we rely on six email buckets that are processed across the phases. Three of the buckets should be ham and three should be spam. The buckets should be in rough date order, with the oldest messages in bucket one. In addition, there are two "forget" buckets that consist of half the messages from the first ham and spam buckets (a sketch for building them follows the file list below).

The benchmark code requires the bucket files to have specific names:

hambucket1.mbox
spambucket1.mbox
hamforget1.mbox
spamforget1.mbox
hambucket2.mbox
spambucket2.mbox
hambucket3.mbox
spambucket3.mbox
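
If your corpus is already in standard mbox format, the forget buckets can be carved out of the first buckets with ordinary text tools. A minimal sketch, assuming each message starts with a "From " separator line:

    # Copy the first half of the messages from bucket 1 into the forget bucket.
    total=$(grep -c '^From ' hambucket1.mbox)
    awk -v max=$((total / 2)) '/^From / { n++ } n <= max' hambucket1.mbox > hamforget1.mbox

    # Same again for the spam side.
    total=$(grep -c '^From ' spambucket1.mbox)
    awk -v max=$((total / 2)) '/^From / { n++ } n <= max' spambucket1.mbox > spamforget1.mbox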

I suggest at least 1000 messages per bucket; it certainly should not be less than 200, and maybe 300 depending on how much autolearning happens in phase 2. Obviously, the more messages you have, the more accurate your results will be, but your benchmarks will take longer. I tend to use 2000 messages per bucket (1000 in the forget buckets), and my tests take between 40 minutes and 2 hours to finish, depending on the backend and setup.

The benchmark works in several phases. After each phase we may run sa-learn --dump magic, check the size of the database (assuming we can), and dump the database for reference with sa-learn --backup. The benchmark is generally run with auto-expire turned off.
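
The per-phase bookkeeping amounts to a few commands. A sketch, assuming a file-based backend under ~/.spamassassin (the backup file name is just an example):

    sa-learn --dump magic                    # token and message counts
    du -sh ~/.spamassassin/bayes_*           # database size, if file-based
    sa-learn --backup > phase1-backup.txt    # dump the database for reference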

Phase 1:
This is the learning phase. Here we run sa-learn on hambucket1.mbox and spambucket1.mbox, recording the timings for each.
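
The timed commands look roughly like this (--mbox tells sa-learn that the input files are in mbox format):

    time sa-learn --ham  --mbox hambucket1.mbox
    time sa-learn --spam --mbox spambucket1.mbox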

NOTE: sa-learn here will automatically sync.

Phase 2:
This is the spamd scanning phase. We start up a spamd and then start a forking script that throws all the messages in hambucket2.mbox and spambucket2.mbox at the daemon using spamc.

NOTE: If available, this phase will learn/update to the bayes_journal. Once all messages have been scanned, an sa-learn --sync is performed to sync the journal file to the database.
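
A simplified version of this phase, using formail (from procmail) to split the mboxes and feeding messages one at a time rather than from a forking script:

    spamd -d                      # start the daemon; give it a moment to come up
    for mbox in hambucket2.mbox spambucket2.mbox; do
        time formail -s spamc < "$mbox" > /dev/null
    done
    time sa-learn --sync          # merge the journal file into the database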

Phase 3:
This phase performs an sa-learn --force-expire.
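
As with the other phases, the expire run is timed:

    time sa-learn --force-expire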

Phase 4:
This is the forget phase. We use sa-learn to forget all the messages in hamforget1.mbox and then do the same for spamforget1.mbox.
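
A sketch of the timed commands (--forget removes the messages' tokens from the database):

    time sa-learn --forget --mbox hamforget1.mbox
    time sa-learn --forget --mbox spamforget1.mbox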

Phase 5:
This is the spamassassin scan phase. Here we scan hambucket3.mbox and then spambucket3.mbox using the standalone spamassassin script.
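
Roughly, again splitting the mboxes with formail; the scan output is discarded since only the timing matters:

    time formail -s spamassassin < hambucket3.mbox > /dev/null
    time formail -s spamassassin < spambucket3.mbox > /dev/null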

I suggest running each benchmark 3 times to make sure your results are not influenced by other system activity.
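
For example, with a hypothetical driver script that wraps the phases above:

    for run in 1 2 3; do
        ./bayes_benchmark.sh > "run-$run.log" 2>&1    # hypothetical driver
    done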

TODO: Post the code. The code is semi-ugly and not really ready for general consumption; it is close, but I'm just not ready yet. Soon, though. If you really want to get the code and run the benchmarks, just keep bugging me and I'll eventually get around to packaging it up. In fact, I'll probably end up just putting it directly in the source. (MichaelParker)
