168 MM per 24 hours (extrapolated, and with a small message), but I want more. Any ideas?

OptiBiz1

Active Member
We reached 7 million messages per hour on our dedicated server today (see the attached image with the server HW config; not that it matters for the performance, but there are actually 6 * 2 TB SATA RAID10 disks instead of 6 * 1 TB. The SSD disks are the interesting part, since the database is stored on them). We use an external delivery server supplier.

A pretty small message, but still 168 MM/24 hours (extrapolated speed, 7 * 24), which is amazing. Still, the CPU and RAM are not that heavily utilized: RAM peaks at only 21% and CPU at 60% (the server CPUs are, to my knowledge, E5-2670 v2, so pretty old), so it doesn't seem that we can fry bacon on the server. :)
(I am fully aware that these speeds will not hold once we have multiple customers on the server running multiple campaigns. Of course things will slow down per server.)

I have this wild theory that the IP stack might be a bottleneck here. This is waaaaaaaaay outside the borders of my Linux skills, which is why I am posting it here:

The question is whether the IP stack can get "crowded" when pushing data through one IP address.
Each of our servers has five IP addresses, and IF an IP gets "crowded" (e.g. a certain amount of RAM is allocated to it, or a maximum number of messages in its queue is reached), then it might be an idea to try sending data over the extra IP addresses the server has. It's still the same NIC, which can take some 1 Gbps (in theory), and the extra IP addresses belong to that same NIC, so it's not a question of NIC capacity, rather of the capacity of IP.

Another idea is that we might become more efficient with a better queue handler on our side: something that gathers all the outgoing messages and funnels them into existing connections, bursting the delivery server over a preconfigured number of connections on our side.

1) Any ideas on the matter of the IP stack, anyone?

2) Is there some queue that handles this very efficiently? We use SwiftMailer, which I think uses Postfix, and there is queue handling built in. Maybe there is something better, or some super efficient config setting that we can apply?

3) Could it be that if we store the PHP files on the SSD too, we gain some significant speed advantage there?

Thank you!
 

Attachments

  • server.jpg (75.5 KB)
1) Any ideas on the matter of the IP stack, anyone?
I don't think an IP address can get crowded one way or another.
It's all about the network stack used at the OS level, and these things can be configured a lot.
For example, you can configure the number of allowed file descriptors, e.g.: https://unix.stackexchange.com/questions/84227/limits-on-the-number-of-file-descriptors / https://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/ / https://unix.stackexchange.com/questions/447583/ulimit-vs-file-max

2) Is there some queue that handles this very efficiently? We use SwiftMailer, which I think uses Postfix, and there is queue handling built in. Maybe there is something better, or some super efficient config setting that we can apply?
There is a delivery server type in MailWizz called Pickup Directory. Instead of sending the emails to a remote SMTP server, which generally takes a few ms each, it writes the email into a folder of your choice; you can then use a 3rd-party tool to pick those emails up and process them as you wish, e.g. send them to a queue or to other servers.
A while ago I developed this tool: https://github.com/twisted1919/mdp-go which you can use as an example. It's not supported anymore, in the sense that I'll never update it, but it does contain valid ideas you can use to make MailWizz put the emails in a folder and then have a 3rd-party tool pull from that folder.

3) Could it be that if we store the PHP files on the SSD too, we gain some significant speed advantage there?
No, I don't think that's really your bottleneck in any way.
 
We reached 7 million messages per hour on our dedicated server today (see the attached image with the server HW config; not that it matters for the performance, but there are actually 6 * 2 TB SATA RAID10 disks instead of 6 * 1 TB. The SSD disks are the interesting part, since the database is stored on them). We use an external delivery server supplier.
As per our private discussion, you are doing great! Again, congrats ;)

1) Any ideas on the matter of the IP stack, anyone?
Consider your IP just as a reference / identification of your server as a destination. Your real concern could be the network bandwidth your DC (datacenter) provides; 1 Gbit/s seems good, but is there any limit imposed by the DC? I can see they charge €2 per extra TB, but the maximum after 100 TB is not mentioned. Check whether they have more bandwidth available.

3) Could it be that if we store the PHP files on the SSD too, we gain some significant speed advantage there?
Not exactly, but kind of yes, if by PHP files you mean all the files of MailWizz. If you do a lot of file access (MailWizz images etc., because of storing thousands of templates, files, and so on), then an SSD can be beneficial thanks to its faster read and write speeds. So yes, using it helps the system access files a bit faster.
 
Instead of increasing pressure on one server, why not scale horizontally?
1. Install MailWizz on multiple servers, with the same PHP app on each (each server needs its own license): server 1: maindomain.com, server 2: server2.maindomain.com, ...
2. Use a remote database.
3. For file consistency you can use GlusterFS (we wrote a plugin for assets and moved all assets to a CDN).
4. In this case the load balancing problem is slightly different. It's enough to send all requests to maindomain.com; you don't need to forward requests to the other servers (which means you don't need to handle sessions!). Then let the cron jobs run on each server.
Finished! It may not be the "right" way, but it really works!
 
Instead of increasing pressure on one server, why not scale horizontally?
1. Install MailWizz on multiple servers, with the same PHP app on each (each server needs its own license): server 1: maindomain.com, server 2: server2.maindomain.com, ...
2. Use a remote database.
3. For file consistency you can use GlusterFS (we wrote a plugin for assets and moved all assets to a CDN).
4. In this case the load balancing problem is slightly different. It's enough to send all requests to maindomain.com; you don't need to forward requests to the other servers (which means you don't need to handle sessions!). Then let the cron jobs run on each server.
Finished! It may not be the "right" way, but it really works!

Thank you! I am sure everything you say makes perfect sense; it's just that none of us is skilled enough to set up and maintain such an environment, which is why we keep it simple. Once we have enough cashflow, I will hire a few Linux geeks to run all our servers.
 
Hi @OptiBiz1
How are you sending 7 million messages per hour? Can you show me your cron settings or any server settings? I also have a high-config dedicated server, but it takes more than 12 hours to send 5M messages. The MTA is working fine and the IPs' reputation is good, but MailWizz takes a huge amount of time to deliver 5M messages.
Thanks in advance.
 
Hi @OptiBiz1
How are you sending 7 million messages per hour? Can you show me your cron settings or any server settings? I also have a high-config dedicated server, but it takes more than 12 hours to send 5M messages. The MTA is working fine and the IPs' reputation is good, but MailWizz takes a huge amount of time to deliver 5M messages.
Thanks in advance.

I can't show you anymore, and even if I could, you would not be able to re-create our situation; much has changed since the first message above. We wrote our own queue handler, which from MW's point of view looks like a virtual delivery server: it creates a configurable number of SMTP connections and then reuses those to nuke the delivery server(s) with messages. With PHPMailer/SwiftMailer we need to run 1000 concurrent connections towards PMTA, but with a queue handler written in Go we are now down to 60 concurrent connections delivering at the same speed as 1000 did for PHP. The new queue handler still has some bugs (it's too fast), so for now we still use the one implemented in PHP. No matter which variant we use, we have gone for horizontal scaling, since that was much cheaper.

Doing it with our own queue handler makes the sending much more efficient: no need to create a new process for every send, and we control how many sendouts we do at all times. Thereby we save on CPU and RAM. With the Go variant, CPU and RAM usage is even lower than with the PHP variant.

We can send like crazy now. The only limit is the NIC; during the maxed-out tests we didn't see it at first, but after a while it became apparent that the 1 Gbps NIC is the main bottleneck.
 