Many of you are curious about our network configuration and how we distribute such a massive workload across such a small team. We put a lot of thought into designing the system, keeping it as simple as possible to eliminate bottlenecks and bugs.
Let's start with the network topology and server layouts.
Our first Digital Ocean Droplet (PA-HEAD1):
- 8GB RAM
- 4 CPUs (2.00 GHz)
- 80GB SSD storage
- 5TB Monthly Transfer
This is our main system. It runs this very blog (more on that later), hosts the API for our OTA system, and is the central point for distribution. It also serves as one of the download nodes, in case the others are tied up.
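To give a sense of what the OTA API might hand back to a device checking for updates, here is a purely illustrative update manifest. The field names, filename, checksum, and URL are all placeholders we made up for this example, not the actual API:

```json
{
  "updates": [
    {
      "filename": "pa_device-4.4-RC2.zip",
      "version": "4.4-RC2",
      "md5": "d41d8cd98f00b204e9800998ecf8427e",
      "url": "http://get.example.org/pa_device-4.4-RC2.zip",
      "size": 198273645
    }
  ]
}
```

The client compares the advertised version against what it is running, then fetches the zip from whichever download node DNS hands it.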
The remaining Digital Ocean Droplets (PA-NODE1 - PA-NODE5) were all cloned from the same snapshot, with slightly less powerful specs:
- 2GB RAM
- 2 CPUs (2.00 GHz)
- 40GB SSD storage
- 3TB Monthly Transfer
All of these nodes are identical and spread throughout the world (2 in New York, 1 in San Francisco, 1 in Amsterdam, and 1 in Singapore). They handle the distributed downloads and give users regionalized access to the files. We use round-robin DNS to distribute the download links.
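Round-robin DNS needs nothing more than multiple A records for the same name; resolvers rotate through them, spreading download requests across the nodes. The zone fragment below is a minimal sketch with made-up hostnames and addresses, not our actual zone:

```
; illustrative BIND zone fragment - name and IPs are placeholders
$TTL 300
get     IN  A   198.51.100.10   ; PA-NODE1 (New York)
get     IN  A   198.51.100.11   ; PA-NODE2 (New York)
get     IN  A   203.0.113.20    ; PA-NODE3 (San Francisco)
get     IN  A   192.0.2.30      ; PA-NODE4 (Amsterdam)
get     IN  A   192.0.2.40      ; PA-NODE5 (Singapore)
```

Keeping the TTL short encourages clients to keep rotating through the pool rather than pinning to a single node.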
All of our current production Digital Ocean Droplets run Debian 7 "Wheezy" (64-bit) with a small software stack consisting of:
- NGINX (with php5-fpm)
- Node.js (only on PA-HEAD1 as of now)
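On the serving side, NGINX hands PHP requests to php5-fpm over FastCGI. A minimal server block of the kind this stack implies might look like the following; the domain, web root, and socket path are assumptions (the socket shown is the common default on Debian 7), not our production config:

```
server {
    listen 80;
    server_name example.org;   # placeholder domain
    root /var/www;
    index index.php index.html;

    location ~ \.php$ {
        include fastcgi_params;
        # common php5-fpm socket location on Debian 7; adjust to your pool
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```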
Our automated build environment consists of the following:
(3) Dell R710 servers running ProxMox on the bare hardware, with dedicated virtual machines for building the ROM. The dedicated VM specs are as follows:
- 16 CPUs
- 24GB RAM
- 500GB on a raw RAID5 array of 8x146GB 15,000 RPM SAS drives
- Running Ubuntu Server 12.04.4
(1) VM running Debian 7 "Wheezy" that acts as our Jenkins buildbot director
(1) VM with specs similar to the buildbots, for test builds by team members
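A Jenkins job driving one of these buildbots typically boils down to an "Execute shell" build step following the usual AOSP-style flow. The sketch below is illustrative only; the manifest URL, branch, lunch target, and job count are placeholders, not our actual job configuration:

```shell
#!/bin/sh
# illustrative Jenkins "Execute shell" step - all specifics are placeholders
set -e
cd "$WORKSPACE"
repo init -u https://example.org/manifest.git -b kitkat   # hypothetical manifest
repo sync -j16
. build/envsetup.sh
lunch pa_device-userdebug   # placeholder device target
make -j16 otapackage
```

The director VM only schedules jobs and collects artifacts; the heavy `repo sync` and `make` work runs on the R710-backed build VMs.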
As mentioned earlier, we like to keep it simple. Stay tuned if you would like to dig into the technical aspects of the Paranoid Android Network.