Socialize is well known for the robust infrastructure of its drop-in social platform, which handles over 20 requests per second (more than 1.5 million requests in the past 24 hours) across thousands of mobile apps and millions of end users. But how does a team of just ten employees build and manage such an infrastructure? Here's a sneak peek behind the curtain so you can see how it's done.
Our databases, servers, Partner API and SDKs have been architected to always push metadata to our systems. Each component is built as stand-alone infrastructure, which lets us audit how each piece is performing and optimize it independently. We then use Splunk to turn our massive log files into actionable intelligence, and TeamCity for continuous integration. Even further behind the scenes we are using Nginx, Apache, load-balanced Amazon AWS and RDS, Robotium, Selenium, gh-unit, OCMock, KIF, Google App Engine and other best-in-class technologies.
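To picture what "pushing metadata to our systems" can look like, here's a minimal sketch of structured logging: each component emits a self-describing JSON line that a log-analysis tool like Splunk can index. The field names, component names, and helper function are illustrative assumptions, not Socialize's actual schema.

```python
import json
import time

def format_log_event(component, event, **fields):
    """Serialize a metadata event as one JSON line (hypothetical schema).

    Splunk-style tools index key=value or JSON lines, so each component
    can push self-describing events into a shared log pipeline.
    """
    record = {
        "ts": fields.pop("ts", time.time()),  # event timestamp (epoch seconds)
        "component": component,               # e.g. "partner-api", "android-sdk"
        "event": event,                       # e.g. "request", "error"
    }
    record.update(fields)                     # any extra fields ride along
    return json.dumps(record, sort_keys=True)

# Example: an API node records one request's latency.
line = format_log_event("partner-api", "request",
                        ts=1300000000, path="/v1/comment", latency_ms=42)
print(line)
```

Because every line is machine-readable, the same pipeline serves both auditing individual components and aggregate queries across all of them.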
It all comes together on a 50″ flatscreen monitor and a 32″ iMac that our Quality Assurance engineer watches and all our employees can see. We even have a flashing green light that turns red when a build breaks or we detect latency in our system, and all our developers are notified when there's a problem.
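The light's behavior boils down to one rule: red if anything is broken, green otherwise. Here's a minimal sketch of that decision, assuming the light is driven by a list of CI build statuses and recent latency samples; the function name, status strings, and latency threshold are all assumptions for illustration, not our actual configuration.

```python
def light_color(build_statuses, latencies_ms, latency_threshold_ms=500):
    """Decide the dashboard light color (hypothetical rule).

    Red if any build is broken or any measured latency exceeds the
    threshold; green otherwise.
    """
    if any(status == "FAILURE" for status in build_statuses):
        return "red"   # a broken build always trips the light
    if any(ms > latency_threshold_ms for ms in latencies_ms):
        return "red"   # so does a latency spike
    return "green"

# All builds passing, latencies healthy -> green light.
print(light_color(["SUCCESS", "SUCCESS"], [120, 340]))   # green
# One broken build -> red light (and developers get notified).
print(light_color(["SUCCESS", "FAILURE"], [120, 340]))   # red
```

A poller would feed this function fresh statuses from the CI server and latency samples from the log pipeline, then flip the physical light and fire notifications whenever the color changes.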
Here’s a video of Sean describing the systems and how we use them to ensure Socialize is always available and performing flawlessly in your app:
More pictures of our system: