Why Bootdev — Dynamic CDN

In the old days, we pushed static assets (images, CSS/JS, WOFF fonts, etc.) to a CDN so that clients could download them from a geographically optimised location.

Around the end of 2012 and early 2013, a new idea emerged: put everything behind the CDN. That way you can strip out the complex caching architecture of memcache, a page-cache cluster (Varnish), microcaching and so on, leaving just one cache layer with everything on the CDN, as in the architecture below.

(Figure: dynamic CDN architecture; see the reference link below.)

Your website's domain name points directly at the CDN via a CNAME record, and the CDN in turn points at your load balancer or web server, so it acts as a reverse proxy. A CREATE request bypasses the CDN entirely and goes to the web server. On an UPDATE, the CDN fetches the new content from the web server and invalidates its own copy. On a DELETE, the CDN passes the request through to the web server and invalidates its cache. Reads are served from the CDN, not from the web server. In this way the full CRUD cycle works through the CDN.
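The read/write split above can be sketched as the kind of cache behavior you would pass to CloudFront's `create_distribution` API via boto3. This is a minimal sketch, not a full distribution config; the origin ID and TTL are assumptions.

```python
def dynamic_cdn_behavior(origin_id: str) -> dict:
    """Cache GET/HEAD reads at the edge; pass all writes through to the origin."""
    return {
        "TargetOriginId": origin_id,
        "ViewerProtocolPolicy": "redirect-to-https",
        # Allow the full CRUD verb set; POST/PUT/PATCH/DELETE always hit the origin.
        "AllowedMethods": {
            "Quantity": 7,
            "Items": ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"],
            # Only safe reads are actually cached at the edge.
            "CachedMethods": {"Quantity": 2, "Items": ["GET", "HEAD"]},
        },
        "MinTTL": 0,  # assumption: let origin Cache-Control headers drive TTLs
    }
```

The key point is the asymmetry: all seven methods are allowed through, but only GET and HEAD are ever served from cache.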

You will still need your own cache-invalidation strategy, such as sending invalidation requests to CloudFront, or versioning your objects/URLs.
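The "send an update to CloudFront" strategy maps onto CloudFront's real `create_invalidation` API. A minimal sketch, assuming boto3: the helper only builds the request payload, and the actual call (commented out, since it needs AWS credentials and your distribution ID) is shown for context.

```python
import time

def invalidation_batch(paths: list[str]) -> dict:
    """Build the InvalidationBatch payload to send after an UPDATE or DELETE."""
    return {
        "Paths": {"Quantity": len(paths), "Items": paths},
        # CallerReference must be unique per request; a timestamp is a common choice.
        "CallerReference": f"invalidate-{time.time_ns()}",
    }

# import boto3
# boto3.client("cloudfront").create_invalidation(
#     DistributionId="YOUR_DISTRIBUTION_ID",  # assumption: filled in per site
#     InvalidationBatch=invalidation_batch(["/node/42", "/frontpage"]),
# )
```

The versioned-URL alternative avoids this API entirely: a new URL (e.g. a changed query string or filename hash) is simply never in the cache to begin with.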

In practice, certain URLs have to bypass the cache and go straight to the web server; that is what makes Drupal work behind a CDN.

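A minimal sketch of such bypass rules. The specific path prefixes are assumptions based on a typical Drupal setup: anything session-specific (login, admin, cron, AJAX callbacks) must never be served from cache, and neither should pages for logged-in users, who carry a Drupal session cookie.

```python
# Assumed Drupal paths that must always reach the origin; adjust per site.
BYPASS_PREFIXES = ("/user", "/admin", "/cron.php", "/update.php", "/system/ajax")

def should_bypass_cdn(path: str, cookies: dict) -> bool:
    """Decide whether a request goes straight to the web server instead of cache."""
    if any(path.startswith(prefix) for prefix in BYPASS_PREFIXES):
        return True
    # Drupal session cookies start with "SESS"; those pages are personalised.
    return any(name.startswith("SESS") for name in cookies)
```

On CloudFront, the same effect is achieved with per-path cache behaviors that forward all cookies and disable caching for these prefixes.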

With AWS CloudFront, you can forward the Origin header so that CORS requests work, and the same header-forwarding feature can be used to detect mobile versus desktop clients. With such an architecture set up well, you can theoretically serve unlimited page views, because your origin server is barely hit. Your only bottleneck becomes database writes, which in most cases is not a concern.
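The header-forwarding idea can be sketched as CloudFront `ForwardedValues` settings, expressed here as the dict shape boto3 expects. `Origin`, `CloudFront-Is-Mobile-Viewer`, and `CloudFront-Is-Desktop-Viewer` are real CloudFront header names; whitelisting a header makes the cache vary on it, which is what enables CORS and device splitting.

```python
FORWARDED_HEADERS = [
    "Origin",                       # CORS: cache one copy per requesting origin
    "CloudFront-Is-Mobile-Viewer",  # device detection, set by CloudFront itself
    "CloudFront-Is-Desktop-Viewer",
]

def forwarded_values() -> dict:
    """ForwardedValues block: vary the cache on the whitelisted headers."""
    return {
        "QueryString": True,
        "Cookies": {"Forward": "none"},  # cookies only matter on bypassed paths
        "Headers": {"Quantity": len(FORWARDED_HEADERS), "Items": FORWARDED_HEADERS},
    }
```

Forwarding a header always splits the cache on its value, so whitelist only the headers you actually vary on; forwarding everything would destroy your hit rate.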

If you don’t want to dig into all of this but still want lower cost, higher traffic capacity, and faster responses, contact BootDev at founders@bootdev.com! We can deploy a Dynamic CDN for you in minutes, whether or not you are on AWS. We can point our CloudFront account at your server, and it can live on Azure, Linode, or any bare metal. It just needs to be Drupal, and you can enjoy the best performance ever.

ref: https://media.amazonwebservices.com/blog/cloudfront_dynamic_web_sites_full_1.jpg

Why BootDev — AWS management (EC2 performance degrades over time)

In our experience, if your EC2 instance sits with spare resources for a period of time, say 1–2 months, performance drops, even though AWS says this does not happen. For example, server response time rises from 400ms to 600ms, or average CPU climbs from 40% to 70%.

In many cases, users do not consume their server resources at 100%, 24/7. So our suspicion is that AWS, to reduce cost, quietly reallocates some of that capacity to other clients. If your server genuinely runs at low utilisation, that is fine. But in our situation we need fast response times and reserve the CPU for them, so we feel the performance drop.

How do we deal with that? This is exactly what an Auto Scaling group should do. Inside an Auto Scaling group, your fleet keeps renewing itself: old servers are terminated and new ones launched, so the EC2 resource allocation stays at peak performance.
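The "keep renewing servers" idea maps onto the real `start_instance_refresh` API of EC2 Auto Scaling, which rolling-replaces every instance in the group so each lands on freshly allocated capacity. A minimal sketch, assuming boto3; the group name and health percentage are assumptions, and the actual call is commented out since it needs AWS credentials.

```python
def refresh_params(asg_name: str) -> dict:
    """Parameters for autoscaling.start_instance_refresh(): rolling fleet renewal."""
    return {
        "AutoScalingGroupName": asg_name,
        "Preferences": {
            # Keep at least 90% of capacity serving traffic during the refresh.
            "MinHealthyPercentage": 90,
        },
    }

# import boto3
# boto3.client("autoscaling").start_instance_refresh(**refresh_params("web-asg"))
```

Run on a schedule (or after each deploy), this gives you the periodic instance turnover described above without manual intervention.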

This issue is especially obvious with Drupal. A Drupal PHP request runs through many hooks, functions, third-party calls and so on, so its CPU requirements are higher than those of most other PHP web applications. Renewing servers therefore helps keep performance up.

How do you deploy an Auto Scaling group? Call BootDev 🙂

Hope the information above helps anyone who feels AWS performance dropping over time 🙂