Hardening Azure Applications by Suren Machiraju & Suraj Gaurav


Authors: Suren Machiraju & Suraj Gaurav
Language: English
Format: epub
Publisher: Apress, Berkeley, CA


95% of requests should complete within 2 seconds.

99% of requests should complete within 5 seconds.
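Targets like these can be checked against measured request durations with a simple percentile calculation. The sketch below is illustrative only, assuming the two targets above; the function names and sample data are hypothetical.

```python
# Hypothetical sketch: verifying the latency targets above against a
# sample of request durations, in seconds. Names and data are illustrative.

def percentile(samples, pct):
    """Return the value below which pct% of samples fall (nearest-rank method)."""
    ordered = sorted(samples)
    # Nearest-rank index: ceil(pct/100 * n), computed without importing math.
    rank = max(1, -(-len(ordered) * pct // 100))
    return ordered[int(rank) - 1]

def meets_targets(samples):
    """True if 95% of requests finish within 2 s and 99% within 5 s."""
    return percentile(samples, 95) <= 2.0 and percentile(samples, 99) <= 5.0

durations = [0.4, 0.7, 1.1, 1.3, 1.8, 1.9, 1.95, 0.5, 0.9, 1.2]
print(meets_targets(durations))  # True: every sample is under 2 s
```

In practice you would feed this from telemetry rather than a hard-coded list, and a monitoring system would evaluate the percentiles over a rolling window.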

Do Not Over-Engineer

You should engineer your application to align with your business needs, but avoid over-engineering. Use your judgment to distinguish what matters from what does not, and prioritize your available resources accordingly. For example, suppose a particular user activity takes one minute to complete, and your investigation concludes that significant architectural changes would shave 5 to 10 seconds off the end-to-end process. You should not undertake this work if your users accept the one-minute latency they are used to. Instead, evaluate all your options against business and customer needs before investing time and effort in fixing latency issues, especially those that yield minimal improvement.

Real-World Case Study on Latency at One of the Biggest Software Companies in the World

The SEO engineering team at a large software company created a feature that lets users enter a query containing multiple keywords and returns results for the entire query, along with statistical results for each individual keyword. This helps users identify which keywords produce better search results at an affordable cost. The feature worked as expected in the development environment, but it was unusable in production. Upon investigation, the team discovered that their code was making an API call for each keyword, and each API call in turn made multiple internal calls to the server. Such a chatty interface may not pose a problem in an on-premises environment, where network latency is low. It is, however, a significant problem in a hybrid environment, where the front-end server receiving user requests is hosted in the cloud while the back-end service that analyzes and processes keyword queries runs in an on-premises data center connected via the public Internet. The result was very high latency. The engineering team resolved the issue with a redesign that batched the calls to the server.
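The redesign described above can be sketched as follows. This is not the team's actual code; the client class, its methods, and the per-call cost model are hypothetical, chosen only to show why one batched round trip beats one round trip per keyword on a high-latency link.

```python
# Illustrative sketch (hypothetical names): replacing one remote API call
# per keyword with a single batched call for the whole query.

class KeywordStatsClient:
    """Pretend client for a remote keyword-statistics service."""

    def __init__(self):
        self.round_trips = 0  # each remote call pays one network round trip

    def stats_for(self, keyword):
        # Chatty version: one round trip per keyword.
        self.round_trips += 1
        return {"keyword": keyword, "hits": len(keyword) * 10}

    def stats_for_batch(self, keywords):
        # Batched version: one round trip for the entire query.
        self.round_trips += 1
        return [{"keyword": k, "hits": len(k) * 10} for k in keywords]

def chatty_query(client, keywords):
    return [client.stats_for(k) for k in keywords]

def batched_query(client, keywords):
    return client.stats_for_batch(keywords)

keywords = ["azure", "latency", "cloud", "hybrid"]

chatty = KeywordStatsClient()
chatty_query(chatty, keywords)
print(chatty.round_trips)   # 4 round trips over the public Internet

batched = KeywordStatsClient()
batched_query(batched, keywords)
print(batched.round_trips)  # 1 round trip
```

With cloud-to-on-premises latency of, say, 100 ms per round trip, the chatty design pays that cost once per keyword (and more if each API call fans out internally), while the batched design pays it once per query.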

The lesson in this story is that you, as an engineer, should always consider potential latency issues during the design and development phase, and bake solutions into the feature. Latency is a real issue and should not be an afterthought.





