Mastering Non-Functional Requirements by Sameer Paradkar




Key drivers

The key drivers are as follows:

Natural calamities, such as floods or earthquakes

The primary site goes offline due to hardware or software failure

The system crashes or becomes unresponsive

The application fails due to the unavailability of external systems, networks, or databases

Methodology

This process involves the following key aspects:

Site level: A redundant disaster recovery (DR) site, similar to the primary site, can be created to handle natural or unforeseen disasters. This is the standard strategy for disaster recovery and business continuity. The DR site should hold a mirror replica of the code and data from the primary site, serving as a backup in case of total failure of the primary site. Because virtualization provides replication and hardware abstraction, administrators can build the DR environment quickly, and data can be synchronized between the primary and DR sites using the replication features it provides.

A DR system should be set up to handle unforeseen natural disasters. It can also serve as a set of standby nodes to absorb additional workload during peak times.
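A minimal sketch of how site-level failover might be automated is shown below. The health-check endpoint, the DR site address, and the switch_traffic_to routine are illustrative assumptions for this example, not part of the book's text; in practice the switchover would update DNS records or a load balancer.

```python
import time
import urllib.request

PRIMARY_HEALTH = "https://primary.example.com/health"  # assumed health-check endpoint
DR_SITE = "https://dr.example.com"                      # assumed DR site address
CHECK_INTERVAL = 30   # seconds between health probes
MAX_FAILURES = 3      # consecutive failures before failing over


def is_healthy(url: str, timeout: int = 5) -> bool:
    """Return True if the site answers its health check with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False


def switch_traffic_to(site: str) -> None:
    """Placeholder: update DNS or the load balancer to route users to the DR site."""
    print(f"Failing over: routing traffic to {site}")


def monitor() -> None:
    """Probe the primary site; fail over after repeated consecutive failures."""
    failures = 0
    while True:
        if is_healthy(PRIMARY_HEALTH):
            failures = 0
        else:
            failures += 1
            if failures >= MAX_FAILURES:
                switch_traffic_to(DR_SITE)
                break
        time.sleep(CHECK_INTERVAL)


if __name__ == "__main__":
    monitor()
```

Requiring several consecutive failed probes before failing over avoids switching sites on a transient network blip.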

Backup and recovery: The majority of databases support automatic backup models that can be configured to back up data to mirror or backup databases. The standby database cluster at the mirror or backup location provides seamless failover and recovery capability.

Data mirroring involves synchronizing data between the primary site and a remote location, such as a disaster recovery site.
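The sketch below illustrates the backup-and-mirror idea with timestamped copies synchronized to a remote location. The directory paths and backup interval are assumptions for the example; a production setup would use the database's own backup or replication features rather than file copies.

```python
import shutil
import time
from pathlib import Path

PRIMARY_DATA = Path("/data/primary")   # assumed primary data directory
MIRROR_SITE = Path("/mnt/dr_mirror")   # assumed mount point for the DR/mirror site
BACKUP_INTERVAL = 3600                 # back up once an hour


def take_backup() -> Path:
    """Copy the primary data set to the mirror site under a timestamped folder."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    target = MIRROR_SITE / f"backup-{stamp}"
    shutil.copytree(PRIMARY_DATA, target)
    return target


def restore_latest() -> None:
    """Recover the primary data set from the most recent backup on the mirror."""
    backups = sorted(MIRROR_SITE.glob("backup-*"))
    if not backups:
        raise RuntimeError("No backups available on the mirror site")
    shutil.copytree(backups[-1], PRIMARY_DATA, dirs_exist_ok=True)


if __name__ == "__main__":
    while True:
        take_backup()
        time.sleep(BACKUP_INTERVAL)
```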

Disaster recovery and business continuity: This covers the standard set of procedures to be followed in the event of critical incidents or natural disasters that bring down the entire primary data centre. To achieve this, a disaster recovery site is set up to act as a failover site for business continuity.

Recovery using checkpoint and rollback: This technique is mainly leveraged for data-intensive systems, in which the software creates a checkpoint whenever it is in a consistent, stable state. The application stores its entire state in persistent storage at regular intervals. In a fault scenario, the fault handler detects the fault and rolls the application state back to the last known stable checkpoint, which contains a valid, consistent application state. This technique is widely used in databases and operating systems, and it can also be applied to application software. A data-driven application can persist its user session to persistent storage at regular intervals, with each checkpoint identified by its timestamp and user credentials. When a user session is corrupted due to an incident or other unforeseen circumstances, the fault handler process can recreate the session from the last known valid checkpoint.
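A minimal sketch of checkpoint-and-rollback for a user session follows. The checkpoint directory and the session structure are illustrative assumptions; the point is that checkpoints are keyed by user and timestamp, and rollback walks backwards until it finds the most recent checkpoint that is still valid.

```python
import json
import time
from pathlib import Path

CHECKPOINT_DIR = Path("/var/app/checkpoints")  # assumed persistent storage location


def save_checkpoint(user_id: str, session_state: dict) -> Path:
    """Persist the session state, keyed by user and timestamp."""
    CHECKPOINT_DIR.mkdir(parents=True, exist_ok=True)
    path = CHECKPOINT_DIR / f"{user_id}-{int(time.time())}.json"
    path.write_text(json.dumps(session_state))
    return path


def rollback(user_id: str) -> dict:
    """Restore the last known valid checkpoint for the user."""
    checkpoints = sorted(CHECKPOINT_DIR.glob(f"{user_id}-*.json"))
    for path in reversed(checkpoints):
        try:
            return json.loads(path.read_text())  # first parseable checkpoint wins
        except json.JSONDecodeError:
            continue                              # skip corrupted checkpoints
    raise RuntimeError(f"No valid checkpoint found for {user_id}")


# Example: checkpoint the session periodically, then recover after a fault
save_checkpoint("user42", {"cart": ["item1"], "step": "payment"})
state = rollback("user42")
```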


