FAQ-SP-001
What is the guarantee that Azure server uptime will be maintained at 99.95%? What happens if it is not?
Data center operations for sensitive sectors such as global banking are today managed by global cloud service providers precisely because the 99.95% reliability level is met with very high probability (greater than 99%). Downtime happens perhaps once a year, if at all, on a specific day, and is then global rather than merely local in nature. Where an incident is local, it has been limited to less than about 30 minutes in our experience of more than 8 years on the platform. While Microsoft guarantees backup of its cloud data centers, these situations have to date never occurred.
Customers who want additional assurance can opt for real-time backup replication to a data center of their choice in a different seismic zone, at a separate monthly SaaS cost payable by the customer. This adds an extra layer towards zero downtime.
Azure does not provide such documents to ZingHR, so there are no commercial agreements of this kind available for the customer to see. However, the customer can visit the Azure site and review the financial guarantees stated there, which apply to any global Azure customer, which you become through ZingHR.
FAQ-SP-002
How does ZingHR arrive at 99.95% uptime?
The 99.95% figure comes from the Microsoft Azure App Service SLA, and server monitoring is a crucial Azure cloud operational feature. Read more here:
https://azure.microsoft.com/en-in/support/legal/sla/app-service/v1_4/
FAQ-SP-003
What is the corresponding penalty when the 99.95% Azure server uptime guarantee is breached?
99.95% SLA translation
Yearly: 4h 22m 58s of allowed downtime. The remedy for a breach takes the form of service credits, as defined in the Azure SLA linked under FAQ-SP-002. For the last 8 years, barring one incident last year when an entire region went down for several hours, we have had a zero-incident experience. To target zero downtime, please subscribe to the replication service at additional cost; speak to your ZingHR Customer Support Representative. Please read here: https://status.azure.com/en-us/status/history/
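As a quick sanity check on the yearly figure above, here is a minimal sketch (in C#, assuming an average year of 365.25 days) of how the downtime budget for a 99.95% SLA can be computed:

```csharp
using System;

class SlaBudget
{
    static void Main()
    {
        const double uptimeSla = 0.9995; // 99.95% uptime commitment

        // Assuming an average year of 365.25 days.
        TimeSpan year = TimeSpan.FromDays(365.25);

        // The allowed downtime is the remaining 0.05% of the year.
        TimeSpan allowedDowntime = TimeSpan.FromTicks((long)(year.Ticks * (1 - uptimeSla)));

        // Prints 04:22:58, i.e. about 4h 22m 58s per year.
        Console.WriteLine($"Yearly downtime budget: {allowedDowntime:hh\\:mm\\:ss}");
    }
}
```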
FAQ-SP-004
What is the current concurrency peak on the product platform?
The product's design, code, architecture, and engineering aim at unbounded concurrency. In practice, we have achieved a concurrency of more than 40,000 users on the same application, at the same time, for the same process, and more are being enabled.
FAQ-SP-005
What is the application average response time in seconds for each action taken on the user interface? (Web/Mobile)
We average about 9 seconds for most processes under heavy load conditions and average user/employee bandwidth, on both the web and mobile apps. For processing operations performed by HR/Admin, compute timelines are entirely load specific.
FAQ-SP-006
Is there a minimum bandwidth required to access ZingHR?
The minimum can be as low as about 500 kbps; ideally, you should have about 1 Mbps or more.
FAQ-SP-007
Does ZingHR have scalability on application servers? Does ZingHR use an in-memory database?
For scalability, ZingHR application servers are hosted on a number of virtual machines behind a load balancer. Even if one of the VMs goes down, the load balancer routes traffic to the remaining VMs, which continue to serve the application. This ensures no downtime.
ZingHR uses Azure Cache for Redis, a fully managed, in-memory cache that enables high-performance, low-latency data access.
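For illustration only, the snippet below is a minimal sketch of how an application can read and write Azure Cache for Redis from C# using the StackExchange.Redis client; the environment variable name and cache key are hypothetical, and this is not ZingHR's actual caching code:

```csharp
using System;
using StackExchange.Redis;

class CacheDemo
{
    static void Main()
    {
        // Hypothetical connection string; in Azure this is the
        // "<cache-name>.redis.cache.windows.net" endpoint plus its access key.
        var connectionString = Environment.GetEnvironmentVariable("REDIS_CONNECTION");

        using var redis = ConnectionMultiplexer.Connect(connectionString);
        IDatabase cache = redis.GetDatabase();

        // Cache a value with a short expiry so stale data ages out automatically.
        cache.StringSet("employee:42:profile", "{\"name\":\"Jane\"}", TimeSpan.FromMinutes(5));

        // Subsequent reads are served from memory instead of hitting the database.
        string cached = cache.StringGet("employee:42:profile");
        Console.WriteLine(cached);
    }
}
```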
FAQ-SP-008
Current Infrastructure & Application Design? Application Development Platforms used? Databases?
For Web: ASP.NET C#, .NET Framework 4.7, MVC, AngularJS, ReactJS, HTML5, jQuery, JavaScript.
For Mobile: .NET C#, .NET Core 3.1, Apache Cordova, LUIS, Azure ML, T-SQL, Node.js.
Microservices: The solution is deployed on Linux OS under Docker Swarm, with Traefik as the application gateway.
Applications are developed in .NET Core 3.1 and 6.0 (see the sketch after this list).
Logging is done with the ELK stack (Elasticsearch, Logstash, and Kibana).
The new Portal UX dashboard is deployed in Docker only, built with ReactJS + Next.js.
MongoDB is used for the zingid application.
For Databases: SQL Server 2012, 2016, and 2019; MySQL; DocumentDB; and MongoDB.
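Purely as an illustrative sketch (not ZingHR code), a .NET 6 microservice of the kind described above can be a minimal API that exposes a health endpoint for a gateway such as Traefik or a load balancer to probe; the route names here are hypothetical:

```csharp
// Program.cs for a minimal .NET 6 microservice (illustrative only).
var builder = WebApplication.CreateBuilder(args);

// Built-in health checks let the gateway / load balancer probe the service.
builder.Services.AddHealthChecks();

var app = builder.Build();

// Health probe endpoint, e.g. polled by Traefik or a load balancer.
app.MapHealthChecks("/healthz");

// Hypothetical business endpoint.
app.MapGet("/api/ping", () => Results.Ok(new { status = "ok", service = "demo" }));

app.Run();
```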