Wednesday, May 30, 2012

Recovery Testing: The neglected child of QA

Recovery testing is the neglected child of Quality Assurance, and this can be true for many reasons. Software testing is often done dynamically: defects are found, tracked, gathered, dispatched, and repaired. But what about unforeseen defects? Testing to a high confidence level is expensive, so most systems settle for confidence somewhere in the mid-to-high-ninety-percent range. In the event of a total software failure, however, can the system rebound? Recovery testing looks at this very issue and tries to ensure that any unforeseen failures are limited and recovered from gracefully.

The main recovery configurations tested are cold spare, hot spare, and automatic failover. Each of these should bring up an exact duplicate of the service, whether a web application or a database, so that the user is affected as little as possible and the problem may even go unnoticed. The best way to ensure that recovery goes smoothly is to perform frequent backups and experimental failovers to make sure the backup system is prepared. However, this is often a nuisance to larger companies that pride themselves on strong uptime percentages. Though these two needs are of equal importance, recovery testing gets put off in favor of consistent uptime. What could be the reasoning for not testing a recovering system? Perhaps code compilation is done overnight, and a deliberate failover would severely inhibit that work.
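To make the automatic-failover idea concrete, here is a minimal sketch of a health-check loop that redirects to a hot spare when the primary stops responding. This is a rough illustration, not a production design: the hostnames, port, and check interval are hypothetical, and a real deployment would rely on a load balancer or cluster manager rather than a hand-rolled monitor.

    import socket
    import time

    PRIMARY = ("primary.example.com", 443)   # hypothetical primary service
    HOT_SPARE = ("spare.example.com", 443)   # hypothetical hot spare, kept in sync
    CHECK_INTERVAL = 5                       # seconds between health checks

    def is_alive(host, port, timeout=2.0):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def monitor():
        active = PRIMARY
        while True:
            if not is_alive(*active):
                # Automatic failover: point traffic at the spare so users
                # are affected as little as possible.
                active = HOT_SPARE if active == PRIMARY else PRIMARY
                print("failing over to %s:%d" % active)
            time.sleep(CHECK_INTERVAL)

Running this kind of experimental failover regularly, against the real spare, is exactly the practice that verifies the backup system is prepared.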

What standards are there for testing? What about certification? Are there useful web resources for QA professionals?

Standards for any piece of software being tested depend on multiple factors; the software solution's purpose and environment dictate the level of testing and certification necessary. Consider a piece of software that runs a heart monitor within an operating room: it is mandated to meet certain health information standards, and extensive testing and information certification are two crucial aspects of a system running in such an important environment. Another area where standards for testing and certification exist is fail-safe systems. Airline navigation, landing gear, altimeters, and cockpit pressure regulators all run in environments where the system cannot afford to fail. NASA's manned and unmanned space exploration projects are subject to the same level of testing and certification; far away from Earth, even a small software bug could ruin billions of dollars in investment, blemishing NASA's reputation and putting the entire organization in jeopardy. NASA is a specific case: as a government entity, it sets its own standards for certification and testing to reach its desired quality. In the private sector, if the software is in any way attached to sensitive information or crucial systems, governments tend to set these standards to reflect good practices.

Looking back to the heart monitor example, having previously worked for a company that developed software for the emergency room, I can say it is not uncommon to have a heart monitor linked to a much larger automated system of vitals tracking. These vitals can presumably be accessed online alongside other personal information, which is useful to doctors, patients, and practitioners alike. In 1996, recognizing the growth of health information technology and the need to protect doctor-patient confidentiality, the United States passed the Health Insurance Portability and Accountability Act (HIPAA), administered by the Department of Health and Human Services. This standard for health information technology was put in place to ensure that all patient-related information used in health systems would be encrypted, viewable only by the parties involved, or destroyed depending on the age of the data. Having experienced a company ensuring HIPAA compliance first hand during my tenure at a health IT provider, it was easy to see how the standards shaped the work environment through a ripple effect that started all the way from our in-house IT to the testing and development teams. Everyone was mindful of the standards in place and the importance of it all. Testing to ensure that these standards were met was not my particular duty, though it was evident in the software.

Other government departments hold similar standards for other forms of software. The Federal Communications Commission sets standards for the internet and various radio frequencies, and the Food and Drug Administration has testing regulations and standards for manufacturers of drugs and of agricultural and horticultural products. Information on these standards for testing and certification is available online through each department's website.
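To illustrate the encryption requirement, the sketch below encrypts a patient record at rest using the third-party cryptography package. The record fields and key handling here are hypothetical, and real HIPAA compliance involves far more (access control, auditing, retention rules) than a single encryption call.

    import json
    from cryptography.fernet import Fernet  # pip install cryptography

    # Hypothetical vitals record from the tracking system.
    record = {"patient_id": "12345", "heart_rate_bpm": 72,
              "taken_at": "2012-05-09T10:15:00"}

    # In production the key would live in a key-management system,
    # never stored alongside the data it protects.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    # Encrypt before storage or transmission, so the record is viewable
    # only by parties holding the key.
    token = cipher.encrypt(json.dumps(record).encode("utf-8"))

    # An authorized party with the key recovers the plaintext.
    assert json.loads(cipher.decrypt(token))["patient_id"] == "12345"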

Wednesday, May 9, 2012

Software Quality

What do we mean by "Software Quality"? How can we define "quality"?


I believe that the notion of quality is mostly dependent on context. A piece of old furniture, for instance, can be an antique if it is of high quality or unique in some regard. A painting can also be of high or low quality, but a computer programmer may not be able to spot the difference; it requires a background and an eye for detail. So what could a programmer look for when determining a certain piece of software's quality? As a programmer myself, I'd like to think that the quality of a program is dictated specifically by the quality of its code, but in a real-world business environment there are undoubtedly other determining factors. These factors are inherent in any business scenario: Does the product fully address the issue(s) at hand? Was the product created within the allotted time and resources? Is it profitable? All of these things combine to potentially make a high-quality business product, but the innards of the program itself must also meet quality standards for it to be a high-quality piece of software overall.

Consider a program like Google Chrome. It is not an over-and-done-with kind of program; it is constantly evolving and being reformed. Rather than reinventing the wheel during each iteration, a well-formed development and testing cycle must first be implemented. This lifecycle adds another element to the overall quality of the software; then comes the actual code itself. Depending on the needs of the business, code may need to be multiplatform, compact, written in a certain programming language, or all of the above. These qualities speak to the internal status of the program, which is one component of software quality. The other is external quality, meaning the appearance and behavior of the software rather than its innards. How does the outward appearance work? How strong is the software's usability? Is this iteration of the product visibly different from previous iterations? Could users find any of the outward changes frustrating? All of these questions are linked to elements that form overall external quality. Factor in the external and internal code qualities along with business quality, and I believe you can properly gauge the overall quality of any piece of software.