Wednesday, June 20, 2012
Agile Testing
From Google's agile testing videos I was able to learn many things about how to characterize an agile testing environment. I can now draw parallels between what Google considers an agile environment's strengths and weaknesses and what I've observed through my own work in an agile atmosphere.
This kind of environment, I feel, lends itself to informality, and Google's discussion of agile projects and development reinforces this. Elisabeth Hendrickson often talked about how ideas flowed in a verbal and fast-paced manner. The informality inherent in this kind of environment puts an added amount of responsibility on the quality assurance team. It also means that to some degree the quality assurance team is seen as a bottleneck on a project, and is sometimes vilified by developers for inhibiting this fast-paced environment. This doesn't match my personal experience in an agile environment, where quality assurance was done through multiple tiers of testing, and the development team always had their hands full with various defect- and implementation-related tasks.
In response to the last video, which questions the ability to gauge metrics in an agile environment, I believe this is one of the toughest aspects of planning prior to the beginning of an agile development cycle. With the hectic nature of the development, certain problems arise that wouldn't in a traditional atmosphere. It also means that metrics must be gathered and interpreted quickly, or they will no longer be valid given the speed of development, which limits their usefulness greatly. To resolve this, and depending on a few factors specific to the development team undertaking the project, I would stick to a small number of crucial, easily recorded, lightweight metrics that can be automated to display the information visually, rather than waste time finding the correct interpretation of a metric. These metrics must be rigorously recorded and considered, or they run the risk of undermining the agile process through a serious slowdown.

Though those may be difficult tasks, agile testing also has some inherently good qualities that are noted throughout these videos. The nature of the agile environment means that code comes in incrementally, so any problem areas that arise can be found easily thanks to the digestible size of the changes made by development and the tests pointed out by the assurance team. That being said, it is in the quality assurance team's best interest to automate as much as possible to save time. Since saving time is paramount in an agile development project, automation of builds, testing, metric gathering, and defect tracking must all be established beforehand to provide a sturdy foundation from which the team can quickly build reliable software.
Monday, June 18, 2012
Web Application Testing
With the different types of computer software solutions come different types of testing tools. In many regards these tools are similar in function, but the tests they support are tedious to run constantly by hand. This is why, for web application testing, it is commonplace to see macro-generation programs aid in the automation of multiple online tests. Programs like Selenium, Automator, and JitBit are all easy to use and allow testers in a small environment to busy themselves writing test cases rather than spending time running them. These programs also help the developers and testers of online software in a different way: through concurrent runs, simulated connections can test the stability of a system under stress or heavy load. One program that handles this specifically is called JMeter. JMeter displays a graph showing the connections and requests handled by the server containing the web application test environment. Too many users or threads will crash any system, and this type of automated stress testing is crucial to the integrity of a system that runs any web application.
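As a rough illustration of the kind of test such tools automate, here is a minimal sketch using Selenium's Python bindings; the URL, element IDs, and expected page title are hypothetical placeholders, not a real application.

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()  # launches a browser the test can drive
try:
    driver.get("http://example.com/login")  # hypothetical page under test
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # Check that the login flow landed where we expected it to.
    assert "Dashboard" in driver.title
finally:
    driver.quit()  # always release the browser, even if a step fails

A script like this can be replayed on every build, which is exactly the time saving described above.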
In recent times we've seen that denial-of-service attacks can be used as a weapon against widely used systems. This kind of attack can be simulated through the automated testing of many concurrent users. Though some companies instead decide to hire groups of users from different places across the globe to connect to their servers simultaneously, this is often overkill. Internal workings within the code can look to foil these sorts of attacks by only allowing a certain number of connections, but these capabilities can only be truly tested through extensive automated testing.
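A crude simulation of many simultaneous users can be sketched with nothing but the Python standard library; the target URL, worker count, and request total below are invented for illustration.

import concurrent.futures
import urllib.request

TARGET_URL = "http://localhost:8080/"  # hypothetical server under test
WORKERS = 200                          # simultaneous simulated users
REQUESTS = 1000

def hit(_):
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=5) as resp:
            return resp.status
    except Exception:
        return None  # refused or timed-out connections count as failures

with concurrent.futures.ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(hit, range(REQUESTS)))

ok = sum(1 for status in results if status == 200)
print(f"{ok}/{REQUESTS} requests succeeded under load")

Watching how that success rate degrades as WORKERS grows is a primitive version of what JMeter reports graphically.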
In the case of larger platforms like Twitter or eBay, these web applications must naturally handle thousands upon thousands of simultaneous connections, and here stress testing takes top priority. Fortunately, the simple interface and limited complexity of Twitter make it easier to run these tests.
Though stress testing is important for the prevention of total failures, simulated load testing is also an important aspect of web application testing. Given that an application expects a certain number of concurrent users, we can monitor what is essentially the heart rate of the application as it deals with normal requests, where the server is the heart and the connections are blood cells being pumped through it and back out. Depending on what this data shows, the project may change its focus to utilize hardware better, or to find existing bottlenecks within the program that put stress on the servers. Just as an irregular heartbeat brings about changes in one's health choices, automated testing shows whether changes need to be made to a web application.
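Continuing the analogy, here is a hedged sketch of taking that pulse: send a steady trickle of requests and report latency percentiles. Again, the URL, request rate, and sample size are assumptions made up for the example.

import time
import urllib.request

TARGET_URL = "http://localhost:8080/"  # hypothetical application under test
SAMPLES = 50

latencies = []
for _ in range(SAMPLES):
    start = time.monotonic()
    try:
        urllib.request.urlopen(TARGET_URL, timeout=10).read()
        latencies.append(time.monotonic() - start)
    except Exception:
        pass  # a failed beat is simply missing from the sample
    time.sleep(0.1)  # roughly ten requests per second

latencies.sort()
if latencies:
    median = latencies[len(latencies) // 2]
    p95 = latencies[int(len(latencies) * 0.95)]
    print(f"median: {median * 1000:.1f} ms, 95th percentile: {p95 * 1000:.1f} ms")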
Software Quality Metrics
In the software world the impetus for any advanced software solution comes from a business opportunity. To best fit the software to the intended need, and with these business requirements in mind, companies of all kinds form metrics and use them in different ways to monitor the progress of a software solution's life. These metrics range from general metrics used by upper management for decision making to very fine-grained metrics that cover individual lines of code. Some of the types of information used to generate metrics are execution time, resources needed, total defects existing, number of lines of code, and number of developers working on a project. By combining these factors we can generate many types of metrics that speak to different levels of efficiency and provide useful decision-making information to managers.
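As a toy illustration of combining two such factors, here is a sketch of one commonly derived metric, defect density; the figures are invented.

total_defects = 42      # invented figure: defects found to date
lines_of_code = 15000   # invented figure: size of the code base

# Defect density is usually expressed per thousand lines of code (KLOC).
defect_density = total_defects / (lines_of_code / 1000.0)
print(f"defect density: {defect_density:.2f} defects per KLOC")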
Some examples of metrics commonly used in the development atmosphere include: existing defects minus repaired defects, resources allocated vs. resources consumed, and test cases per line of source code.
Existing defects - repaired defects:
This metric is crucial to the development and testing team and can also be used to gauge progress from the project manager's point of view. As defects are closed they are removed from the system, and how that number fluctuates given the influx of new defects says much about the team, the software, and the plan going forward.
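A tiny sketch of tracking that fluctuation week over week; the weekly counts are invented for the example.

new_defects   = [12, 9, 15, 7]   # invented: defects reported each week
fixed_defects = [5, 11, 10, 13]  # invented: defects closed each week

open_count = 0
for week, (new, fixed) in enumerate(zip(new_defects, fixed_defects), start=1):
    open_count += new - fixed
    print(f"week {week}: {open_count} defects still open")

A falling trend suggests the team is catching up; a rising one says the plan needs revisiting.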
Resources allocated vs. resources consumed:
This metric is used mainly from a managerial standpoint: managers can oversee the progress and resources used, and reallocate wherever necessary to attempt to remedy or better the project. It takes into account the software and hardware tools needed in the development process over time, as well as the manpower needed to produce these products. This could lead to decisive changes made in consideration of the business requirements. It may entail the purchase of additional or newer hardware and software, or it may mean the addition or removal of team members, depending on more specific metrics used to monitor individual progress.
Test cases per line of source code:
This metric is important for code coverage, specifically in unit testing, where individual functions should be tested thoroughly to establish confidence in a product. Though 100% coverage could seem the ideal here, writing multiple tests per line of code, tedious as it is, would bring about even higher confidence in the quality of the code being created.
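To make the ratio concrete, here is a small sketch: several unit tests exercising one short function, giving a test-to-line ratio above one. The function and its tests are invented for the example.

import unittest

def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    if value < low:
        return low
    if value > high:
        return high
    return value

class ClampTests(unittest.TestCase):
    # Four test cases against roughly six lines of source code.
    def test_below_range(self):
        self.assertEqual(clamp(-5, 0, 10), 0)

    def test_above_range(self):
        self.assertEqual(clamp(15, 0, 10), 10)

    def test_within_range(self):
        self.assertEqual(clamp(7, 0, 10), 7)

    def test_boundaries(self):
        self.assertEqual(clamp(0, 0, 10), 0)
        self.assertEqual(clamp(10, 0, 10), 10)

if __name__ == "__main__":
    unittest.main()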
Tuesday, June 12, 2012
Validation in the Pharmaceutical Industry
The world of pharmaceuticals is crucial to the medical field and to the human race's overall well-being. Given its importance, the quality of these pharmaceuticals is integral to a successful product and, eventually, a cured customer. Coined "Beantown" long ago, Boston now hears stirrings of another nickname, "Genetown." As a hotbed for biological and genetic companies as well as high-powered minds in the community, pharmaceutical companies fit right into this science-driven city. With all of this work being done, there must be practices in place to ensure that the work is reflected in today's society, and one major part of this lies with the pharmaceutical industry.

These practices are outlined by cGMP, the Current Good Manufacturing Practice regulations, which provide specific details regarding contamination, the lifecycle of the product, and the necessary documentation of the steps and procedures involved. In order to create valid products, certain criteria must be considered, including the quality of the equipment being used, the quality of the employees working in the factories, the quality of the vendors chosen to distribute the product, and of course testing of the produced drug. Within the company there are undoubtedly quality control practices in place to ensure the plants are run correctly and efficiently, and within the plants there are quality control practices that require the inspection of packages being sold. By doing these things they can optimize their processes, narrowing defects and bad batches and lowering the cost of production. This all comes in phases that link research and development (Phase 1) to the correct processes that must be put in place to generate a reliable product (Phase 2), followed by a maintenance validation phase that revisits documentation throughout the company and includes audits of the company.
Economics of Software Quality
I agree with the article (http://ardalis.com/economics-of-software-quality) in many ways regarding the economics of software quality. Having worked in development and IT settings during two co-ops, I have seen both how a development team has its inner struggles and how those outside of development view the software. One cause of low software quality that I find in nearly every development setting is the amount of allotted time. Considering the time it takes for a team of developers to make a fully functioning and tested application, by the end of the process certain APIs may have changed, or the requirements may have changed. In any case, the tools used by developers are constantly being upgraded, and oftentimes the development team can only imagine the amount of time they could have saved had these tools been available sooner. One practice occasionally used in the field is a full rewrite of a program after a certain timeframe. This means abandoning the last solution in favor of the newest tools available, coupled with the production knowledge from the last version. Some developers see this as a necessary evil, others are excited for a fresh start and willing to put the frustrations of an outdated solution behind them, while still others abhor the idea and delay it as long as possible. This method would undoubtedly help to increase the overall quality of the software, as it shows dedication from the development and testing groups.
I also agree for the most part with the reasoning behind production rate and quality. I do, however, believe that this sort of assessment depends on a few different factors. First, the type of software solution: whether it is run once or frequently. Second, whether the development team is large or small, since larger groups often require more coordination in coding style, which lends itself to higher quality. Third, the importance of the software's internals. Obviously a conscientious software developer cares about the internals of their solution, but certain solutions that handle sensitive information are required by law to meet a minimum degree of quality. This thinking can also be applied to what the article calls "the catch-up effect," meaning that software testing becomes decreasingly effective as time goes on. The degree of confidence with which a company decides to release the solution is undoubtedly both an economic and a business decision.
Outsourcing Testing and QA
As in many industries around the world, looking to increase profitability often means outsourcing jobs to cut costs. In many cases this is feasible, but I believe that in the case of software development it is very inefficient and would eventually undermine the cost-cutting initiative that motivated it in the first place.
As stated in Markus Gärtner's article, "quality involves many different things to different people," and with the sensitivity of source code being outsourced, you either have to subject the assurance to the lower level of confidence that black box testing provides, or you can elect to allow white box testing and hope that your source is in safe hands. He also brings up an interesting point when you factor in the customer. You cede a certain amount of control to the outsourced company if that is the path taken, and this weakens your image when facing the customer. The perfect example involves discrepancies between the customer and the quality assurance team: the original company would be at a loss, acting as a middleman between the customer with the needs and the assurance team with the answers. I believe this could be feasible in an ideal environment given a small program being outsourced, but that is surely an unlikely scenario.
This does lead to one interesting aspect of outsourcing from which a company could actually benefit. If comprehensive testing is done in house and coupled with some outsourced usability testing, this could prove useful, since outsourced usability testing has proven to be the most effective way to gain from usability testing, even though usability testing is usually handled by an in-house team.
With all of this put into consideration, I would say that outsourced testing and quality assurance is not an advisable route. If a software solution's quality assurance is outsourced, then the founding company lessens its power through the trust vested in its quality provider. Certainly this may work with particularly trustworthy entities in this line of work, but I believe that in many cases the amount saved is not worth the hassle or the risks of outsourcing. Currently we are seeing the repercussions of outsourced quality assurance: business decisions were initially made to relocate jobs overseas when internal quality assurance was found to be too expensive, but that trend has now reversed. Consumers became averse to outsourcing tactics, mostly because they found the results frustrating, which opened the door for a quality assurance movement back to its original sources. This is more evidence that outsourcing crucial functions within a company is bad practice, and even though this experiment in cost cutting failed, it shows that the quality of a unified system prevails over a separation of powers.