Archive | Software Testing

Inadequate Software Testing Jeopardized Elections in Pakistan, Why?

(On the eve of Pakistan General Elections 2018, everyone was anxiously waiting for the results when the authorities announced that the newly deployed system to transmit and manage results had failed. Yes, the technology failed when it was needed most. Our regular contributor Sohail Sarwar takes a deeper look at what the problem was and, more importantly, how we can avoid such things happening again.)

Conducting elections in any country is a major political activity, since it decides the future “reign-holders” of a nation. Election 2018 was conducted by the Election Commission of Pakistan (ECP) on a national scale (with 272 national and 676 provincial seats). A total of 12,570 candidates contested to win the hearts of 106 million Pakistani voters at 85,307 polling stations, for the right to represent them until 2023.

In order to manage the election activity at such a massive scale, ECP for the first time deployed two systems, the Result Transmission System (RTS) and the Result Management System (RMS), for consolidating, compiling and tracking the election results promptly. RTS was made available as a mobile app, whereas RMS was installed on designated machines for the concerned election personnel.

However, RTS turned into a nightmare for field personnel on “D-Day”, i.e. 25 July 2018. It was reported that mobile phones would hang when the RTS app was launched. The directed fix for this problem was to remove the app, download it and re-install it every time. Even when installing and launching worked fine, submitting the results (after attaching the images) made the application stop responding.

Due to these issues, polling staff in the field could not submit the results as ECP expected, and suffered a backlash from political entities over the non-responsiveness of RTS. Consequently, the fairness and transparency of the elections were questioned, which could have ended in chaos. The competence of eminent professionals managing world-class software products was also doubted. Why was this trouble caused? A number of reasons can be listed, but our answer is “lack of thorough software testing”.

Some of the apparent reasons that we can think of from the perspective of software quality are:

  • Deploying the systems in production without testing to ensure end-to-end functional completeness of both components, i.e. RTS and RMS.
  • Pilot testing of RTS was done on a very limited test bed, i.e. only 2 constituencies. Obviously, an application whose health was validated only against the load of 300-400 polling stations could not be expected to endure the load generated by 85,000+ polling stations.
  • Testing of non-functional perspectives was ignored overall, such as configuration testing (across different mobile models and OS versions), scalability testing and performance testing (load testing and stress testing).
  • Load balancing for seamless downloads seemed to be missing, since anomalies were identified while registering the polling personnel (8 members were directed to register at one time from one constituency, which seemed impractical).
  • No dedicated hardware testing infrastructure where the applications were hosted with the minimum bandwidth requirements available.
  • Testing from a cyber-security perspective, so that attack scenarios seen in Western countries could be countered. We are not sure whether this really applies to the system developed by ECP.
  • A relatively short span of time was kept for testing the products; such issues tend to prevail with half-baked applications.

These are a few suggestions that may lessen the chances of getting into “fire-fighting” situations like the one witnessed a few days ago.
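To make the load-testing point concrete, here is a minimal, stdlib-only sketch of simulating concurrent submissions from many polling stations. Everything here is hypothetical: `submit_result` is a stand-in for the real RTS submission endpoint (which would be an HTTP call), and the numbers are illustrative only.

```python
# Stdlib-only sketch of a concurrency load test; submit_result stands in
# for the real RTS submission endpoint, which would be an HTTP call.
import concurrent.futures
import time

def submit_result(station_id):
    """Simulate one polling station submitting its result."""
    time.sleep(0.001)  # pretend network/processing latency
    return True

def run_load_test(num_stations, workers=50):
    """Fire submissions from many simulated stations concurrently."""
    failures = 0
    start = time.time()
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        for ok in pool.map(submit_result, range(num_stations)):
            if not ok:
                failures += 1
    return {"stations": num_stations, "failures": failures,
            "elapsed_s": round(time.time() - start, 2)}

# A pilot at ~300 stations says little about 85,000+ stations; the same
# harness should be re-run at realistic scale before deployment.
print(run_load_test(300))
```

The point of such a harness is that the station count is a parameter: the same test that passed at pilot scale must be re-run at the scale of the real event before going live.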

Do you know more about the episode? Or would like to share suggestions on avoiding such failures?

Making Testing Public

A case study on building a trustworthy testing team by making all of its work public, which I wrote for the EuroSTAR Blog. The punch line is:

Everyone should be able to see Testing. Your in-laws included

Read the full article at EuroSTAR Huddle Blog

3 Testers, 3 Stories

One of the benefits of getting older is that you can tell stories. No, not the stories you heard, but the stories you observed. So here are my three stories, based upon the lives of three real testers, though I am hiding their identities and making the stories more generic.

If you are wondering why you should read these stories, let me entice you: they’ll help you plan your career better.

And by the way, this name is borrowed from the famous Urdu magazine category called “teen auratein, teen kahaniaan” (3 women, 3 stories).

Meet Tester Alif. She started her career in software testing by accident, but in the first few years of her career she really liked testing as a profession. She took pride in breaking software and stopping releases by reporting obnoxious bugs. As time passed, her excitement faded. She started to find the repetitive nature of testing boring. She tried to reinvent herself by joining a new team or a new organization, but she never enjoyed testing the way she had early on.

Alif took a decision. She left testing and became a Programmer. At first she felt uncomfortable in the new role, and her old friends mocked her a lot. But after some time, she became comfortable. She had doubts that she would get bored with programming as she had with testing, but she didn’t. Many years have passed, and Alif is now an accomplished Programmer. Many people don’t even know that Alif was once a Tester.

Now let’s review the life of Tester Bay. He chose software testing as a career because he had a knack for finding issues in even apparently unblemished work. He became an expert black-box tester very quickly and earned a reputation as someone who could find big bugs at will. He progressed nicely, moved into a test-lead role and taught the skills of testing to his juniors. As time passed, Bay started feeling relaxed, as if he knew every trick of the trade. He became more and more a person who managed technical stuff without doing much technical stuff himself. He became dull and somewhat useless, though he never realized it.

Bay has had some miserable years lately. He was laid off from one job, though he quickly got another. But within six months or so, the lack of depth in his skills became evident, and he was put on a low-priority project. Bay thinks he is doing well and his job is safe, but anyone with a little understanding can predict a bleak future for him.

Let me introduce you to the third and last tester in this series. Tester Jeem became a tester by chance: he applied for a design job but was offered a testing one. He started reluctantly, thinking he would soon quit. But he started to like testing. The fun of exploring new stuff, the spotlight he got for helping his team achieve excellence, and the confidence he gained by understanding the internals of the software he tested made his job fun. He climbed the ladder and became a technical tester with a team working under him on key projects. He occasionally thought of switching careers to pursue his ambition of becoming a Designer, but he felt there was so much new stuff coming into the testing profession that it could keep him moving for as many years as he could foresee.

Jeem also became an advocate for the testing profession. He started to tweet about it, began joining Meetups and trainings, read lots of books and blogs, and was a source of information for many testers around him. Jeem has decided to remain a Tester for the rest of his life.

That ends the three stories, folks. I know you were expecting them to be more dramatic, but I told you they are real stories.

Usually I like the notion of “Story is more important than the moral of the story”, but if you want one from above, here you go:

Never be like “Bay”. Always be like “Alif” or “Jeem”

Or to make it generic:

Do what you love. And if you don’t love it, quit it

Do these stories look familiar to you? Have you spent your life as Alif, Bay or Jeem?

On Demand Testing

In an earlier post, I explained how DevOps testing consists of three main types: Scheduled Testing, On-Demand Testing and Triggered Testing. In the same article I covered how we schedule different types of testing, so now let me dig into the details of On-Demand Testing.

Keeping with the notion that testing is an activity performed often, rather than a phase at the end, it is necessary to configure Continuous Testing. That requires setting up automated testing jobs so that they can be called whenever needed. This is what we call On-Demand Testing. It is contrary to the common notion of having testers ready to test “on demand” as features become ready.


For example, we have a set of tests that we call “Data Conversion tests”, as they convert data from our old file format to the new one. As you can imagine, a lot of defects uncovered by such tests are data dependent, and a typical bug reads “Data conversion fails for this particular file due to this type of data”. Now, as a Developer picks up that defect and fixes the particular case, she’d like to be sure that the change hasn’t affected the other datasets in the conversion suite of tests. The conversion testing job is set up so that a Developer can run it at any given time.
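As a sketch of what such a data-driven conversion suite can look like: the converter, field names and records below are invented for illustration, not the actual implementation described in the post.

```python
# Hypothetical data-driven conversion suite; the converter, field names
# and records are illustrative assumptions only.

def convert(old_record):
    """Convert one record from the old file format to the new one."""
    return {
        "id": int(old_record["ID"]),
        "name": old_record["NAME"].strip(),
        "active": old_record.get("STATUS", "N") == "Y",
    }

# Each record stands in for one file in the conversion test suite.
DATASET = [
    {"ID": "1", "NAME": " Alif ", "STATUS": "Y"},
    {"ID": "2", "NAME": "Bay"},               # STATUS missing on purpose
    {"ID": "3", "NAME": "Jeem", "STATUS": "N"},
]

def run_conversion_suite(dataset):
    """Return (record, error) pairs for records that fail to convert."""
    failures = []
    for record in dataset:
        try:
            new = convert(record)
            assert isinstance(new["id"], int) and new["name"]
        except (AssertionError, KeyError, ValueError) as err:
            failures.append((record, repr(err)))
    return failures

print(run_conversion_suite(DATASET))  # an empty list means every file converted
```

Because each failure carries the offending record, a bug report naturally reads “conversion fails for this file due to this data”, matching the shape of the defects described above.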

I shared earlier that we are using Jenkins for the scheduled testing. So one way for an individual team member to run any of the jobs on demand is to push a set of changes, log into Jenkins and start a build of the automated testing job, say Conversion Testing in the example above. This works, but it might be too late, as the changes have already been pushed. Secondly, the Jenkins VMs are set up in a different location from where the Developer sits, and the feedback time can be anywhere between 2-3 hours.
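For reference, Jenkins exposes a remote build API for exactly this kind of on-demand trigger. The sketch below only builds the trigger URL; the server URL and job name are made up, and an actual run would POST that URL authenticated with a user name and API token.

```python
# Sketch of triggering a Jenkins job remotely; the server URL and job
# name are hypothetical. Jenkins exposes POST endpoints
# /job/<name>/build and /job/<name>/buildWithParameters for this.
from urllib.parse import urlencode

def jenkins_trigger_url(base_url, job_name, params=None):
    """Build the URL for Jenkins' remote build API."""
    base = base_url.rstrip("/")
    if params:
        return f"{base}/job/{job_name}/buildWithParameters?{urlencode(params)}"
    return f"{base}/job/{job_name}/build"

# A real run would POST this URL (e.g. via urllib.request or the
# requests library) with an API token for authentication.
print(jenkins_trigger_url("https://jenkins.example.com", "Conversion-Testing",
                          {"DATASET": "short"}))
```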

Remember, tightening the feedback loop is a main goal of DevOps. The quicker we can let a Developer know that the changes work (or don’t work), the better we are positioned to release more often with confidence.

So in this case we exploited our existing build scripts, which are written in Python and driven by a set of XML files defining parts that can be executed by any team member. We added a shortened dataset that has enough diversity yet is small enough to run within 15-20 minutes. Then we added a part to the XML file that can run this conversion job on any box, at any given time, by anyone on the team.
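A rough illustration of how such XML-defined parts could be looked up by a team member's machine. The real build files are not shown in the post, so the element names, attributes and commands here are all invented.

```python
# Invented illustration of XML-defined build "parts"; element and
# attribute names are assumptions, not the team's actual schema.
import xml.etree.ElementTree as ET

BUILD_XML = """
<build>
  <part name="conversion-short">
    <command>run_conversion.py --dataset short</command>
  </part>
  <part name="conversion-full">
    <command>run_conversion.py --dataset full</command>
  </part>
</build>
"""

def commands_for_part(xml_text, part_name):
    """Return the command lines defined for a named part, or None."""
    root = ET.fromstring(xml_text)
    for part in root.iter("part"):
        if part.get("name") == part_name:
            return [cmd.text for cmd in part.iter("command")]
    return None

# Any team member could then look up and run the short conversion part:
print(commands_for_part(BUILD_XML, "conversion-short"))
```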

Coming back to the same Developer fixing a conversion defect: after fixing the bug, she can now run the above part on her own system. Within half an hour she’ll have results, and if they look good, she’ll push her changes with confidence that the next round of Scheduled Testing, with the larger dataset, will also pass.

Please note that we have made most of our testing jobs On-Demand, but we are having a hard time with a few. One of them is Performance Testing, because it is done on a dedicated machine in a controlled environment for consistency of results. Let’s see what we can do about it.

Do you have automated testing that can be executed On-Demand? How did you implement it?

Pakistan Software Quality Conference 2018

Doing something for the first time is very hard. But doing it again and again with the same energy and passion is even harder. That’s why, as we witnessed a very successful Pakistan Software Quality Conference (PSQC’18) on April 7th, we felt even more accomplished than at the first edition, PSQC’17.

Last year it was held in Islamabad, and this year the biggest IT event in terms of quality professionals’ attendance moved to the culture capital of Pakistan: Lahore. Beautifully decorated in a national color theme, the main auditorium of the FAST NU campus saw 200+ amazing people join us from various cities across Pakistan.

After recitation of the Holy Quran, event host Sumara Farooq welcomed the audience and invited PSTB President Dr. Muhammad Zohaib Iqbal for the opening note. Dr. Zohaib recapped the journey of this community-building event and shared demographics of the audience. He emphasized the need for everyone to act upon the conference theme, “Adapting to Change”.

We had two quick keynote sessions in the first half. The first was “Software Security Transformations” by Nahil Mahmood (CEO, Delta Tech), who spoke about the grave realities of the current software security situation in Pakistan. He then urged everyone to take part in making every piece of software secure enough, in accordance with various industry standards. Read more in the PDF: PSQC18_NahilMahmood_SoftwareSecurityTransformation.

The second talk was “Quality Engineering in Industry 4.0” by Dr. Muhammad Uzair Khan (Managing Director, Quest Lab), who first explained the notion of Industry 4.0. He envisioned a future where systems are tested in an increasingly automated way, with exploratory manual testing receding into the background. He also rightly cautioned that any prediction of the future is a tricky business.

A few honorable guests then spoke at the event, including Professor Italo, Honorary Consul of Portugal Feroz Iftikhar, and the HOD of the FAST NU CS Department. Shields were then presented to the event’s sponsors by Dr. Zohaib and myself. Contour Software was represented by Moin Uddin Sohail, and Stewart Pakistan (formerly CTO24/7) by their HR Head, Afsheen Iftikhar.

A tea break was then needed to refresh participants for the more technical stuff coming their way. The time was well utilized by all to meet strangers who quickly became friends.

(More photos covering the event are coming soon at our facebook page)

The second session had five back-to-back talks:

  • “Performance Testing… sailing on unknown waters” by Qaiser Munir, Performance Test Manager at Stewart, in which, after giving some definitions, he shared a case study on how a specific client got the insights it needed from performance testing. Full slides here: PSQC18_QaiserMuneer_PerformanceTesting
  • “Agile Test Manager – A shift in perspective” by Ahmad Bilal Khalid, Test Manager at 10Pearls, who travelled from Karachi for the event. ABK, as he likes to be called, recalled his own transformation from a traditional Test Manager to a Test Coach who is more of an enabler. His theme of experienced testers becoming dinosaurs and not helping new ones learn new stuff hit home and resulted in quite a fruitful discussion. Read more here: PSQC18_AhmadBilalKhalid_TestManager-ChangingTimes
  • “Agile Regression Testing” by Saba Tauqir, Regression Team Lead at Vroozi, who shared her current work experience, where they have a dedicated team for regression testing. This also sparked a debate within the audience as to how much regression testing can be sustained in Agile environments. See her talk here: PSQC18_SabaTquqir_RegressionTesting
  • “To be Technical or not, that is the question” by Ali Khalid, Automation Lead at Contour Software, which was perhaps the star talk of the day. He took the story of a hypothetical tester, “Asim”, and showed how he became a technical tester through four lessons. Easing up the learning with some funny clips and GIFs, Ali gripped the audience and conveyed his message strongly: build an attitude of designing algorithms and enjoying solving problems. Full slides here: PSQC18_AliKhalid_ToBeTechnical
  • “Power of Cucumber” by Salman Saeed, Automation Lead at Digitify Pakistan, who talked about his journey into automation through that very tool. He explained its different features, the Gherkin language and the tools needed to run it, and shared a piece of code showing a sample Google search test case. He urged everyone to use powerful tools like Cucumber to begin their automation journeys. He also promised to share the code with whoever contacts him, so feel free to bug him. His slides are here: PSQC18_SalmanSaeed_PowerofCucumber

A delicious lunch was waiting in the cafeteria, which was basically an excuse to learn from each other while enjoying the food. I could see many people catching up with speakers to ask follow-up questions, with some healthy conversations around them.

The audience was welcomed back with three more talks in the afternoon session:

  • “Distributed Teams” by Farah Gul, SQA Analyst at Contour Software, another speaker from Karachi. She first explained how different locations and time zones create challenges for working together as a team. She shared some real examples of how marketing campaigns failed in a foreign country due to language barriers. At the end she suggested ways to curb these challenges, including understanding the culture, spending more time in face-to-face communication and asking for clarity. Slides are here: PSQC18_FarahGul_DistributedTeams
  • “Backend Testing using Mocha” by Amir Shahzad, Software QA Manager at Stella Technology, who started off his talk with the ever-rising need for testing backends. He explained how RESTful APIs can be tested using Mocha, with some sample code. He also mentioned other libraries that can be used for better assertions and for publishing HTML reports. His talk is here: PSQC18_AmirShahzad_Mocha
  • “ETL Testing” by Arsala Dilshad, Senior SQA Engineer at Solitan Technologies, who shared her first-hand experience of testing ETL solutions. After providing an overview of her company’s processes, she explained how data quality, system integration and other kinds of testing are needed to deliver a quality solution. Read more details here: PSQC18_ArsalaDilshad_ETL Testing

Then came the best part of the day. We experimented with a new segment called “My 2 cents in 2 minutes”, which invited participants to come on stage and share any challenge they are facing in their profession. Inspired by the 99-second talks at TestBash, this proved to be a marvelous way to engage the audience. Around 20 awesome thoughts were presented by quality professionals on a variety of topics. I plan to write follow-up posts on some of the stuff that was brought up there, as it would be unjust to sum it up here in a few lines.

Another tea break was needed to defeat the afternoon nap, and seeing some samosas (and other snacks) being served with the tea resulted in many happy faces.

We were then back for the final and perhaps the best talk of the day: “Melting pot of Emotional and Behavioral Intelligence” by Muhammad Bilal Anjum, Practice Head QA & Testing at Teradata, who has more than a decade of experience in analytics. Bilal gave some examples of how the current situation of a potential customer can be predicted from the available data. For example, telco data combined with healthcare and other sources can be used to predict how likely a person is to buy some health solution. He then explained how culture plays a key role in human behavior and why industry consultants are in demand for jobs like these. At the end he threw out some ideas on how such solutions can be tested.

With all talks finished, I was asked to close the day as the General Chair of PSQC’18. I took this opportunity to thank the sponsors, partners, organizers (Qamber Rizvi, Salman Saeed, Adeel Shoukat, Ali Khalid, Mubashir Rashid, Muhammad Ahsan, Amir Shahzad, Ayesha Waseem, Salman Sherin, Ovyce Ashraph), speakers and audience, and made a point of mentioning how collaboration can produce results that can never be surpassed by individuals. To spark some motivation for participants to try out the wonderful ideas presented during the day, I quoted Munnu Bhai’s famous Punjabi poem with the punch line “Ho naheen jaanda, karna painda aye” (It never happens by itself, you have to do it).

Long live the Pakistan Software Quality community! We’ll be back with more events through the year and, yes, PSQC’19 in the spring of next year!


Participate in “State of the Testing Survey 2018”

The only way to improve yourself and your craft is to reflect on the state of affairs. The #StateOfTesting survey gives us exactly that opportunity: testers from across the globe give their valuable feedback and then gain value from the collective wisdom.

I have been taking part in the survey for some time, and I see that not many testers from Pakistan are doing so. Given that our community is on the rise and we just had our first-ever testing conference, it’s time to get in touch with the testing community of the world!

The link to the survey is:

The survey is brought to us by PractiTest and TeaTimeWithTesters magazine. You can find the results of the previous survey here.

Thanks for the help, and let’s make the (testing) world better than it is today!


Code Coverage Dos and Don’ts

Code Coverage is a good indicator of how well your tests cover your code base. It is measured with tools that run your existing set of tests and then report coverage at the file level, statement or line level, class and function level, and branch level.
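A tiny example of why those levels differ (the function and tests here are invented for illustration): the single first test executes every statement in the function, yet one of its two branches stays unexercised until a second test is added.

```python
def discount(price, is_member):
    # Member prices get 10% off.
    total = price
    if is_member:
        total = total * 0.9
    return total

# This single test executes every statement in discount()
# (100% statement coverage)...
assert discount(100, True) == 90.0

# ...but branch coverage stays below 100% until the False path
# of the `if` is also exercised:
assert discount(100, False) == 100
```

This is why a branch-level report can flag gaps that a statement-level report hides.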

Since the code coverage report gives you a number, the game of numbers kicks in, just like any other numbers game. If you set hard targets, people will chase them, and at times a number means nothing. Here are my opinions, based upon experience, on how to best use code coverage in your project:

Do run code coverage every now and then to guide new unit test development

Think of code coverage as a wind indicator flag on your unit testing boat. It can guide you where to maneuver your boat based upon the results. As Martin Fowler notes in this piece:

It’s worth running coverage tools every so often and looking at these bits of untested code. Do they worry you that they aren’t being tested?

The question is not how to get the list of untested code; the question is whether we should write tests for that untested code.


In our project, we measure code coverage for functions and spit out a list of functions that are not tested. The testing team does not then order the developers to write tests against them. The testing team simply suggests writing tests, and the owner of that code prioritizes the task based upon how critical that piece is and how often it is requested by the users.

Dorothy Graham suggests in this excellent talk that coverage can be either like “butter” or like “strawberry jam” on your bread. You decide whether you want “butter”-like coverage, i.e. cover all areas, or “strawberry jam” coverage, i.e. cover some areas in more depth.

Do not set a target of 100% code coverage

Setting a coverage goal is in itself disputed and often misused, as Brian Marick notes in this paper, which has been the foundation of much code coverage work since. Also, anything that claims 100% is suspicious; consider the following statements:

  • We can’t ship unless we have 100% code coverage
  • We want 100% of reported defects to be addressed in this release
  • We want 100% of tests to be executed in each build

You can easily see that 100% code coverage feeds the “test-it-all fallacy”, implying that we can test everything. Brian suggests in the same paper that 90 or 95% coverage is good enough.

We have set a target of 90% function coverage, but it is not mandatory for release. We put this information on the table along with other testing data, such as test results and the occurrence of bugs per area, and then leave the decision to ship to the person who is responsible. Remember, the job of testing is to provide information, not to make release decisions.

Yes, there is no simple answer to how much code coverage we need. Read this for amusement to see why we get different answers to this question.

Do some analysis on the code coverage numbers

As numbers can mean different things to different people, we need to ask stakeholders why they need code coverage numbers and what they mean by “covered”.

We asked this question, and the answer was to do a test heat analysis on our code coverage numbers. It gives us the following information:

  • Which pieces are hard (or easy) to automate?
  • Which pieces should be tested next? (as stated in the first Do)
  • Which pieces need more manual testing?
  • How much effort is needed for unit testing?
  • ….

Do use tools

There are language- and technology-specific tools. For our C++ API, we have successfully used Coverage Validator (licensed but very reasonably priced) and OpenCppCoverage (a free tool), which extract info by executing GoogleTest tests.

Do not assume covered code is well tested

You can easily write a test that covers each function or each statement without testing it well, or even without really testing it at all.
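A contrived illustration of that point (the function and tests are invented for this post): the “weak” test below executes both branches, achieving full coverage, yet it asserts nothing and so misses an obvious bug that an asserting test would catch.

```python
def absolute(x):
    # Deliberately buggy: the negative branch should return -x.
    return x if x >= 0 else x

def weak_test():
    # Executes both branches: 100% statement and branch coverage...
    absolute(5)
    absolute(-5)
    # ...but no assertions, so any result is "fine".

def strong_test():
    assert absolute(5) == 5
    assert absolute(-5) == 5  # would fail against the buggy code above

weak_test()  # passes silently despite the bug
```

Running `strong_test()` would immediately expose the defect, even though both tests produce the exact same coverage numbers.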

Along with the function-wise code coverage I mentioned above, we have a strong code review policy which includes reviewing the test code. We also write many scenario-level tests that do not add to the coverage but cover the workflows (the orders in which functions are called), which matter more to our users.

Brian summarizes it nicely in the aforementioned paper:

I wouldn’t have written four coverage tools if I didn’t think they’re helpful. But they’re only helpful if they’re used to enhance thought, not replace it.

How have you used code coverage in your projects? What dos and don’ts would you like to share?