
Availability

ISTQB Glossary  definition 
"The degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage."
In Simple English,

This is a simple measure of how much of the time a system is available for users to use.
In testing, this is usually read as how much of the time the system is available for testers to test it.



Field Notes 


  • Most often it is expressed as a percentage.
  • The percentage is calculated as the time the system was actually available divided by the time it was expected to be available.
  • It can also be expressed in plain numbers. For example, "The test site was available for only 2 hours today."
  • A tricky thing about availability is that it is not always clear what qualifies a site or system as available. What if the URL works and opens the site to be tested, but the site is so slow that it takes 45 seconds to react to each step?
  • To avoid such situations, it is better to have a project- and test-phase-specific definition of availability in the test plan document.
  • Note that the availability measure does not change with the number of people to whom the system was not available.
  • Note that the availability measure does not change with the number of times the site was unavailable either; sometimes this must be handled separately. Imagine the site was available for 5 minutes, then unavailable for 5 minutes, then available again for just another 5 minutes, repeating that cycle all through the day. Mathematically, the site's availability would be 50%. However, that does not convey the whole picture, as the site is essentially unavailable for testing: very little testing can be achieved in 5-minute cycles.

For Example:

For example, say you are testing a website. Your team of 12 people had planned to test this website for 10 hours today. When the test day started, the website was available. After an hour the website became unavailable: the URL did not open at all. So you suspended testing and would report that the website's availability was 10% during the test day.
  • 10% availability = (1 hour available / 10 hours expected availability) * 100
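
A minimal sketch of the calculation in Python (the function name and numbers are my own illustration, not part of the ISTQB definition):

    def availability_percentage(hours_available: float, hours_planned: float) -> float:
        """Availability = time the system was available / time it was expected to be available, as a percentage."""
        if hours_planned <= 0:
            raise ValueError("hours_planned must be positive")
        return hours_available / hours_planned * 100

    # The example above: available for 1 hour out of a planned 10-hour test day.
    print(availability_percentage(1, 10))   # 10.0

    # The 5-minute-cycle caveat from the field notes: up for half of a 10-hour day still reads as 50%,
    # even though such short windows make meaningful testing nearly impossible.
    print(availability_percentage(5, 10))   # 50.0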

Automated testware

ISTQB Glossary  definition 


"Testware used in automated testing, such as tool scripts."
In Simple English,


I am still working on this post.
In an effort to explain these terms as simply as possible, I go through rigorous cycles of writing, reviewing and updating the post till it is simple enough. Meanwhile,
please leave your comments, questions & opinions about the testing term explained here, or about this site in general, in the comments below. I would love to know.

Field Notes 


For Example:


Audit trail

ISTQB Glossary  definition 
A path by which the original input to a process (e.g. data) can be traced back through the process, taking the process output as a starting point. This facilitates defect analysis and allows a process audit to be carried out. [After TMap]
In Simple English,

I am still working on this post.
In an effort to explain these terms as simply as possible, I go through rigorous cycles of writing, reviewing and updating the post till it is simple enough. Meanwhile,
please leave your comments, questions & opinions about the testing term explained here, or about this site in general, in the comments below. I would love to know.

Field Notes 


For Example:


Audit


ISTQB Glossary  definition 
"An independent evaluation of software products or processes to ascertain compliance to standards, guidelines, specifications, and/or procedures based on objective criteria, including documents that specify (1) the form or content of the products to be produced (2) the process by which the products shall be produced (3) how compliance to standards or guidelines shall be measured." 
[IEEE 1028]
In Simple English,

An audit is a check for compliance with a predetermined expected state.
It is very different from actual testing in that audits are, most of the time, static checks that do not interact much with the system under test.

Field Notes 


  • Audits are most often used as a complementary process to testing in development projects.
  • Audits are not a feasible replacement for software testing.
  • Audits are only as effective as the person performing them and their understanding of the audit's purpose and of the process or software being audited. This is unlike testing, where once a test is designed properly, anyone can execute it and determine a pass or fail state.
  • Audits of the process to be followed are very common. Regular security audits are common in most software companies. Process audits, such as ISO and CMMI audits, are also conducted in software development companies.
  • Audits mostly need pre-planning in terms of preparing a list of items to audit and the accepted state for each of them. Mature audit systems also have a measure for allowable deviation.
  • Audits are most often a group or team activity: one party conducts the audit, and another is responsible for the process or product being audited.
  • Some instances may, however, allow self-audits with standard processes and procedures to be followed.
  • The results of audits are not defects or bugs. Audits result in observations, recommendations and non-compliance items.
  • Checklists are the most common tool used for audits.

For Example:
Code audits are very common examples that are easy to understand.
Say, for example, the development team has a guideline that variable names should follow a specific pattern. An auditor may then audit each piece of code for compliance with this standard and issue a non-compliance for each instance of a variable not following the pattern.
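
A minimal sketch of such a check, assuming a hypothetical guideline that variable names must be lower_snake_case (the pattern and the sample names are made up for illustration):

    import re

    # Hypothetical guideline: variable names must be lower_snake_case.
    NAMING_RULE = re.compile(r"^[a-z][a-z0-9_]*$")

    def audit_variable_names(names):
        """Return one non-compliance observation per name that breaks the rule."""
        return [
            f"Non-compliance: variable '{name}' does not follow lower_snake_case"
            for name in names
            if not NAMING_RULE.match(name)
        ]

    # Illustrative audit run over a few variable names collected from the code.
    for finding in audit_variable_names(["order_total", "UserName", "x1", "TMP_value"]):
        print(finding)

In line with the field notes above, the output is a list of non-compliance observations rather than defects or bugs.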

attractiveness

The capability of the software product to be attractive to the user. [ISO 9126] See also usability.

Attack

ISTQB Glossary  definition 
An attack is a directed and focused attempt to evaluate the quality, especially reliability, of a test object by attempting to force specific failures to occur.
See also negative testing.
Note: For some reason this is not listed as part of the new version 3.01 of the glossary. I guess this is an oversight that will probably be rectified in future releases. However, in the spirit of learning and knowing your trade, I have included it here.

In Simple English,


A much easier way to understand Attack is to think of it as testing for defects that were successfully fixed in previous releases.


You know or assume that a particular part of the application under test will break, or you do not want a particular feature or part of the application under test to break, so you very specifically test that particular feature or previous failure. This is termed an attack.

Field Notes 


  • Attack as a term is used mostly as part of security testing jargon, where real attacks are simulated to test the system's capacity to withstand them.
  • In my experience, not many use this term for functional testing.
  • Though this is very close to negative testing, do not confuse the two; they are different things.
  • A good way to understand this would be to see an attack as a very specific method of testing which could also be grouped under the negative testing category. 
For Example:
I had to think a lot about a better example for this, other than a user ID and password based scenario, just so you understand the actual flavour of this test technique.

Still thinking... If you have any good examples, please leave them in the comments below. 
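
In the meantime, here is a rough, hypothetical sketch of the idea: the test very specifically targets a failure that was fixed in a previous release (the function, names and discount behaviour are invented for illustration):

    # Hypothetical function under test; an earlier release crashed with a
    # division error when the discount was exactly 100%.
    def final_price(price: float, discount_percent: float) -> float:
        remaining = 100 - discount_percent
        return price * remaining / 100

    def attack_full_discount() -> str:
        # The attack deliberately tries to force the previously fixed failure.
        try:
            result = final_price(50.0, 100.0)
        except ZeroDivisionError:
            return "FAIL: the previously fixed defect has reappeared"
        return "PASS" if result == 0 else f"FAIL: unexpected result {result}"

    print(attack_full_discount())   # PASS if the old failure can no longer be forced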


atomic condition

A condition that cannot be decomposed, i.e., a condition that does not contain two or more single conditions joined by a logical operator (AND, OR, XOR).
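
As a small illustration of my own (not from the glossary), the compound condition below is made up of two atomic conditions joined by a logical operator:

    age = 25
    has_licence = True

    # Compound condition: two atomic conditions joined by the logical operator "and".
    if age >= 18 and has_licence:
        print("May drive")

    # Each atomic condition cannot be decomposed any further:
    #   age >= 18
    #   has_licence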

assessor

A person who conducts an assessment; any member of an assessment team.

assessment report

A document summarizing the assessment results, e.g. conclusions, recommendations and findings. See also process assessment.

arc testing

See branch testing.

API (Application Programming Interface) testing

Testing the code which enables communication between different processes, programs and/or systems. API testing often involves negative testing, e.g., to validate the robustness of error handling. See also interface testing.
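
A minimal sketch of such a negative API test in Python, assuming a hypothetical endpoint https://example.test/api/users/{id} that should answer with HTTP 404 for an unknown id (the URL and expected status are assumptions, not from any real system):

    import requests  # a widely used HTTP client; any HTTP library would do

    # Placeholder base URL; in a real project this would come from the test environment configuration.
    BASE_URL = "https://example.test/api"

    def test_unknown_user_returns_404():
        """Negative test: requesting a non-existent user should be handled gracefully, not crash."""
        response = requests.get(f"{BASE_URL}/users/does-not-exist", timeout=5)
        assert response.status_code == 404, f"Expected 404, got {response.status_code}"

    # Intended to be run with a test runner such as pytest against a reachable test environment.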

anti-pattern

Repeated action, process, structure or reusable solution that initially appears to be beneficial and is commonly used but is ineffective and/or counterproductive in practice.

anomaly

Any condition that deviates from expectation based on requirements specifications, design documents, user documents, standards, etc. or from someone’s perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation. [IEEE 1044] See also bug, defect, deviation, error, fault, failure, incident, problem.

analyzer

See static analyzer.

analyzability

The capability of the software product to be diagnosed for deficiencies or causes of failures in the software, or for the parts to be modified to be identified. [ISO 9126] See also maintainability.

analytical testing

Testing based on a systematic analysis of e.g., product risks or requirements.

alpha testing

Simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.

algorithm test

[TMap] See branch testing.

agile testing

Testing practice for a project using agile software development methodologies, incorporating techniques and methods, such as extreme programming (XP), treating development as the customer of testing and emphasizing the test-first design paradigm. See also test driven development.

agile software development

A group of software development methodologies based on iterative incremental development, where requirements and solutions evolve through collaboration between self-organizing cross-functional teams.