Sunday, August 14, 2011

Glossary - QA


Testing: It involves operating an application under controlled conditions and evaluating the results in order to confirm that the application fulfills its stated requirements.
ST (Software Testing): The process of evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements, or to identify differences between expected and actual results.
QA: A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure that the system will meet its requirements. Focus: PROCESS; Proactive; Staff Function; Prevents Defects.
QC: A set of procedures intended to ensure that a manufactured product or service adheres to a defined set of quality criteria and meets the requirements of the client. Focus: PRODUCT; Reactive; Line Function; Finds Defects.
SQA: A planned and systematic pattern of all actions necessary to provide adequate confidence that a software work product conforms to established technical requirements.
Capability Maturity Model (CMM): A five-level staged framework that describes the key elements of an effective software process. It covers practices for planning, engineering, and managing software development and maintenance.
CMM Integration (CMMI): It covers practices for planning, engineering and managing product development and maintenance.
Unit T: It is a procedure used to validate that individual units of source code are working properly. Unit tests tell a developer that the code is doing things right; functional tests tell a developer that the code is doing the right things.
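For illustration, a minimal unit test sketch in Python (the add() function and test names are hypothetical, not part of the glossary):

    import unittest

    def add(a, b):
        """Unit under test: returns the sum of two numbers."""
        return a + b

    class TestAdd(unittest.TestCase):
        def test_adds_two_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_adds_negative_numbers(self):
            self.assertEqual(add(-1, -1), -2)

    if __name__ == "__main__":
        unittest.main()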
System T: Test of an entire system conducted to ensure that the system meets all applicable user and design requirements. Ex: resource loss bugs, throughput bugs, performance, security, recovery, and transaction synchronization bugs.
Integration test: Test which verifies that interfaces and interdependencies of products, modules, subsystems, and systems have been properly designed and implemented.
System Integration T: It takes multiple integrated systems that have passed system testing as input and tests their required interactions.
UAT: It is a phase of software development in which the software is tested in the "real world" by the intended audience. It gives the end users the confidence that the application being delivered to them meets their requirements.
Baseline: A specification or product that has been formally reviewed and agreed upon, that thereafter serves as the basis for further development, and that can be changed only through formal change control procedures.
Functional tests: They are written from a user's perspective. These tests confirm that the system does what users are expecting it to. Functional Tests test the entire system from end-to-end.
Walkthroughs: A presentation of developed material to an audience with a broad cross-section of knowledge about the material being presented. It gives assurance that no major oversight lies concealed in the material.
Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner
Error: A discrepancy between a computed, observed, or measured value / condition and the true, specified, or theoretically correct value or condition.
Failure: The inability of a system or component to perform its required functions within specified performance requirements
Fault: An incorrect step, process, or data definition in a computer program which causes the program to perform in an unintended or unanticipated manner
Verification: The process of determining whether or not the products of a given phase of the SDLC meet the implementation steps and can be traced back to the objectives established during the previous phase.
Validation: Establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its predetermined specifications and quality attributes.
White box T: Testing technique whereby explicit knowledge of the internal workings of the item being tested is used to select the test data.     (Structural, Clear box, Open box, Logic driven, Glass box)
Black box T: It subjects the program / system to inputs and its outputs are verified for conformance to specified behavior.                                 (Functional, Opaque box, Closed box, Data driven, Behavioral)
Gray box T: The tester applies a limited number of test cases to the internal workings of the software under test. For the remainder, a black-box approach is taken, applying inputs to the software under test and observing the outputs.
Positive T: Testing which attempts to show that a given module of an application does what it is supposed to do. (Not showing an error when not supposed to) + (Showing an error when supposed to). Also known as “test to pass”.
Negative T: Testing which attempts to show that the module does not do anything that it is not supposed to do. (Showing an error when not supposed to) + (Not showing an error when supposed to). Also known as “test to fail”.
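For illustration, a sketch pairing a positive test (“test to pass”) with a negative test (“test to fail”); the validate_age() function is hypothetical, not part of the glossary:

    import unittest

    def validate_age(age):
        # Accepts ages 0-120; raises ValueError otherwise.
        if not 0 <= age <= 120:
            raise ValueError("age out of range")
        return True

    class TestValidateAge(unittest.TestCase):
        def test_valid_age_is_accepted(self):       # positive: test to pass
            self.assertTrue(validate_age(30))

        def test_invalid_age_is_rejected(self):     # negative: test to fail
            with self.assertRaises(ValueError):
                validate_age(-5)

    if __name__ == "__main__":
        unittest.main()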
Bottom-Up T: An approach to integration testing where the lowest-level components are tested first and then used to facilitate the testing of higher-level components. The process is repeated until the component at the top of the hierarchy is tested.
Top-Down T: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower-level components being simulated by stubs. The process is repeated until the lowest-level components have been tested.
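A minimal sketch of the top-down idea, assuming a hypothetical OrderService as the top-level component and a stub (via unittest.mock) standing in for a not-yet-integrated payment gateway:

    from unittest.mock import Mock

    class OrderService:
        def __init__(self, payment_gateway):
            self.payment_gateway = payment_gateway

        def place_order(self, amount):
            return "CONFIRMED" if self.payment_gateway.charge(amount) else "DECLINED"

    # Lower-level component simulated by a stub so the top level can be tested first.
    payment_stub = Mock()
    payment_stub.charge.return_value = True

    service = OrderService(payment_stub)
    assert service.place_order(100) == "CONFIRMED"
    payment_stub.charge.assert_called_once_with(100)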
Functional T: Testing that verifies that the system conforms to the specified functional requirements. Its goal is to ensure that user requirements have been met.
Performance T: Testing conducted to evaluate the compliance of a system with specified performance requirements. It verifies that the system meets specific performance objectives in terms of response times under varying workloads.
Dynamic T: Testing the physical response from the system to variables that change with time and are not consistent.     
Static T: Testing which does not involve executing the code. Analysis of a program carried out without executing the program.
Accessibility T: A test designed to verify that a product is accessible to people with disabilities (e.g., visual, hearing, or cognitive impairments). Testing to determine the ease with which users with disabilities can use a system.
Ad hoc T: Testing carried out using no recognized test case design technique. Testing phase where the tester tries to break the software by randomly trying the system’s functionality. It is carried out informally.
Agile T: Testing practice for a project using agile methodologies, such as Extreme Programming (XP), treating development as the customer of testing and emphasizing the test-first design paradigm.
Alpha T: A phase of testing of an application when development is nearing completion; it is done by end users in a controlled environment, and minor design changes may still be made as a result of the testing.
Beta T: Testing conducted at one / more customer sites by the end-user of a delivered software product. This is usually a “friendly” user and the testing is conducted before it is officially launched.
Back-to-back T: Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies.
Boundary T: Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)
Compatibility T: A part of non-functional software testing; testing conducted on the application to evaluate its compatibility with the other systems with which it must communicate (the computing environment).
Component T: The process of examining individual hardware/software components, or a set of related components, in isolation. It is derived from the developer’s experience and intuition about how the components should operate in the system.
Complete T: Erroneously used to mean 100% branch coverage. Testing is "complete" when the tests specified by the criterion have been passed; absolutely complete testing is impossible.
Conversion T: Testing of programs / procedures used to convert data from existing systems for use in replacement systems.
Data and DB Integrity T: It is intended to uncover design flaws that may result in data corruption, unauthorized data access, lack of data integrity across multiple tables, and lack of adequate transaction performance.
Data flow T: Testing in which test cases are designed based on variable usage within the code
Data driven T: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file, table, or spreadsheet. A common technique in automated testing.
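For illustration, a data-driven sketch using pytest's parametrize feature; the discount() function and the values are hypothetical, and in practice the table would often be loaded from a file or spreadsheet:

    import pytest

    def discount(price, percent):
        return round(price * (1 - percent / 100), 2)

    CASES = [          # externally defined data values (here, an inline table)
        (100.0, 10, 90.0),
        (200.0, 0, 200.0),
        (50.0, 50, 25.0),
    ]

    @pytest.mark.parametrize("price,percent,expected", CASES)
    def test_discount(price, percent, expected):
        assert discount(price, percent) == expected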
Dependency T: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.
End-to-end T: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a DB, using network communications, or interacting with other hardware, applications, / systems if appropriate.
Independent T (3rd party): The approach of using personnel not involved in the development of the product or system in its testing.
Installation T: Testing that verifies the application installs correctly in the target environment (including any upgrade or uninstall procedures) and is fully operational after installation.
Load T: Determines response time of a system with various workloads within anticipated normal production range. It simulates user activity and analyzes effect of real-world user environment on an application.
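A minimal sketch of simulating user activity and measuring response times, assuming a hypothetical handle_request() stand-in for the system under test:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def handle_request():
        time.sleep(0.01)              # stand-in for real work (e.g., an HTTP call)
        return "ok"

    def timed_call(_):
        start = time.perf_counter()
        handle_request()
        return time.perf_counter() - start

    # ~50 simulated concurrent users issuing 500 requests in total
    with ThreadPoolExecutor(max_workers=50) as pool:
        latencies = list(pool.map(timed_call, range(500)))

    print(f"avg {sum(latencies) / len(latencies):.4f}s, max {max(latencies):.4f}s")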
Path T: Testing in which all paths in the program source code are tested at least once.
Recovery T: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.
Regression T (Verification T): Retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements
Re-testing: Executing a test again, typically after a defect has been fixed, to confirm that the functionality of the application now works as expected.
Sanity T: Brief test of major functional elements of a piece of software to determine if it’s basically operational.
Scalability T: Performance testing focused on the behavior of a system with expanded workloads simulating future production states such as added data and an increased amount of users.
Security T: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.
Smoke T: It validates that the fundamental operations of the program are ready to undergo more complex functional or scenario testing. It is a cursory examination of all of the basic components of a system to ensure that they work.
Stress T: Testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.
Volume T: It seeks to verify the physical and logical limits of a system's capacity and ascertain whether such limits are acceptable to meet the projected capacity of the application's required processing. The system is subjected to a large volume of data.
Audit: An independent examination of a work product or set of work products to assess compliance with specifications, standards, contractual agreements, or other criteria. See: functional configuration audit, physical configuration audit.
Acceptance Criteria: The criteria that a system / component must satisfy in order to be accepted by a user, customer, / other authorized entity.
Boundary value: A data value that corresponds to a minimum or maximum input, internal, or output value specified for a system or component.
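For illustration, boundary-value checks around a field assumed to accept 1-100; the accepts() helper and the limits are hypothetical:

    def accepts(value, minimum=1, maximum=100):
        return minimum <= value <= maximum

    # Test values sit on and immediately around each boundary.
    assert accepts(1) is True       # lower boundary
    assert accepts(0) is False      # just below the lower boundary
    assert accepts(2) is True       # just above the lower boundary
    assert accepts(100) is True     # upper boundary
    assert accepts(101) is False    # just above the upper boundary
    assert accepts(99) is True      # just below the upper boundary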
Code Coverage: Knowledge of the purpose, methods, and test coverage tools used for monitoring the execution of software and reporting on the degree of coverage at the statement, branch, or path level.
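A small sketch of the statement-versus-branch distinction (the grade() function and tests are hypothetical; tools such as coverage.py report coverage at these levels):

    def grade(score):
        result = "fail"
        if score >= 50:
            result = "pass"
        return result

    def test_pass():
        # Runs every statement in grade(), yet the path where the condition is
        # False is never taken: statement coverage 100%, branch coverage incomplete.
        assert grade(80) == "pass"

    def test_fail():
        # Adding this case exercises the remaining branch.
        assert grade(20) == "fail"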
Error rate: Understanding of mean time between errors as a criterion for test completion.                       
Risk: A measure of the probability and severity of undesired effects. Often taken as the simple product of probability and consequence
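A minimal sketch of that product, with hypothetical scales and values:

    def risk_exposure(probability, consequence):
        # Risk taken as the simple product of probability (0-1) and consequence.
        return probability * consequence

    # e.g., a failure with a 20% chance of occurring and an estimated $50,000 impact
    print(risk_exposure(0.2, 50_000))   # -> 10000.0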
Test: An activity in which a system or component is executed under specified conditions, the results are observed or recorded and an evaluation is made of some aspect of the system or component.
Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.
Test case: An assertion concerning the functioning of an application software entity, the truth of which must be demonstrated through testing in order to conclude that the entity meets an established user requirement.
Test plan: The selection of techniques and methods to be used to validate the product against its approved requirements. An overview of the project that lists the items to be tested and serves as a communication device between all project team members.
Test script: A system life cycle documentation standard that is the design specification for a test run. It defines the test cases to be executed, required set up procedures, required execution procedures, and required evaluation procedures.
Test and evaluation plan: It identifies high-level requirements, defines the objectives, and overall structure of the test and evaluation for a system. It details the test strategy, schedule, and resource requirements for test and evaluation.
Test data: Files, records, and data elements created by users, analysts, and developers to test requirements, design specifications, and software code. There is no standard format for test data.
Test req.: A description of the test which must be executed to verify a system / software requirement. They should exist at levels corresponding to the requirements. This is part of the traceability matrix.
Technical req.: Requirements that describe what the software must do and its operational constraints. Ex: functional, performance, interface, and quality requirements
Test procedure: Defines the procedures to be followed when applying a test suite to a product for the purposes of conformance testing.
Test architecture: The high-level design of a planned application software test. It includes: (1) a structural blueprint, (2) a definition of the test time dimension, and (3) a definition of the overall processing sequence for the test.
End user: The individual or group who will use the system for its intended operational use when it is deployed in its environment.
Prototype: An active model for end users to see, touch, feel, and experience. It is the working equivalent of a paper design specification, with one exception: errors can be detected earlier.
Inspections: Planned and formal technique used to verify compliance of specific development products against their documented standards and requirements. Knowledge should cover purpose, structure, and roles of participants.
Regression acceptance test: It may be necessary to execute a planned acceptance, integration, string, system, or unit test more than once, either because the initial execution did not proceed successfully to its conclusion or because a flaw was discovered in the system or subsystem being tested. The first execution of a planned test, whether or not successful, is termed an initial test. Subsequent executions, if any, are termed regression tests.
Software qualification test (SQT): This test phase verifies compliance with the system design objectives and tests each module/program/system against the functional specifications using the system test environment. It should include a performance test, a volume test, stress testing, operability tests, security and control tests, disaster recovery tests, and if applicable, a data conversion test
System acceptance: Testing of the system to demonstrate system compliance with user requirements.
Acceptance testing: A test of an application software system that is performed for the purpose of enabling the system sponsor to decide whether or not to accept the system. Formal testing conducted to determine whether a system satisfies its acceptance criteria and to enable the customer to determine whether to accept the system.
Software acceptance test (SAT): It is used to test effectiveness of the documentation, the training plan, environmental impact on the operating systems, and security. In this test phase, the user is involved in validating the acceptability of the system against acceptance criteria using the operational test environment. Establishing the test in the operational environment requires coordination between the System Developer and the Information Processing Centers and is used to validate any additional impacts to the operating environment. The completion of the SAT should result in the formal signing of a document accepting the software and establishes a new baseline.
Requirements traceability: It is defined as the ability to describe and follow the life of a requirement, in both a forward and backward direction (i.e. origins, development and specification, deployment and use, refinement and iterations).
Cross referencing: It involves embedding phrases like "see section x" throughout the project documentation (e.g., tagging, numbering, or indexing of requirements, and specialized tables or matrices that track the cross references).
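For illustration, a traceability matrix kept as a simple mapping from requirement IDs to the test cases that verify them (all IDs are hypothetical), which supports forward tracing and gap detection:

    traceability = {
        "REQ-001": ["TC-101", "TC-102"],
        "REQ-002": ["TC-201"],
        "REQ-003": [],                 # no test coverage yet
    }

    untested = [req for req, tests in traceability.items() if not tests]
    print("Requirements without tests:", untested)   # -> ['REQ-003']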
Phase-End Reviews: Review of products and the processes used to develop or maintain systems occurring at, or near, the completion of each phase of development, e.g., design, programming. Decisions to proceed with development, based on cost, schedule, risk, progress, etc., are usually a part of these reviews. A formal written report of the findings and recommendations is normally provided. These phase-end reviews are often called phase exits, stage gates, or kill points. Each project phase normally includes a set of defined work products designed to establish the desired level of management control. The majority of these items are related to the primary phase deliverable, and the phases typically take their names from these items: requirements, design, build, test, start-up, turnover, and others as appropriate. At the end of a project they are commonly called “post-mortem reviews”.
Functional Baseline: The initially approved documentation describing a system’s or configuration item’s functional characteristics and the verification tests required to demonstrate the achievement of those specified functional characteristics.
Allocated Baseline: The initially approved documentation describing a configuration item’s interface characteristics that are allocated from those of the higher level configuration item or those to a lower level, interface requirements with interfacing configuration items, additional design constraints, and the verification tests required to demonstrate the achievement of those specified functional and interface characteristics.
Product Baseline: The initially approved documentation describing all of the necessary physical and functional characteristics of the configuration item, including manufacturing processes and procedures, materials, any required joint and combined operations interoperability characteristics of a configuration item (incl. a complete summary of other service and allied interfacing configuration items or systems and equipment); the selected physical characteristics designated for production acceptance testing and tests necessary for production and support of the configuration item.
Process capability baseline (PCB): A documented characterization of the range of expected results that would normally be achieved by following a specific process under typical circumstances.
Process performance baseline (PPB): A documented characterization of the actual results achieved by following a process, which is used as a benchmark for comparing actual process performance against expected process performance. A process performance baseline is typically established at the project level, although the initial process performance baseline will usually be derived from the organization’s process capability baselines.
Critical path: A series of dependent tasks for a project that must be completed as planned to keep the entire project on schedule.
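A small sketch of computing the project duration along the critical path, treating tasks as a dependency graph; the task names, durations, and dependencies are hypothetical:

    from functools import lru_cache

    durations = {"A": 3, "B": 2, "C": 4, "D": 1}
    depends_on = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

    @lru_cache(maxsize=None)
    def finish_time(task):
        start = max((finish_time(dep) for dep in depends_on[task]), default=0)
        return start + durations[task]

    print(max(finish_time(t) for t in durations))   # -> 8, via the critical path A -> C -> D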
Development costs: It includes personnel costs; computer usage, training, supply, and equipment costs; and the cost of any new computer equipment and software. In addition, costs associated with the installation and start-up of the new system must be calculated.
