TAE Guide: Overview


Organization

The TAE is organized following the Matryoshka (nested doll) principle: each facility requires all "smaller" facilities, yet any particular facility, together with all "smaller" facilities, constitutes a working automation environment (albeit one missing the features provided by the "larger" facilities) and is useful in certain circumstances.

The most important purpose of this organization is to create an incremental development schedule that yields a usable test framework quickly and a fully featured one when those features are required.

Another important purpose is to allow the automation environment to be run in a variety of situations and by a variety of users. For example, it could be run by an automated nightly verification system (layer 3) or by a developer fixing a bug (layers 1 and 2). It could be run by an OpenClovis engineer (layers 1, 2, or 3) or by a customer verifying compatibility with their hardware (layer 3).

Lastly, it is designed to allow programs written for a variety of purposes -- test, demo, evaluation, and real systems -- to be run. Traditional automated test frameworks are so all-encompassing that the only software that can be run is a test suite designed from the bottom up to fit within the framework. This framework instead allows software that was not written for the express purpose of testing to be annotated with compile-time optional tests (similar to assert()) that can be used to verify the correct execution of that software.

Each layer is summarized in this document, with a link to its detailed design.

Layer 1: C program API (Clovis Test API)

The smallest layer is the API used within a program to run tests. It consists of a set of C macros that group and run tests; when not testing, these macros can be compiled out. This is very similar to the unit testing packages found for many languages.
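A minimal sketch of the idea follows. The macro name TAE_TEST and its output format are assumptions made for illustration, not the actual Clovis Test API; see the layer 1 detailed design for the real macros.

 /* Hypothetical sketch -- TAE_TEST is an assumed name, not the
  * actual Clovis Test API. */
 #include <stdio.h>
 
 #ifdef TAE_TESTING
 /* With testing enabled, the macro evaluates the predicate and
  * reports the result (printed here; posted to the TAE report
  * server when run under layer 3). */
 #define TAE_TEST(name, pred) \
     do { \
         if (pred) printf("PASS: %s\n", (name)); \
         else printf("FAIL: %s (%s:%d)\n", (name), __FILE__, __LINE__); \
     } while (0)
 #else
 /* With testing disabled, the macro compiles out entirely, much
  * like assert() when NDEBUG is defined. */
 #define TAE_TEST(name, pred) ((void)0)
 #endif
 
 int main(void)
 {
     int sum = 2 + 2;
     /* The program runs normally; the check costs nothing in a
      * non-test build. */
     TAE_TEST("addition works", sum == 4);
     return 0;
 }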

If this layer is run alone, it will simply print test successes and failures to the screen.

When run within the layer 3 framework, test successes and failures will be posted to the TAE report server.

Layer 2: Python-based testing (using APIs provided by the TAE)

This layer allows test programs to be written in Python. Such a program uses the rich set of Python APIs provided by the TAE infrastructure, and the layer supplies multiple utilities for running tests in a distributed environment.
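The sketch below suggests only the shape of a layer 2 test program; it is plain Python, and the helper function stands in for the real layer 2 APIs, which are described in that layer's detailed design.

 # Hypothetical sketch -- run_test stands in for the real layer 2
 # APIs, which this illustration does not assume.
 def run_test(name, predicate):
     """Report one test result, as a layer 2 utility might."""
     print(("PASS" if predicate else "FAIL") + ": " + name)
     return predicate
 
 def test_suite():
     # A layer 2 test program groups related checks; under the
     # layer 3 framework the results would be posted to the TAE
     # report server rather than printed to the screen.
     results = [
         run_test("addition works", 2 + 2 == 4),
         run_test("upper-casing works", "tae".upper() == "TAE"),
     ]
     print("%d/%d tests passed" % (sum(results), len(results)))
 
 if __name__ == "__main__":
     test_suite()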

If this layer is run alone, it will simply print test successes and failures to the screen.

When run within the layer 3 framework, test successes and failures will be posted to the TAE report server.

Layer 3: Test Execution Management and Event Simulation

The Test Execution Management and Event Simulation (TEMES) layer consists of a separate machine (the test driver) that controls the test "fixture" (the set of machines that constitute a test environment). It is capable of taking software from a local directory, an FTP site, an HTTP site, or a Subversion repository; unpacking it (if needed); installing it on the fixture; compiling it; and running a series of tests.

These tests are primarily executable programs that use the layer 1 and 2 APIs and conform to a loose modeling format.

Additionally, test-specific scripts may be run on the TEMES to "drive" the progress of a test and cause events such as process, network, and machine failures.
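As an assumed illustration of such a driver script (the fixture host name, target process, and use of ssh/pkill below are placeholders, not the real TEMES interface; see the layer 3 detailed design):

 # Hypothetical sketch -- the host name, process name, and use of
 # ssh/pkill are illustrative assumptions, not the TEMES interface.
 import subprocess
 import time
 
 FIXTURE_NODE = "fixture-node-1"  # placeholder fixture machine
 
 def inject_process_failure(node, process):
     """Simulate a process failure by killing the process on a node."""
     subprocess.call(["ssh", node, "pkill", "-9", process])
 
 def drive_test():
     # Let the software under test run for a while, then cause a
     # failure event; the layer 1/2 checks in that software verify
     # its recovery.
     time.sleep(30)
     inject_process_failure(FIXTURE_NODE, "example_server")
 
 if __name__ == "__main__":
     drive_test()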

If this layer is run alone, it will run through a complete test suite and report successes and failures.

Layer 4: Automated Run and Longitudinal Reporting

The Automated Run and Longitudinal Reporting (ARLR) layer consists of a single, globally accessible server (the ARLR server) and a source repository (Subversion, etc.). The ARLR server hosts a web site that lets users see which source-code branches are failing which tests on which hardware, and it keeps a history of this information. It also communicates with idle (or layer 3-dedicated) test drivers to schedule runs of a set of test suites against a set of branches and to receive results from those drivers.