Doc:latest/taeguide/overview

 
== Organization ==
 
The TAE is intended to be used both as an individual developer's test bench and as an enterprise-wide test automation solution. It is therefore organized in two independent parts: first, the web interface and historical database, which is generally deployed on a single machine; and second, the testing engine, which is often deployed on every developer's machine and on an automated (nightly) build-and-test server.
  
This allows the test automation environment to be run in a variety of situations and by a variety of users.  For example, it could be run by an automated nightly verification system or by a developer fixing a bug.  Tests can also be run at different physical sites (or even by multiple companies cooperating on the same project), yet all the results can be uploaded to a single centralized database and web interface for project analysis.
  
Lastly, the TAE is designed to allow programs written for a variety of purposes -- test, demo, evaluation, and real systems -- to be run.  By supplying a heavy "class" framework, most automated test frameworks' client libraries force the application into a specific unit-test mold, so the only software that can be run is test suites designed from the ground up to fit within the framework.  This framework instead allows software that was not written for the express purpose of testing to be annotated with compile-time optional tests (similar to assert()) that verify the correct execution of that software.  This means that your final application code can be self-testing and report errors to the TAE, just like your unit tests.
 
Each layer is summarized in this document, with a link to its detailed design.
 
=== Layer 1: Unit testing API (Clovis Test API) ===
  
The smallest layer is the API used in a program to run tests.  This layer consists of a set of C macros that group and run tests. When not testing -- for example, when a "release" build is created -- these macros are compiled out.  This is very similar to unit testing packages available for many languages.
  
 
If this layer is run alone, it will simply print test successes and failures to the screen.
 
 
When run within the layer 3 framework, test successes and failures will be posted to the TAE report server.
 
=== Layer 2: System Test API ===
This API layer allows tests to be written in the Python programming language and runs them on a machine that sits outside of the device under test.
  
Through these APIs, your tests can access the entire cluster and start/stop applications (or unit tests), cause failures of processes or nodes, and access CLIs and other external interfaces.
  
 
If this layer is run alone, it will simply print test successes and failures to the screen.
 
When run within the layer 3 framework, test successes and failures will be posted to the TAE report server.
 
=== Layer 3: Test Execution Management ===
  
This layer consists of a separate machine (the test driver) that controls the test "fixture" (the set of machines that constitute a test environment).  It is capable of taking software from a local directory, an FTP site, an HTTP site, or a Subversion repository, unpacking it (if needed), installing it on the fixture, compiling it, and running a series of tests.
  
These tests are executable programs that use the Layer 1 API, together with Layer 2 Python modules that "drive" the progress of the test suite and inject errors and events such as process, network, and machine failures.
  
 
If this layer is run alone, it will run through a complete test suite and report successes and failures.
 
 
=== Layer 4: Automated Run and Longitudinal Reports ===
 
The Automated Run and Longitudinal Reporting layer consists of a single, globally accessible server and a source repository (Subversion, etc.).  The server hosts a web site that lets users see which source code branches are failing which tests on what hardware, and it keeps a history of this information.

Latest revision as of 20:19, 12 March 2013
