Step 1: Create your model
- Create a model using the OpenClovis IDE. A good example model is located in the "SAFplus Platform test" project in ClovisForge. Your model can use any redundancy model and contain any number of components and/or service groups. It can expect any number of blades. However, the most common configuration will be 2 system controllers and 2 payload blades. It would be best if your model supported this configuration (or fewer) and was capable of utilizing extra blades if they exist.
Python TAE Interface
No specific instructions...
C TAE Interface
- Any service group instances that run tests MUST be called tcSg<number>[Name]
- For example:
- tcSg001MessagingTest, tcSg034
- To run 2 service group instances simultaneously use tcSg<same number>[different name]
- For example:
- tcSg001a and tcSg001b, or tcSg005MsgClient and tcSg005MsgServer
Include and Library
#include <clTestApi.h>
Code is located in the SAFplus Platform utils library (SAFplus/components/utils/...).
Summary
The module provides a set of functions that are useful when creating regression tests.
The module creates a hierarchy of tests: a test, a test case, and a test point. The "test" is started/stopped by clTestInitialize/clTestFinalize (clTestStart/End are deprecated), which should precede and follow all other clTest function calls. In general, you will call each of these functions once in your program. Next, within a "test" you may have any number of "test cases". A test case is whatever you want it to be, but generally think of it as a grouping -- perhaps by configuration, load, or strategy. To start/end a test case, use clTestCaseStart/End. You can also use "clTestCase" if your test case is a single line (like a function call) -- it's just syntactic sugar.
Finally, individual predicates are called "test points". Use the clTestXXX functions for these. The basic function is clTest. You pass it a predicate (an expression that evaluates to a boolean) and it checks the truth of that predicate. There is also a similar function that lets you execute code if the test fails; this is mostly used to skip subsequent test points in a setup where each point requires that the prior one succeed. Finally, you can declare that a test succeeded, failed, or malfunctioned without checking any predicate.
Malfunctioned? What's that? A "malfunction" occurs when initial conditions necessary to run the test were not met. For example, let's say you are running a messaging test. The test checks that messaging works between 2 processes on the same blade, and then checks between 2 blades. But what if it is run on a chassis with a single blade? In this case, the test could call TestMalfunction.
Context
These functions will act differently depending on the context in which they are executed.
- If "CL_TESTING" is not defined at compile time, they will all be no-ops (not yet implemented). So, like assertions, do not put critical function calls inside a clTest macro! Of course, this only applies if you are writing a "normal" program that also has a test interface. If you are writing a test, it will never be compiled without CL_TESTING defined.
- If run independently of a Test Automation Environment (TAE), they will print consistently formatted messages that can be analysed by scripts. To stem the deluge of data, there is a mode that does not print test point successes.
- If run within a TAE, they will communicate state to the TAE, so do not expect real-time performance.
Note that some functions take formatted strings. Please refrain from embedding newline or carriage-return characters ("\n", "\r"), since these functions do the formatting for you. Also, do not use success or failure words, for example "Failed", "Pass", "Success", or "Ok", since the functions add this annotation themselves.
Functions
<TBD> Test APIs available
Example
Step 2: Add your tests
There are various ways/options to implement SAFplus Platform testcases:
- Write testcase in C as part of an SAFplus-based component. Such testcases should use the C TAE Interface, see below.
- Such testcases should be triggered by unlocking AMF entities, typically Service Groups.
- Such testcases can be run completely independent of the TAE robot, for instance via the debug CLI.
- Such testcases should use the TAE C interface to report the start and end of testcases, and the results of the testcase
- Such testcases can also be activated from the TAE robot, using a special 1-line wrapper
- Write testcases purely in Python at the TAE robot side.
- Such testcase is implemented as a test*.py file
- Such testcases can typically do anything that can be done using normal shell access to the target blades and debug CLI access to SAFplus Platform
- In the near future, direct access to HPI actions will also be supported
- It is also possible to invoke C functions in your (test) application from the Python test script, provided that the function is exposed using the SOAP RPC method described and demoed below
- Such testcases may not use the TAE C interface at all
- The third type of testcase is a mixture of the two extremes above: it implements tests using the C interface, but also has more than just one line of code in the Python layer (robot side)
When to Use What Approach?
When you try to decide what is the best approach for a given testcase, think from a (customer) application perspective. Some guidelines:
- If the scenario you want to test is representative of what customer applications would do, lean more on the C-level implementation. Example:
- An application uses the checkpoint service to save some data, and a standby component is trying to read that data.
- If the scenario has to emulate some artificial external events, use the Python layer to induce the event. Example:
- You need to reset a blade or emulate a kernel crash by invoking some command-line commands. Would this code ever appear in a user application? No! So you should not (need to) implement this in C; rather, figure out a simple way of doing it from the Python testcase code.
- If the scenario involves artificial sequencing of otherwise randomly occurring events across multiple nodes, the sequencing is best left to the Python layer. Example:
- You need to bring some components up and then crash them in a certain order. Again, ask yourself: does the code that crashes the component have a natural place in the application? Would a customer ever write such code as a natural part of their application? Hardly. Also, would the sequencing be controlled in C from some other application? No. So in that case, implementing the sequencing and the crash is best left to the Python (robot) layer.
Python TAE Interface
- If you plan to implement your tests in a combination of C and Python by invoking C functions from a Python testcase, you must now add the TAE SOAP server and remotely callable functions to your model. For an example of how to do this, please look at the "sysctrlcomp" component in the "unitTests" model in the "asptest" project here. This component is untouched except for the addition of 2 remotely callable functions. You must also add the "tae" subdirectory, as seen here. This is where you define and implement your RPC calls. Finally, you must hook up the Makefiles so that the "tae" subdirectory is built, as seen here (the first 2 changes are unimportant -- they just remove extra spaces).
- Next you must implement your tests in Python and have them call down to the C layer. This is best shown by example, so this user guide continues here in the unitTests/test/lint test case. You can also look at unitTests/test/* for more examples.
C TAE Interface
- When a service group is assigned work (a CSI) this should trigger your test to run. When your test is complete it should simply wait until the work is unassigned.
- Your tests must call the functions in the Test API defined here. You can also use the Test Lifecycle APIs defined here to facilitate starting test cases with different parameters.
- A simple example test is located in the osal component test; the simplest example is the task test. This "task test" example shows just one of many test cases that can appear in a single service group. To make this example a fully runnable standalone test, you would have to call the clTestGroupInitialize and clTestGroupFinalize functions before and after calling the task test (see testOsal.cxx).
Failover support
The Python TAE interface can be used to trigger various failures in the chassis. Organise your tests to use the Python layer.
Multi-blade coordination
- If your test must coordinate the activity of multiple blades then you must use the Python layer. But please ask yourself "is this what I really want?" The customer's application will not be able to coordinate blade activity in this manner. Perhaps a better test would be to use the system like the customer would use it...
Step 3: Debug
Python TAE Interface
- You can call your Python-to-C functions outside the TAE with some very simple code. In this example, assume that your RPC function is called "Log" and that you have set the MyPort variable to the TCP port of your TAE server (initialized in your component). Your function is created automatically as a method of the WSDL.Proxy object, so in the example below it is called on the line "soap.Log(...)".
import pdb
from SOAPpy import *
from SOAPpy import WSDL

Config.simplify_objects = 1
MyPort = 8100
soap = WSDL.Proxy("http://localhost:%d/wsdl" % MyPort)

# Call your function. NOTE you MUST use the keyword=value argument format!
print soap.Log(severity=1, area="TST", context="RPC", log="Test the RPC")
C TAE Interface
- You can debug your own test by unlocking it as you would a normal application through the debug CLI (asp_console). The output of your test (i.e., calls to clTestXXX functions) will be stored in /var/log/testresults.txt.