
Revision as of 17:30, 13 March 2013 by Stone (Talk | contribs)




Create your model

Create a model using the OpenClovis IDE. Your model can use any redundancy model and contain any number of components and/or service groups. It can expect any number of blades. However, the TAE can run the same test on different device (test fixture) configurations. Therefore it is recommended that your organization choose a minimum configuration (preferably one that developers can run) and ensure that your model supports this configuration while still being able to utilize extra blades if they exist in the test fixture.

Add your tests

There are several ways to implement SAFplus Platform testcases:

  1. Write testcase in C as part of an SAFplus-based component. Such testcases should use the C TAE Interface, see below.
    • Such testcases should be triggered by unlocking AMF entities, typically Service Groups.
    • Such testcases should use the TAE C interface to mark the start and end of each testcase and to report its result.
    • Such testcases will be activated from the TAE robot, using a special 1-line wrapper (explained in "Testcase Integration" section).
  2. Write testcases purely in Python at the TAE robot side.
    • Such testcase is implemented as a test*.py file.
    • Such testcases can typically do anything that can be done using normal shell access to the target blades and debug CLI access to SAFplus Platform.
    • In the near future, direct access to HPI actions will also be supported.
    • It is also possible to invoke C functions in your (test) application from the Python test script, provided that the function is exposed using the SOAP RPC method described with a code snippet below.
    • Such testcases may not use the TAE C interface at all.
  3. The 3rd type of testcase is a mixture of the two extremes above: it implements tests using the C interface, but also has more than just one line of code in the Python layer (robot side).

When to Use What Approach?

When you try to decide what is the best approach for a given testcase, think from an application perspective. Some guidelines:

  • If the scenario you want to test is representative of what applications would do, lean more on the C-level implementation. Example:
    • An application uses the checkpoint service to save some data, and a standby component is trying to read that data.
  • If the scenario has to emulate some artificial external events, use the Python layer to induce the event. Example:
    • You need to reset a blade or emulate a kernel crash by invoking some command line commands. Would this code ever be done in a user application? NO! So, you should not (need to) implement this in C, but rather figure out a simple way of doing this from the Python testcase code.
  • If the scenario involves artificial sequencing of otherwise randomly occurring events across multiple nodes, the sequencing is best left to the Python layer. Example:
    • You need to bring some components up, and then crash them in a certain order. Again, you should ask: does the code that crashes the component have a natural place in the application? Would a customer ever write such code as a natural part of their application? Hardly. Also, would the sequencing be controlled in C from some other application? No. So, in that case, implementing the sequencing and the crash is best left to the Python (robot) layer.
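
This kind of Python-side sequencing can be sketched as follows. The StubNode class is a hypothetical stand-in for a real TAE fixture node (whose get_pid()/kill() methods appear later in this guide); the component names and pids are invented for the illustration:

```python
# Sketch of Python-layer crash sequencing. StubNode is a hypothetical
# stand-in for the TAE node object; only get_pid()/kill() mimic names
# shown elsewhere in this guide.

class StubNode(object):
    """Minimal stand-in for a TAE fixture node."""
    def __init__(self, name):
        self.name = name
        self.procs = {}            # component name -> fake pid

    def start_component(self, comp, pid):
        self.procs[comp] = pid     # pretend the component is now up

    def get_pid(self, comp):
        return [self.procs[comp]] if comp in self.procs else []

    def kill(self, pid, signal=15):
        for comp, p in list(self.procs.items()):
            if p == pid:
                del self.procs[comp]

def crash_in_order(node, components):
    """Bring components down in a fixed order, as a testcase might."""
    killed = []
    for comp in components:
        pids = node.get_pid(comp)
        if pids:
            node.kill(pids[0], signal=9)
            killed.append(comp)
    return killed

node = StubNode('Worker0')
node.start_component('server', 101)
node.start_component('client', 102)
order = crash_in_order(node, ['client', 'server'])
print(order)   # components crashed client-first
```

The point of the sketch is only that the ordering logic lives entirely in the Python layer; the components themselves contain no crash code.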

Python TAE Interface

If you plan to implement your tests in a combination of C and Python by invoking some C functions from a Python testcase, you must add the TAE SOAP server and remotely callable functions to your model. For an example of how to do this, please look at the "sysctrlcomp" component in the "bicTests" model in the "asptest" project here. This component is untouched except for the addition of 2 remotely callable functions. You must also add the "tae" subdirectory, as seen here. This is where you define and implement your RPC calls. Finally, you must hook up the Makefiles so that the "tae" subdirectory is built, as seen here.

Next, you must implement your tests in Python and have them call down to the C layer. This is best shown by the example test_lint.py.

C TAE Interface

  • Any service group instance that runs tests MUST be named tcSg<number>[Name]. For example: tcSg001MessagingTest, tcSg034.
  • To run 2 service group instances simultaneously, use tcSg<same number>[different name]. For example: tcSg001a and tcSg001b, or tcSg005MsgClient and tcSg005MsgServer.
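
A small checker for this naming convention, as a sketch only; the regular expression below is an assumption about the exact rules, which the TAE may apply differently:

```python
import re

# Hypothetical validator for the tcSg<number>[Name] convention
# described above. The exact rules the TAE enforces may differ.
SG_NAME = re.compile(r'^tcSg(\d+)([A-Za-z]\w*)?$')

def parse_sg_name(name):
    """Return (number, optional name part), or None if non-conforming."""
    m = SG_NAME.match(name)
    if not m:
        return None
    return int(m.group(1)), m.group(2)

for n in ('tcSg001MessagingTest', 'tcSg034', 'tcSg005MsgClient', 'mySg1'):
    print(n, parse_sg_name(n))
```

Names sharing a number (tcSg005MsgClient, tcSg005MsgServer) parse to the same number with different name parts, matching the "run simultaneously" rule above.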
Include and Library
#include <clTestApi.h>

Code is located in the SAFplus Platform utils library (SAFplus/components/utils/...).

Summary

The module provides a set of functions that are useful while implementing regression tests.

The module creates a hierarchy of tests: a test, a test case, and a test point. The "test" is started/stopped by clTestInitialize/clTestFinalize (clTestStart/End are deprecated), which should bracket all other clTest function calls. In general, you'll call each of these functions once in your program. Next, within a "test", you can implement any number of "test cases". A test case is whatever you want it to be, but generally think of it as a grouping based on configuration, load, or strategy. To start/end a test case, use clTestCaseStart/End. You can also use "clTestCase" if your test is a single line (like a function call) -- it's just syntactic sugar.

Finally, individual predicates are called "test points". Use the clTestXXX functions for these. The basic function is clTest. You essentially pass it a predicate (an expression that evaluates to a boolean) and it checks the truth of that predicate. There is also a similar function that lets you execute code if the test failed. This is mostly used to skip subsequent test points in a setup where each point requires that the prior one succeed. Finally, you can claim that a test succeeded, failed, or malfunctioned, without checking any predicate.

Malfunctioned? What's that? A "malfunction" occurs when initial conditions necessary to run the test were not met. For example, let's say you are running a messaging test. The test checks that messaging works between 2 processes on the same blade, and then checks between 2 blades. But what if it is run on a chassis with a single blade? In this case, the test could report a malfunction (for example, via clTestCaseMalfunction).
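
A pure-Python sketch may make the test / test-case / test-point semantics concrete. The classes below are illustrative inventions, not the real clTest C API:

```python
# Pure-Python analogue of the hierarchy described above. The real API
# is the clTest* C family; these class and method names are invented
# for illustration only.

class Test(object):
    """Analogue of clTestInitialize/clTestFinalize scope."""
    def __init__(self, name):
        self.name = name
        self.cases = []

    def case(self, name):
        c = TestCase(name)
        self.cases.append(c)
        return c

class TestCase(object):
    """Analogue of clTestCaseStart/End scope."""
    def __init__(self, name):
        self.name = name
        self.passed = self.failed = 0
        self.malfunction = False

    def point(self, description, predicate):
        """A test point: check one predicate and count the outcome."""
        if predicate:
            self.passed += 1
        else:
            self.failed += 1
        return predicate

    def malfunctioned(self):
        """Preconditions not met: neither a pass nor a fail."""
        self.malfunction = True

t = Test('messaging')
same = t.case('same-blade')
same.point('send works', 1 + 1 == 2)
same.point('receive works', 2 * 2 == 4)
multi = t.case('cross-blade')
blades = 1                       # pretend the fixture has one blade
if blades < 2:
    multi.malfunctioned()        # cannot run; report a malfunction
print(same.passed, same.failed, multi.malfunction)
```

Note how the malfunctioned case records neither a pass nor a fail: the test was inconclusive, which is exactly the distinction the malfunction state exists to capture.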

Context

These functions will act differently depending on the context in which they are executed.

  • If run independent of a Test Automation Environment (TAE), they will print consistently formatted messages that can be analysed by scripts. To stem the deluge of data, by default the module runs in a mode that does not print test point successes. But errors WILL be printed.
  • If run within a TAE, these functions will communicate state to the TAE.

Note, some functions take format strings. Please refrain from using newlines or carriage returns ("\n" or "\r"), since the functions do the formatting for you. Also, do not use success or failure words, for example "Failed", "Pass", "Success", or "Ok", since the functions add this annotation themselves. These words can confuse the TAE Report Server's parsing of the test output, since they are the keywords the system uses to analyse test results.
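
A toy scanner shows why such keywords are dangerous in free-form messages. This is an illustration only, not the Report Server's actual parsing logic, and the keyword list is taken from the paragraph above:

```python
# Toy line scanner illustrating the hazard described above: a naive
# keyword-based parser (like this one; the real Report Server's rules
# are not shown in this guide) counts stray result words as results.

KEYWORDS = ('Failed', 'Pass', 'Success', 'Ok')

def count_results(output):
    """Count keyword hits per line of test output."""
    counts = dict.fromkeys(KEYWORDS, 0)
    for line in output.splitlines():
        for kw in KEYWORDS:
            if kw in line:
                counts[kw] += 1
    return counts

good = "checkpoint write took 12 ms"
bad  = "checkpoint write Ok, Failed to see any delay"

print(count_results(good))   # no spurious hits
print(count_results(bad))    # one log line, two spurious result hits
```

The second message would be counted as both a pass and a failure, even though it is just a log line; hence the rule to keep these words out of your messages.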

Functions

C API documentation is available here.

Failover support

The Python TAE interface can be used to trigger various failures in the chassis. Organise your tests to use the Python layer for this failure injection.

Multi-blade coordination

If your test must coordinate the activity of multiple blades, then you must use the Python layer. TAE provides a rich set of APIs for this purpose.

Debug

Python TAE Interface

You can call your Python-to-C functions outside of the TAE with some very simple code. In this example, assume that your RPC function is called "Log", and that you have set the MyPort variable to the TCP port of your TAE server (initialized in your component). Your function will be magically created as a member of the WSDL.Proxy class, so in the example below it will be called in the line "soap.Log(...)".

import pdb
from SOAPpy import *
from SOAPpy import WSDL
Config.simplify_objects = 1

MyPort = 8100
soap = WSDL.Proxy("http://localhost:%d/wsdl" % MyPort)

# Call your function.  NOTE you MUST use the keyword=value argument format!
print soap.Log(severity=1, area="TST", context="RPC", log="Test the RPC")

C TAE Interface

You can debug your own test by unlocking it as you would a normal application through the debug CLI (asp_console). The output of your test (i.e., calls to the clTestXXX functions) will be stored in /var/log/testresults.txt.


Examples

Python test case

This test case is available in the basic infrastructure model "bicTests" and is named test_lint.py:


import pdb
import openclovis.test.testcase as testcase

class arbitraryname(testcase.TestCase): # The name of the class does not matter

    # This function is executed before each test_xxxx member function is called
    def set_up(self):
        
        # self.fixture is a large data structure that models the whole
        # chassis or cluster (i.e. all of the machines SAFplus Platform runs on).
        fix = self.fixture
        # pdb.set_trace()
        
        # Get to the right node in the chassis.
        node = fix.nodes["SysCtrl0"]
        
        # Connect to a process on the system controller blade so RPC calls can
        # be made. Note that the port 8100 is "well known" for that application.
        self.rpc = node.get_rpc("sysctrl", 8100)

    # This function is executed after each test_xxxx member function is called
    def tear_down(self):
        del self.rpc  # Clean up my RPC client

    def test_doesnotmattername(self): # methods with 'test' prefix are testcases
        r"""
        \testcase   SQA-TAE-REG.TC001
        \brief      TAE regression test and demo testcase (LINT)
        \description
        This is a "veterinary horse" kind of testcase, demonstrating the
        various features of the test environment and testing those
        features at the same time.
        """
        
        ##----------------------------------------------------------------
        ##
        ## Logging
        ##
        ## .log                             object
        ## .log.debug()                     method
        ## .log.info()                      method
        ## .log.warning()                   method
        ## .log.error()                     method
        ## .log.critical()                  method
        ##
        ##----------------------------------------------------------------
        
        self.log.debug('This is a sample debug message')
        self.log.info('This is a sample info message')
        self.log.warning('A sample warning')
        self.log.error('A sample error that is recoverable')
        self.log.critical('A sample critical error')
        
        ##----------------------------------------------------------------
        ## Local bash access
        ##
        ## .tae_host                        object
        ## .tae_host.run()                  method
        ## .tae_host.run_cmd()              method
        ##
        ##----------------------------------------------------------------
        
        #
        # Running Unix commands on the local host that the tae robot is
        # running (this is not part of the fixture), using the pre-opened
        # bash session 'tae_host':
        #
        
        # Just need output of command. In case of error, an exception is thrown.
        # Before using 'run' check for a member function that already implements
        # your command.  If it exists it will parse the output and return
        # 'Pythonized' data (like a list of filenames for 'ls').

        # If the member function does not exist, consider writing one!
        res = self.tae_host.run('wc -l /etc/passwd')
        
        # res includes the trailing newline; to ignore it, use rstrip()
        self.log.debug('Output: %s' % res.rstrip())
        
        # Need both return code and output (does not throw an exception)
        rc, out = self.tae_host.run_cmd('wc -l /etc/passwd')
        self.log.debug('Return values: rc=[%d] out=[%s]' % (rc, out.rstrip()))
        
        ##----------------------------------------------------------------
        ##
        ## Build server features
        ##
        ## .fixture                         object
        ## .fixture.build_server            object
        ## .fixture.build_server.run()      method
        ## .fixture.build_server.run_cmd()  method
        ## .fixture.build_server.scp()      method
        ##
        ##----------------------------------------------------------------
        
        # Running Unix commands on the build server of the fixture

        res = self.fixture.build_server.run('hostname')
        self.log.debug('The buildserver is [%s]' % res.rstrip())
        
        # Miscellaneous info about the build server

        self.log.debug('Build server IP    : [%s]' % self.fixture.build_server.ip)
        self.log.debug('Build server login : [%s]' % self.fixture.build_server.user)
        self.log.debug('Build server passwd: [%s]' % self.fixture.build_server.password)
        
        # Run scp on the build server to copy in or out something
        # Syntax: scp(frm, to, pw) where either frm or to can also include
        # a username.
        self.log.info('Checking accessibility by ping before attempting scp...')
        if not self.fixture.build_server.run_cmd('ping -c 1 10.10.6.1')[0]:
            self.fixture.build_server.scp('root@10.10.6.1:.bashrc', '.', 'clovis')
        
        ##----------------------------------------------------------------
        ##
        ## Fixture target node features
        ##
        ## .fixture.nodes                   object, works as dictionary
        ## .fixture.nodes.keys()            method
        ## .fixture.nodes.values()          method
        ## .fixture.nodes.items()           method
        ## .fixture.nodes[name].name        str
        ## .fixture.nodes[name].ip          str
        ## .fixture.nodes[name].user        str
        ## .fixture.nodes[name].password    str
        ## .fixture.nodes[name].ping        method
        ## 
        ##----------------------------------------------------------------
        
        # Target node access, based on the model mapping
        # Key: self.fixture.nodes is a Python dictionary indexed by the node
        # names. Each element is a Node class with a few useful info and
        # methods.
        
        # List of node names in the fixture:
        node_names = self.fixture.nodes.keys()
        self.log.debug('Model node names: %s' % str(node_names))
        
        # To access a given node:
        node = self.fixture.nodes['SysCtrl0']

        # A few useful values in node:
        self.log.debug('Fixture node name  : [%s]' % node.name)
        self.log.debug('Fixture node ip    : [%s]' % node.ip)
        self.log.debug('Fixture node login : [%s]' % node.user)
        self.log.debug('Fixture node passwd: [%s]' % node.password)

        # A few useful methods:
        if node.ping(): # ping node from tae server and check if accessible
            self.log.debug('Fixture [%s] at [%s] is accessible' %
                           (node.name, node.ip))
        else:
            self.log.error('Fixture [%s] at [%s] is not accessible' %
                           (node.name, node.ip))

        ##----------------------------------------------------------------
        ##
        ## Unix commands on fixture node
        ##
        ## .fixture.nodes[name].bash            object (session)
        ## .fixture.nodes[name].bash.run()      method
        ## .fixture.nodes[name].bash.run_cmd()  method
        ##
        ##----------------------------------------------------------------
        
        # To run a Unix command line command on the fixture node using a
        # preinstantiated bash session (this works in the same way as the
        # build_server bash session above):
        res = self.fixture.nodes['SysCtrl0'].bash.run('df -h . | tail -n 1')
        self.log.debug('Disk on fixture node [%s] is [%s] percent full ([%s] available)' %
                       (node.name, res.split()[4], res.split()[3]))

        # Note you must leave this bash session at the Unix prompt!
        # For example, do not do .bash.run("tail -f /var/log/asp")
        # For this, you can create your own bash session.

        shell =  self.fixture.nodes['SysCtrl0'].create_bash()
        # To start a command that you don't expect to complete:
        # expect = shell.start("tail -f /var/log/messages")
        # Returned is a pexpect object

        ##----------------------------------------------------------------
        ##
        ## Other generic fixture node methods (also available as methods
        ## of any bash session)
        ##
        ## .fixture.nodes[name].get_pid()       method
        ## .fixture.nodes[name].kill()          method
        ## .fixture.nodes[name].killall()       method
        ##
        ##----------------------------------------------------------------
        
        # To get list of pids for any running process by given name
        pids = self.fixture.nodes['SysCtrl0'].get_pid('bash') # returns a list
        self.log.debug('Number of bash sessions: [%d]' % len(pids))
        self.log.debug('PID of first bash session: [%s]' % 
                       (len(pids) and pids[0] or 'N/A'))

        # To kill a process by pid or process name
        node = self.fixture.nodes['SysCtrl0']
        node.bash.run('ping localhost > /dev/null & ' * 4) # starting 4 pings
        pids = node.get_pid('ping')
        self.log.debug('Number of ping sessions: [%d]' % len(pids))

        node.kill(pids[0])
        node.kill(pids[1], signal=9)
        self.log.debug('Number of ping sessions: [%d]' % len(node.get_pid('ping')))

        node.killall('ping')
        self.log.debug('Number of ping sessions: [%d]' % len(node.get_pid('ping')))

        ##----------------------------------------------------------------
        ##
        ## SAFplus Platform specific fixture node methods
        ##
        ## .fixture.nodes[name].asp_running()   method
        ## .fixture.nodes[name].start_asp()     method
        ## .fixture.nodes[name].stop_asp()      method
        ## .fixture.nodes[name].restart_asp()   method
        ##
        ##----------------------------------------------------------------

        # Check if SAFplus Platform is running and start it if not
        
        # Note, the framework starts the SAFplus Platform before running your test case,
        # so you can assume that the SAFplus Platform is running.  That is, you don't
        # need to call this function, unless your test shuts down SAFplus Platform.
        
        if self.fixture.nodes['Worker0'].asp_running():
            self.log.debug('SAFplus Platform is running on the node already')
        else:
            self.fixture.nodes['Worker0'].start_asp()

        # Bring down SAFplus Platform and then back
        if self.fixture.nodes['Worker0'].asp_running():
            self.log.debug('Stopping SAFplus')
            self.fixture.nodes['Worker0'].stop_asp()
        self.fixture.nodes['Worker0'].start_asp()
        
        # Restart SAFplus Platform in a single command
        self.fixture.nodes['Worker0'].restart_asp()
                    
        ##----------------------------------------------------------------
        ##
        ## Fixture-wide methods
        ## Note that all fixture-wide methods have node equivalents that
        ## have the same name but are members of the 'node' object.
        ##
        ## .fixture.start_asp()                 method
        ## .fixture.stop_asp()                  method
        ## .fixture.restart_asp()               method
        ##
        ##----------------------------------------------------------------

        
        # Stopping SAFplus Platform on all nodes
        ### self.fixture.stop_asp()
        
        # Starting up SAFplus Platform on all nodes
        ### self.fixture.start_asp()
        
        # Restarting SAFplus Platform on all nodes
        # Does it wait?
        ### self.fixture.restart_asp()

        # self.fixture.wait_until_asp_up(50)   # TBD, 
        
        ##----------------------------------------------------------------
        ##
        ## Test case python script debugging
        ##
        ## pdb.set_trace()                      method
        ##
        ##----------------------------------------------------------------
        ##
        ## To get the python debugger break during test execution, issue the
        ## pdb.set_trace() call ANYWHERE IN YOUR TESTCASE code. To check this
        ## feature out, uncomment the two lines below
        ##
        
        #import pdb
        #pdb.set_trace()

        ##----------------------------------------------------------------
        ##
        ## Debug CLI access
        ##
        ## .fixture.has_debug_cli()             method
        ## .fixture.start_debug_cli()           method
        ## .fixture.debug_cli.root()            method
        ## .fixture.debug_cli.run()             method
        ##
        ##----------------------------------------------------------------

        # All "get_xxx" functions return a cached version if it exists or create
        # if it does not.
        # All "create_xxx" functions create a new one and pass it back to you.

        # In the case of the debug CLI, there can be only 1 instance, so there
        # is no "create" function.
        dbgcli = self.fixture.get_debug_cli()
        
        # Run some native debug cli commands:
        dbgcli.run('setc 1')
        dbgcli.run('setc cpm')

        # You may also use the fixture variable debug_cli, if you are sure
        # that it is valid.
        res = self.fixture.debug_cli.run('compList')
        self.log.debug('First 20 partial lines of component list:')
        for line in res.splitlines()[:20]:
            self.log.debug('>> %-65s ...' % line[:60])

    ##----------------------------------------------------------------
    ##
    ## Determining test result
    ##
    ## Test ERROR is not the same as a FAIL-ed test
    ##
    ## The former means the test is not conclusive because the test
    ## procedure itself could not be completed due to some errors.
    ## The latter means the test subject failed the test.
    ##
    ## Test errors:  any unhandled Python exception that occurs while
    ##               running the testcase will be regarded by the test
    ##               framework as a test error.
    ## Test failure: test failures are generated when an explicit
    ##               verification of some result produces a negative
    ##               result. These kinds of check are done using one of
    ##               the following method calls:
    ##
    ## .assert_true(condition [, failure info])
    ## .assert_false(condition [, failure info])
    ## .assert_equal(value1, value2 [, failure info])
    ## .assert_not_equal(value1, value2 [, failure info])
    ## .assert_almost_equal(value1, value2 [, failure info])
    ## .assert_not_almost_equal(value1, value2 [, failure info])
    ## .assert_raises(exception [, failure info])
    ##
    ## These are demonstrated in separate testcases below not as part
    ## of this LINT testcase
    ##
    ##----------------------------------------------------------------

    def test_always_errors1(self):
        r"""
        \testcase   SQA-TAE-REG.TC002
        \brief      This testcase should always produce an ERROR
        """
        l = [0, 1]
        print l[100] # index is out of range, will generate a python exception
    
    def test_always_fails(self):
        r"""
        \testcase   SQA-TAE-REG.TC003
        \brief      This testcase should always produce a FAIL
        """
        self.assert_equal(1, 10, 'This test is failed purposely')

    def test_well_documented(self):
        r"""
        \testcase   SQA-TAE-REG.TC004
        
        \brief      This is a well documented testcase example
        
        \description
        You can have an arbitrarily long description of the testcase.
        You can have an arbitrarily long description of the testcase.
        You can have an arbitrarily long description of the testcase.

        You can have an arbitrarily long description of the testcase.
        You can have an arbitrarily long description of the testcase.
        You can have an arbitrarily long description of the testcase.

        You can have an arbitrarily long description of the testcase.
        You can have an arbitrarily long description of the testcase.
        You can have an arbitrarily long description of the testcase.

        \state      enabled

        \prerequisites
         * SC0 is accessible
         * OS booted on system controller
         * Python test environment (with PyOpenHPI) is available on SC0
         * IP address of the shelf manager is known via the SHMGR_IP environment variable

        \steps
         1 start the test application, which will attempt to set up an HPI
           session, and wait until its output is printed

        \criteria
          \li HPI session open returns with success
        """
        pass # always passes

    def test_unimplemented(self):
        r"""
        \testcase   SQA-TAE-REG.TC005
        
        \brief      Unimplemented testcase (always errors out)
        """
        self.log.debug('This is an unimplemented testcase')
        raise testcase.TestcaseNotImplemented

    def test_with_measurement(self):
        r"""
        \testcase   SQA-TAE-REG.TC006
        
        \brief      Testcase example with measurements
        
        \description
        This measures two attributes, as described below.
        
        \measured
        \data FREE_DISK_SPACE  [MB] Free disk space on first node
        \data RANDOM_NUMBERS   []   Random numbers in [0, 1) interval
        \data ASP_STARTUP_TIME [ms] Ping latency between two nodes

        """
        ##----------------------------------------------------------------
        ##
        ## Measurements related stuff
        ##
        ## openclovis.test.bin                  module
        ## openclovis.test.bin.Bin()            class
        ## openclovis.test.bin.Bin.record()     method
        ## .report_data()                       method
        ##
        ##----------------------------------------------------------------
        
        import openclovis.test.bin as bin

        # Free disk space example
        raw_data = self.fixture.nodes['SysCtrl0'].bash.run('df -m . | tail -n 1')
        free_space = int(raw_data.split()[3])
        self.report_data(bin.Bin('FREE_DISK_SPACE', free_space, unit='MB'))

        # Random number measurement
        import random
        array = [random.random() for foo in range(10000)]
        self.report_data(bin.Bin('RANDOM_NUMBERS', array, fmt='%6.4f'))
        
        # Step-by-step data collection
        data = bin.Bin('ASP_STARTUP_TIME', unit='s')
        import time
        self.log.info('Measuring SAFplus Platform startup time, using 5 runs')
        if self.fixture.nodes['Worker0'].asp_running():
            self.fixture.nodes['Worker0'].stop_asp()
        for i in range(5):
            start_time = time.time()
            self.fixture.nodes['Worker0'].start_asp()
            stop_time = time.time()
            data.record(stop_time - start_time)
            self.fixture.nodes['Worker0'].stop_asp()
            self.log.debug('- Iteration %d done' % (i+1))
        self.report_data(data)


    def test_with_rpc(self):
        r"""
        \testcase   SQA-TAE-REG.TC007
        \brief      Testcase example with RPC calls
        
        \description
        This runs 2 functions on the target within a particular process.
        To see the server side implementation, look at:
        SAFplus/models/unitTests/app/sysctrlcomp/tae
        
        \measured
        \data Length of a task delay.
        """
        # This example shows passing strings.
        self.rpc.Log(severity=1,area="TST",context="LNT", log="Testing the RPC mechanism.  This log actually originated from the test_lint.py script running on the TAE.")

        # This example shows how a complex data structure can be returned.
        ret = self.rpc.TaskSleep(sec=3,msec=0)
        self.log.info("Raw data received from RPC call: %s" % str(ret))
        self.log.info('Sleep of 3 seconds was measured (on the node) as taking %d ms' % ((int(ret["sec"]) * 1000) + int(ret["msec"])))
        
    def test_disabled(self):
        r"""
        \testcase   SQA-TAE-REG.TC008
        \brief      Disabled testcase
        \state      disabled
        """
        self.log.critical('You should never see this testcase executed by TAE')

C testcase (SG based)

This is an example of a functional test of our bitmap data structure and is located in the basic infrastructure test model in: testBitmap.c


void TC2_BitMap(void)
{
    ClBitmapHandleT bitHdl  =   CL_BM_INVALID_BITMAP_HANDLE;
    ClRcT           rc      =   CL_OK;
    ClRcT           retVal  =   CL_OK;
    ClUint32T       bitCount=   0;
    ClUint32T       bitNum  =   0;

    for(bitCount = 3, bitNum = 1; bitCount <= 100; bitCount += 2, bitNum++)
    {
        clTestCaseMalfunction(
               ("Bitmap create"),
               (rc = clBitmapCreate(&bitHdl, bitCount)) == CL_OK,
               return);

        clTest(("Bitmap set bit [%d]", bitNum),
               (rc = clBitmapBitSet(bitHdl, bitNum)) == CL_OK,
               ("Error: rc[0x %x]", rc));

        clTest(("Bitmap is bit[%d] set?", bitNum),
               (((clBitmapIsBitSet(bitHdl, bitNum, &retVal)) 
                 == CL_BM_BIT_SET) && CL_OK == retVal),
               ("Error: rc[0x %x]", retVal));

        clTest(("Bitmap set bit [%d]", (bitNum + 2)),
               (rc = clBitmapBitSet(bitHdl, (bitNum + 2))) == CL_OK,
               ("Error: rc[0x %x]", rc));

        clTest(("Bitmap is bit[%d] set?", (bitNum + 2)),
               (((clBitmapIsBitSet(bitHdl, (bitNum + 2), &retVal)) 
                 == CL_BM_BIT_SET) && CL_OK == retVal),
               ("Error: rc[0x %x]", retVal));

        clTest(("Bitmap destroy"),
               (rc = clBitmapDestroy(bitHdl)) == CL_OK,
               ("Error: rc[0x %x]", rc));

    }

}

void
clTestBitmapMain(void)
{

    clTestGroupInitialize(("Test of Bitmap utility"));

    clTestCase(("BIC-UTL-BIT.TC003 Set Bits in a bitmap and Verify whether the bits are set"), 
		TC2_BitMap());

    (void) clTestGroupFinalize();
  
}

  • The string passed to the clTestCase function, "BIC-UTL-BIT.TC003 Set Bits in a bitmap and Verify whether the bits are set", defines the test name that the TAE Report Server will use to report this test (in this case "BIC-UTL-BIT.TC003") and also includes the short description that appears in the Report Server.
  • This example shows just one test case of many that can appear in a single service group.
  • When a service group is assigned work (a CSI), this should trigger your test to run. When your test is complete, it should simply wait until the work is unassigned. Also, the service group should be modeled in locked-assignment mode so that the TAE infrastructure can start the test when it is ready to do so.
  • Your tests must call the functions in the Test API defined in "clTestApi.h" so that output is generated that can be parsed by the TAE Report Server.
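
As a hypothetical illustration of the first point above, the clTestCase string can be thought of as a test name (the first token) followed by a short description. The split below is a guess at that convention, not the Report Server's actual parser:

```python
# Illustrative split of a clTestCase string into test name and short
# description, per the convention described above. This is NOT the
# TAE Report Server's real parsing code.

def split_testcase_string(s):
    """Return (name, description) from a clTestCase-style string."""
    parts = s.split(None, 1)          # split on first whitespace run
    name = parts[0]
    description = parts[1] if len(parts) > 1 else ''
    return name, description

s = "BIC-UTL-BIT.TC003 Set Bits in a bitmap and Verify whether the bits are set"
name, desc = split_testcase_string(s)
print(name)    # BIC-UTL-BIT.TC003
print(desc)
```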