Please note that these files are WebEx recordings whose size exceeds 1 MB, so I am enclosing the files in the SAP Box link, which is valid only for 30 days.
Thanks,
Aj
In this document I would like to share my lessons learned and show that it is possible to run automated Unit Tests in ABAP and integrate them with a Hudson (or Jenkins) Continuous Integration server. With some development effort we can have all Unit Tests running every day, see test results with code coverage in Hudson, and look at nice graphs and historical data summaries.
Document overview:
When I started with SAP development almost 2 years ago, I looked for automated Unit Test possibilities. Unfortunately, Continuous Integration is not a strong side of ABAP development. You can in fact schedule and run tests regularly, but the way the results are presented was not enough for me. As I already had experience with Hudson projects built for Java software (easy integration, by the way), I tried to find the same possibilities in the ABAP world. Unfortunately - no plugins for SAP integration with Hudson, and not even one integration example in the forums. So I took the initiative into my own hands.
Before I go into details, let me explain how I understand the Continuous Integration process in the ABAP context. It must be fully automated and consists of the following steps:
1. Compile and build code - in ABAP it is actually done for free with activation.
2. Run tests, different levels possible:
3. Collect and present statistics on Continuous Integration server - tests results and code coverage are available in ABAP.
4. Notify users in case at least one test has failed.
5. Optionally deploy code - send it to the quality / production system - not included in my framework.
In the framework I focused only on ABAP Unit Test automation, run in the development system rather than the quality system, because:
The Continuous Integration process should be triggered after each change commit, or at least once per day. For me once per day is enough (there are many developers and changes during the day) and the framework is configured as such.
The framework actually fulfills the criteria from points 1-4, just with scope limited to Unit Tests in the development system. Even so, it is a lot.
So let's look at what we have. The framework is run for all Unit Tests in the system from custom packages (Z* or Y* packages). As I do not want to present statistics for all development in my company, I just show example results from the ZCAGS_CI package, which is actually the package containing the framework development itself.
Hudson projects for two development landscapes D83 and D87 are defined:
Example ZCAGS_CI package preview with Unit Test results (real duration on test level is not implemented):
Anyone who already knows Hudson will be familiar with the package hierarchy and the links to nested levels. We can click on an example row to see its details. Let's preview the ZCL_CAGS_CI_CODE_COVERAGE test results:
We see all 8 tests run from the local class LCL_CAGS_CI_CODE_COVERAGE, and all of them passed. If there were an error, we could see it by clicking through to the test method link.
In addition, Hudson keeps the history of all builds and test statistics. The graph below shows the history of Unit Tests for all packages (I have hidden the number of tests normally shown on the Y axis):
As you can see from the graph, there were some failing tests that nobody cared about (red color). Having daily automated Unit Tests in place exposed that clearly, so the tests were finally fixed. All executed tests should have success status - the daily green build goal. In addition, we see an increasing number of tests due to Continuous Integration awareness and team engagement.
Now let's look at the code coverage statistics for the ZCAGS_CI package. We see the historical progress since the beginning; code coverage finally increased to 71.2% on the line level:
Code coverage shows which of the classes in the package were touched/entered during execution of the Unit Tests. This is one of the helpful statistics for measuring test code quality.
We see a graph with historical data. 4 levels of code coverage are available:
Let me describe an example of coverage based on two classes:
1. ZCL_CAGS_CI_HTTP_REQ_HANDLER
2. ZCL_CAGS_CI_REPORT
Code coverage and Unit Test views are available for all custom packages in the SAP system that have executable code (top level view).
All these statistics can be extracted with SAP standard tools. The Hudson integration just adds another, extended GUI view on top of the results and makes test run management easier. I can run the tests at any time from Hudson. By default tests are scheduled on every working day. This is configurable in the Hudson project.
It is worth mentioning that we can configure the Hudson project so that it sends automatic email notifications if even a single test failure occurred. We can have a single person that monitors the daily status, or even the whole team included in the notifications - it depends on us.
Hudson offers even more options and plugins and is, in my view, a good candidate for a Continuous Integration server for daily automated tests.
Main report looks like this:
We can specify a defined list of packages or load all Y* and Z* packages for test execution. The second option is the default when run from Hudson - we want to run all tests for custom packages in the system.
Code coverage is measured only for classes and excluded for programs. The fact is that reports often do not need Unit Tests or even cannot be tested. We also have many historical reports from times when there were no Unit Tests yet, which is why code coverage statistics that include programs are very low. That is why we focus the code quality measurements only on classes - it is easier to focus and monitor improvements.
Two levels of results are possible - package level and user level. Package level is the default; we see how well each package is covered with tests. User level is just for one's own overview and history monitoring. We cannot get exact user coverage, as a class and all its test methods are "owned" by the creator; further updates from another user just add statistics to the original class creator.
After the report is run, all results are saved to files with the given names and path. The files are formatted according to the JUnit format, which is a simple XML format with a predefined schema and hierarchy. This is needed for the integration with Hudson - we need to transfer the files and read them on Hudson to show the final results.
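For reference, the JUnit result format is essentially a `<testsuite>` element containing one `<testcase>` per executed test method, with failures reported as nested `<failure>` elements. A minimal sketch (class and method names here are illustrative, not taken from the framework):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="ZCL_CAGS_CI_CODE_COVERAGE" tests="2" failures="1" errors="0" time="0.0">
  <testcase classname="LCL_CAGS_CI_CODE_COVERAGE" name="TEST_COVERAGE_READ" time="0.0"/>
  <testcase classname="LCL_CAGS_CI_CODE_COVERAGE" name="TEST_COVERAGE_CALC" time="0.0">
    <failure message="Critical assertion failed">Expected 80, actual 75</failure>
  </testcase>
</testsuite>
```

Hudson's JUnit plugin reads files of this shape and aggregates them into the package / class / test method hierarchy shown above.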
How are the results calculated? That is the core of the report, and here I need to say thank you to the SAP standard and its developers.
SAP development is going in a good direction: the Unit Test framework is being extended and new automated tools are available. One of them is the report rs_aucv_runner, which can actually be run from SE80 -> right click menu on a class -> Execute -> Unit Tests With -> Job Scheduling.
That is a very good report that allows the user to run Unit Tests for any packages, including subpackages. My custom report is built on top of it - it submits the standard program, reads the required data through implicit enhancements during runtime and saves the results to files. In fact we can get Unit Test and code coverage results from this standard report, but to be honest the results are not very informative - no information about test counts, code coverage shown separately for each class only after selecting it, and finally no history, just the current results:
That is why I have chosen Hudson. It presents the same data in a different and better way.
It is also possible to schedule daily automated Unit Tests from the SCI Code Inspector. However, there is mainly one piece of useful information there - failed tests. That is basically enough to be notified in case of test failures. Anyhow, code quality statistics and a historical progress overview are a good part of Agile development, where reviews and retrospectives are part of life.
I hope this document will be an inspiration for those who are searching for automated Unit Test possibilities like I was before. I believe that Unit Tests are important and improve software development quality as well as efficiency. To encourage people to use them, it is worth establishing daily automated tests in the first place. If we add another layer of status visualization and progress history, it helps even more. We should believe that good quality software development requires automated tests; that is why it is worth building motivation with supportive tools.
Hudson ABAP integration is one of the options. I started with Unit Tests, but I guess it would not be difficult to add another level of tests and present the results on the same server. Personally I find Hudson more user friendly. It is enough to just open an HTML link instead of an SAP transaction. The results are present all the time, calculated at night. They expose the two most important statistics: the Unit Test failure/success rate and the code coverage values, for all packages that we specify. If we have many teams, each can focus on monitoring its own package. If we add the history and statistics kept in Hudson, that is a good help for tracking progress and improving ourselves.
It would be good if SAP could provide built-in integration with a Hudson server as a plugin. If someone is interested in the details of the development, I can share them, or maybe even post a new document or publish the code project - I need to investigate the possible options for code sharing/publishing rights if needed.
Good luck with automated tests in ABAP!
I was asked by many to publish the source code. Finally, after a long time, I got some capacity to do it - here it is. I put it on an external server, as a zip file was not allowed on this forum:
ABAP Hudson integration.zip - Google Drive
Go to Readme.txt and try to implement the framework in your system. You can contact me in case of any doubts. Good luck!
I retain the property rights to the code, but you can use it for your own or company purposes, mentioning that the source comes from me.
Hi,
SoapUI is a free and open source cross-platform testing solution. With an easy-to-use graphical interface and enterprise-class features, SoapUI allows you to easily and rapidly create and execute automated functional, regression, compliance, and load tests. In a single test environment, SoapUI provides complete test coverage and supports all the standard protocols and technologies. There are simply no limits to what you can do with your tests. This document will help you configure the SoapUI tool for testing.
We normally maintain variants in the live system, but we may come across a scenario where we want to transport the variants along with the program from the development system to the live system or testing system.
SAP has provided a standard program to pull a variant into a transport; in the document below, I will show its usage.
Step 1: Go to SE38, enter the program name RSTRANSP and execute.
Step 2: Provide the program name and the name of the variant that needs to be transported to the next level and execute the report. The constraint is that the variant must be available in the system where this report is being executed (mostly the development system).
Step 3: A popup appears with the program name and variant name, which need to be selected. The checkbox needs to be ticked.
Step 4: A popup window for the transport organizer appears, where we can create a new transport or use our own request.
INTERFACE if_td_currency_converter PUBLIC .
EVENTS new_currency_code EXPORTING VALUE(currency_code) TYPE string.
METHODS convert
IMPORTING
amount TYPE i
source_currency TYPE string
target_currency TYPE string
RETURNING VALUE(result) TYPE i
RAISING cx_td_currency_exception.
METHODS convert_to_base_currency
IMPORTING
amount TYPE i
source_currency TYPE string
EXPORTING
base_currency TYPE string
base_curr_amount TYPE i.
ENDINTERFACE.
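The test examples that follow inject the double into cl_td_expense_manager, the class under test, whose source is not shown in this document. For orientation, here is a hypothetical minimal sketch of such a class, using constructor injection of the converter (the actual demo class may differ):

```abap
"NOTE: hypothetical sketch of the class under test - not the original source
CLASS cl_td_expense_manager DEFINITION.
  PUBLIC SECTION.
    METHODS constructor
      IMPORTING currency_converter TYPE REF TO if_td_currency_converter.
    METHODS add_expense_item
      IMPORTING description   TYPE string
                currency_code TYPE string
                amount        TYPE i.
    METHODS calculate_total_expense
      IMPORTING currency_code TYPE string
      RETURNING VALUE(result) TYPE i
      RAISING   cx_td_currency_exception.
  PRIVATE SECTION.
    TYPES: BEGIN OF ty_item,
             description   TYPE string,
             currency_code TYPE string,
             amount        TYPE i,
           END OF ty_item.
    DATA mo_converter TYPE REF TO if_td_currency_converter.
    DATA mt_items     TYPE STANDARD TABLE OF ty_item WITH DEFAULT KEY.
ENDCLASS.

CLASS cl_td_expense_manager IMPLEMENTATION.
  METHOD constructor.
    "constructor injection: in tests, a test double is passed in here
    mo_converter = currency_converter.
  ENDMETHOD.
  METHOD add_expense_item.
    DATA ls_item TYPE ty_item.
    ls_item-description   = description.
    ls_item-currency_code = currency_code.
    ls_item-amount        = amount.
    APPEND ls_item TO mt_items.
  ENDMETHOD.
  METHOD calculate_total_expense.
    "convert every item into the requested currency and sum up
    DATA ls_item TYPE ty_item.
    LOOP AT mt_items INTO ls_item.
      result = result + mo_converter->convert( amount          = ls_item-amount
                                               source_currency = ls_item-currency_code
                                               target_currency = currency_code ).
    ENDLOOP.
  ENDMETHOD.
ENDCLASS.
```

With this shape, replacing if_td_currency_converter by a test double isolates the expense calculation from any real conversion logic.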
CLASS ltcl_abap_td_examples DEFINITION FINAL FOR TESTING
DURATION SHORT RISK LEVEL HARMLESS.
PRIVATE SECTION.
METHODS:
create_double FOR TESTING RAISING cx_static_check.
ENDCLASS.
CLASS ltcl_abap_td_examples IMPLEMENTATION.
METHOD create_double.
DATA: lo_currency_converter_double TYPE REF TO if_td_currency_converter,
lo_expense_manager TYPE REF TO cl_td_expense_manager.
"create test double object
lo_currency_converter_double ?= cl_abap_testdouble=>create( 'if_td_currency_converter' ).
"injecting the test double into the object being tested
CREATE OBJECT lo_expense_manager EXPORTING currency_converter = lo_currency_converter_double.
ENDMETHOD.
ENDCLASS.
METHOD simple_configuration.
DATA: lo_currency_converter_double TYPE REF TO if_td_currency_converter,
lo_expense_manager TYPE REF TO cl_td_expense_manager,
lv_total_expense TYPE i.
"create test double object
lo_currency_converter_double ?= cl_abap_testdouble=>create( 'if_td_currency_converter' ).
"configuration for stubbing method 'convert':
"step 1: set the desired returning value for the method call
cl_abap_testdouble=>configure_call( lo_currency_converter_double )->returning( 80 ).
"step 2: specifying which method should get stubbed
lo_currency_converter_double->convert(
EXPORTING
amount = 100
source_currency = 'USD'
target_currency = 'EUR'
).
"injecting the test double into the object being tested
CREATE OBJECT lo_expense_manager EXPORTING currency_converter = lo_currency_converter_double.
"add one expense item
lo_expense_manager->add_expense_item(
EXPORTING
description = 'Line item 1'
currency_code = 'USD'
amount = '100'
).
"actual method call
lv_total_expense = lo_expense_manager->calculate_total_expense( currency_code = 'EUR' ).
"assertion
cl_abap_unit_assert=>assert_equals( exp = 80 act = lv_total_expense ).
ENDMETHOD.
METHOD configuration_variants.
DATA: lo_currency_converter_double TYPE REF TO if_td_currency_converter,
lo_expense_manager TYPE REF TO cl_td_expense_manager,
lv_total_expense TYPE i.
"create test double object
lo_currency_converter_double ?= cl_abap_testdouble=>create( 'if_td_currency_converter' ).
"eg1: configuration for exporting parameters
cl_abap_testdouble=>configure_call( lo_currency_converter_double )->set_parameter( name = 'base_currency' value = 'EUR'
)->set_parameter( name = 'base_curr_amount' value = 80 ).
lo_currency_converter_double->convert_to_base_currency(
EXPORTING
amount = 100
source_currency = 'USD'
).
"eg2: configuration ignoring one parameter. 55 gets returned if source currency = 'USD' , target currency = 'EUR' and any value for amount.
cl_abap_testdouble=>configure_call( lo_currency_converter_double )->returning( 55 )->ignore_parameter( 'amount' ).
lo_currency_converter_double->convert(
EXPORTING
amount = 0 "dummy value because amount is a non optional parameter
source_currency = 'USD'
target_currency = 'EUR'
).
"eg3: configuration ignoring all parameters. 55 gets returned for any input
cl_abap_testdouble=>configure_call( lo_currency_converter_double )->returning( 55 )->ignore_all_parameters( ).
lo_currency_converter_double->convert(
EXPORTING
amount = 0 "dummy value
source_currency = 'USD' "dummy value
target_currency = 'EUR' "dummy value
).
ENDMETHOD.
Please note that the configure_call method configures the next method call statement on the test double. If you need to configure different methods of an interface, configure_call must be invoked once per method.
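As a combined sketch of the variants shown above, stubbing both interface methods means one configure_call block per method (the values are illustrative):

```abap
"configure 'convert': the call following configure_call defines the stubbed method
cl_abap_testdouble=>configure_call( lo_currency_converter_double )->returning( 80 ).
lo_currency_converter_double->convert( amount          = 100
                                       source_currency = 'USD'
                                       target_currency = 'EUR' ).

"configure 'convert_to_base_currency': a separate configure_call is required
cl_abap_testdouble=>configure_call( lo_currency_converter_double
  )->set_parameter( name = 'base_currency'    value = 'EUR'
  )->set_parameter( name = 'base_curr_amount' value = 80 ).
lo_currency_converter_double->convert_to_base_currency( amount          = 100
                                                        source_currency = 'USD' ).
```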
METHOD configuration_exception.
DATA: lo_currency_converter_double TYPE REF TO if_td_currency_converter,
lo_expense_manager TYPE REF TO cl_td_expense_manager,
lv_exp_total_expense TYPE i,
lo_exception TYPE REF TO cx_td_currency_exception.
FIELD-SYMBOLS: <lv_value> TYPE string.
"create test double object
lo_currency_converter_double ?= cl_abap_testdouble=>create( 'if_td_currency_converter' ).
"instantiate the exception object
CREATE OBJECT lo_exception.
"configuration for exception. The specified exception gets raised if amount = -1, source_currency = USD "and target_currency = 'EUR'
cl_abap_testdouble=>configure_call( lo_currency_converter_double )->raise_exception( lo_exception ).
lo_currency_converter_double->convert(
EXPORTING
amount = -1
source_currency = 'USD'
target_currency = 'EUR'
).
ENDMETHOD.
METHOD configuration_event.
DATA: lo_currency_converter_double TYPE REF TO if_td_currency_converter,
lo_expense_manager TYPE REF TO cl_td_expense_manager,
lv_total_expense TYPE i,
lv_exp_total_expense TYPE i,
lt_event_params TYPE abap_parmbind_tab,
ls_event_param TYPE abap_parmbind,
lo_handler TYPE REF TO lcl_event_handler.
FIELD-SYMBOLS: <lv_value> TYPE string.
"create test double object
lo_currency_converter_double ?= cl_abap_testdouble=>create( 'if_td_currency_converter' ).
"configuration for event. 'new_currency_code' event gets raised if the source_currency = INR
ls_event_param-name = 'currency_code'.
CREATE DATA ls_event_param-value TYPE string.
ASSIGN ls_event_param-value->* TO <lv_value>.
<lv_value> = 'INR'.
INSERT ls_event_param INTO TABLE lt_event_params.
cl_abap_testdouble=>configure_call( lo_currency_converter_double )->raise_event( name = 'new_currency_code' parameters = lt_event_params
)->ignore_parameter( 'target_currency'
)->ignore_parameter( 'amount' ).
lo_currency_converter_double->convert(
EXPORTING
amount = 0
source_currency = 'INR'
target_currency = ''
).
ENDMETHOD.
CLASS lcl_event_handler DEFINITION.
PUBLIC SECTION.
DATA: lv_new_currency_code TYPE string.
METHODS handle_new_currency_code FOR EVENT new_currency_code OF if_td_currency_converter IMPORTING currency_code.
ENDCLASS.
CLASS lcl_event_handler IMPLEMENTATION.
METHOD handle_new_currency_code.
lv_new_currency_code = currency_code.
ENDMETHOD.
ENDCLASS.
METHOD configuration_times.
DATA: lo_currency_converter_double TYPE REF TO if_td_currency_converter,
lo_expense_manager TYPE REF TO cl_td_expense_manager,
lv_total_expense TYPE i.
"create test double object
lo_currency_converter_double ?= cl_abap_testdouble=>create( 'if_td_currency_converter' ).
"configuration for returning 80 for 2 times
cl_abap_testdouble=>configure_call( lo_currency_converter_double )->returning( 80 )->times( 2 ).
lo_currency_converter_double->convert(
EXPORTING
amount = 100
source_currency = 'USD'
target_currency = 'EUR'
).
"configuration for returning 40 the next time
cl_abap_testdouble=>configure_call( lo_currency_converter_double )->returning( 40 ).
lo_currency_converter_double->convert(
EXPORTING
amount = 100
source_currency = 'USD'
target_currency = 'EUR'
).
ENDMETHOD.
METHOD verify_interaction.
DATA: lo_currency_converter_double TYPE REF TO if_td_currency_converter,
lo_expense_manager TYPE REF TO cl_td_expense_manager,
lv_total_expense TYPE i,
lv_exp_total_expense TYPE i VALUE 160.
"create test double object
lo_currency_converter_double ?= cl_abap_testdouble=>create( 'if_td_currency_converter' ).
"injecting the test double into the object being tested
CREATE OBJECT lo_expense_manager EXPORTING currency_converter = lo_currency_converter_double.
"add three expenses
lo_expense_manager->add_expense_item(
EXPORTING
description = 'Line item 1'
currency_code = 'USD'
amount = '100'
).
lo_expense_manager->add_expense_item(
EXPORTING
description = 'Line item 2'
currency_code = 'USD'
amount = '100'
).
lo_expense_manager->add_expense_item(
EXPORTING
description = 'Line item 3'
currency_code = 'INR'
amount = '100'
).
"configuration of expected interactions
cl_abap_testdouble=>configure_call( lo_currency_converter_double )->returning( 80 )->and_expect( )->is_called_times( 2 ).
lo_currency_converter_double->convert(
EXPORTING
amount = 100
source_currency = 'USD'
target_currency = 'EUR'
).
"actual method call
lv_total_expense = lo_expense_manager->calculate_total_expense( currency_code = 'EUR' ).
"assertion
cl_abap_unit_assert=>assert_equals( exp = lv_exp_total_expense act = lv_total_expense ).
"verify interactions on testdouble
cl_abap_testdouble=>verify_expectations( lo_currency_converter_double ).
ENDMETHOD.
CLASS lcl_my_matcher DEFINITION.
PUBLIC SECTION.
INTERFACES if_abap_testdouble_matcher.
ENDCLASS.
CLASS lcl_my_matcher IMPLEMENTATION.
METHOD if_abap_testdouble_matcher~matches.
DATA : lv_act_currency_code_data TYPE REF TO data,
lv_conf_currency_code_data TYPE REF TO data.
FIELD-SYMBOLS:
<lv_act_currency> TYPE string,
<lv_conf_currency> TYPE string.
IF method_name EQ 'CONVERT'.
lv_act_currency_code_data = actual_arguments->get_param_importing( 'source_currency' ).
lv_conf_currency_code_data = configured_arguments->get_param_importing( 'source_currency' ).
ASSIGN lv_act_currency_code_data->* TO <lv_act_currency>.
ASSIGN lv_conf_currency_code_data->* TO <lv_conf_currency>.
IF <lv_act_currency> IS ASSIGNED AND <lv_conf_currency> IS ASSIGNED.
IF <lv_act_currency> CP <lv_conf_currency>.
result = abap_true.
ENDIF.
ELSE.
result = abap_false.
ENDIF.
ENDIF.
ENDMETHOD.
ENDCLASS.
Using the custom matcher in a configuration
METHOD custom_matcher.
DATA: lo_currency_converter_double TYPE REF TO if_td_currency_converter,
lo_expense_manager TYPE REF TO cl_td_expense_manager,
lv_total_expense TYPE i,
lv_exp_total_expense TYPE i VALUE 160,
lo_matcher TYPE REF TO lcl_my_matcher.
"create test double object
lo_currency_converter_double ?= cl_abap_testdouble=>create( 'if_td_currency_converter' ).
"configuration
CREATE OBJECT lo_matcher.
cl_abap_testdouble=>configure_call( lo_currency_converter_double )->returning( 80 )->set_matcher( lo_matcher ).
lo_currency_converter_double->convert(
EXPORTING
amount = 100
source_currency = 'USD*'
target_currency = 'EUR'
).
"injecting the test double into the object being tested
CREATE OBJECT lo_expense_manager EXPORTING currency_converter = lo_currency_converter_double.
"add expenses with pattern
lo_expense_manager->add_expense_item(
EXPORTING
description = 'Line item 1'
currency_code = 'USDollar'
amount = '100'
).
lo_expense_manager->add_expense_item(
EXPORTING
description = 'Line item 2'
currency_code = 'USDLR'
amount = '100'
).
"actual method call
lv_total_expense = lo_expense_manager->calculate_total_expense( currency_code = 'EUR' ).
"assertion
cl_abap_unit_assert=>assert_equals( exp = lv_exp_total_expense act = lv_total_expense ).
ENDMETHOD.
CLASS lcl_my_answer DEFINITION.
PUBLIC SECTION.
INTERFACES if_abap_testdouble_answer.
ENDCLASS.
CLASS lcl_my_answer IMPLEMENTATION.
METHOD if_abap_testdouble_answer~answer.
DATA : lv_src_currency_code_data TYPE REF TO data,
lv_tgt_currency_code_data TYPE REF TO data,
lv_amt_data TYPE REF TO data,
lt_event_params TYPE abap_parmbind_tab,
ls_event_param TYPE abap_parmbind.
FIELD-SYMBOLS:
<lv_src_currency_code> TYPE string,
<lv_tgt_currency_code> TYPE string,
<lv_amt> TYPE i,
<lv_value> TYPE string.
IF method_name EQ 'CONVERT'.
lv_src_currency_code_data = arguments->get_param_importing( 'source_currency' ).
lv_tgt_currency_code_data = arguments->get_param_importing( 'target_currency' ).
lv_amt_data = arguments->get_param_importing( 'amount' ).
ASSIGN lv_src_currency_code_data->* TO <lv_src_currency_code>.
ASSIGN lv_tgt_currency_code_data->* TO <lv_tgt_currency_code>.
ASSIGN lv_amt_data->* TO <lv_amt>.
IF <lv_src_currency_code> IS ASSIGNED AND <lv_tgt_currency_code> IS ASSIGNED AND <lv_amt> IS ASSIGNED.
IF <lv_src_currency_code> EQ 'INR' AND <lv_tgt_currency_code> EQ 'EUR'.
result->set_param_returning( <lv_amt> / 80 ).
ENDIF.
ENDIF.
ENDIF.
ENDMETHOD.
ENDCLASS.
Adding the custom answer implementation to a method call configuration
METHOD custom_answer.
DATA: lo_currency_converter_double TYPE REF TO if_td_currency_converter,
lo_expense_manager TYPE REF TO cl_td_expense_manager,
lv_total_expense TYPE i,
lv_exp_total_expense TYPE i VALUE 25,
lo_answer TYPE REF TO lcl_my_answer.
"create test double object
lo_currency_converter_double ?= cl_abap_testdouble=>create( 'if_td_currency_converter' ).
"instantiate answer object
CREATE OBJECT lo_answer.
"configuration
cl_abap_testdouble=>configure_call( lo_currency_converter_double )->ignore_parameter( 'amount' )->set_answer( lo_answer ).
lo_currency_converter_double->convert(
EXPORTING
amount = 0
source_currency = 'INR'
target_currency = 'EUR'
).
"injecting the test double into the object being tested
CREATE OBJECT lo_expense_manager EXPORTING currency_converter = lo_currency_converter_double.
"add the expense line items
lo_expense_manager->add_expense_item(
EXPORTING
description = 'Line item 1'
currency_code = 'INR'
amount = '80'
).
lo_expense_manager->add_expense_item(
EXPORTING
description = 'Line item 2'
currency_code = 'INR'
amount = '240'
).
lo_expense_manager->add_expense_item(
EXPORTING
description = 'Line item 3'
currency_code = 'INR'
amount = '800'
).
lo_expense_manager->add_expense_item(
EXPORTING
description = 'Line item 4'
currency_code = 'INR'
amount = '880'
).
"actual method call
lv_total_expense = lo_expense_manager->calculate_total_expense( currency_code = 'EUR' ).
"assertion
cl_abap_unit_assert=>assert_equals( exp = lv_exp_total_expense act = lv_total_expense ).
ENDMETHOD.
The framework currently supports the creation of test doubles for global interfaces. Support for non-final classes is already under discussion.
Have a look at the framework and feel free to give your feedback or ask questions in the comment section.
The Open Data Protocol (OData) was created to provide a simple, standardized way to interact with data on the web from any platform or device. This interface technology protocol for querying and updating data is meanwhile widely used in development of SAP business applications. Consequently, there is a strong need for corresponding test tools.
As OData services can offer quite complex business functionalities, OData service tests are considered integration tests from the eCATT point of view. With the new OData test automation functions, a process chain of several OData service operations (e.g. create → change → delete) can be tested automatically. Also, required test data can easily be generated via OData calls for further tests of other services or even other application interfaces.
The test focus of the approach presented here is more on straight-forward scenario testing. Data combinatorics, in contrast, should rather be addressed with unit testing on the service provider side.
If you want to know how OData works at SAP in general and how the eCATT OData Test Architecture looks like, click here.
For the testing of OData services via eCATT, the eCATT OData Assistant has been developed. Find out here how you can test your OData services automatically using the assistant.
This document describes the technical background of the OData Test automation with eCATT.
If you are rather looking for hands-on information on how to create automatic OData tests, click here.
As shown in the picture below (click to enlarge), an OData service is delivered by a server, which can be an SAP ABAP system, an SAP HANA system or any other SAP or SAP vendor software system delivering services according to the Open Data Protocol. The client which invokes a service can be anything outside the server, for example a user interface program, a mobile device, a user interface server or another business system. Naturally, also a test program can act as a client against the OData service provider.
In OData, data is made available in data chunks called data entities. One example for an entity is a sales order. The reason to make data available in entities is that they are easier to consume, which is especially important in the mobile area.
Technically, the OData service defines a number of entity sets (table of entities) with their properties (table structure fields) and relations between entities for a specific business. There are operations to act on the entity sets (the CRUD operations create, retrieve, update, delete) and functions which can read or process one or multiple entities.
The definition of the specific business service as OData service can be accessed from a client by retrieving the service metadata document.
The OData protocol relies mainly on the REST principle, on the data transfer via HTTP protocol and on a data format which is either XML-based or JSON.
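To make the protocol concrete, here are a few illustrative request lines for the CRUD operations and the metadata document (the service path and entity set name are invented for this example; SAP Gateway services typically live under /sap/opu/odata/sap/):

```
GET    /sap/opu/odata/sap/ZDEMO_SRV/$metadata                # read the service metadata document
GET    /sap/opu/odata/sap/ZDEMO_SRV/SalesOrderSet            # retrieve all entities of a set
GET    /sap/opu/odata/sap/ZDEMO_SRV/SalesOrderSet('4711')    # retrieve a single entity by key
POST   /sap/opu/odata/sap/ZDEMO_SRV/SalesOrderSet            # create a new entity (payload in XML or JSON)
PUT    /sap/opu/odata/sap/ZDEMO_SRV/SalesOrderSet('4711')    # update an existing entity
DELETE /sap/opu/odata/sap/ZDEMO_SRV/SalesOrderSet('4711')    # delete an entity
```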
The eCATT test support for OData offers the following advantages:
The underlying architecture is as shown in following diagram (click to enlarge), which shows a number of participating objects in two layers:
The (HTTP) calls to the System Under Test to invoke OData services are processed in the OData library of SAP NetWeaver and encapsulated in a general technical layer.
Client access to the business data provided by the service is implemented in ABAP classes, which can be generated by the eCATT OData Assistant tool for each individual OData service. These generated classes include type definitions for the business data (entity types) of the service in one superclass, supporting all entity types and complex types of one entity container. The generation process builds subclasses, one for each entity set, including methods for data access and for the CRUD operations (create, retrieve, update, delete).
Invoking the public methods of the generated classes can result in:
Although the classes generated by the eCATT OData Assistant could be changed and modified by a developer in a development system, you are discouraged from doing so: future changes in the service metadata model might affect and invalidate the generated classes, which would then require using the assistant again to regenerate them. Regeneration and overwriting of existing OData service client classes is possible only for "untouched" classes.
Selected options and user input of the first generation are currently not persisted, so the user has to provide the entries again at the next use of the assistant. It is advisable to use the same settings, especially the same "name proposal", to generate similar classes, which will match any already existing test classes.
Building a test automation of a test scenario against an OData service from the eCATT perspective means to invoke the service operations in a given logical order using defined test data. This can be achieved by calling the methods of the above described OData service access classes.
The frame to carry the test algorithm can be implemented in any ABAP module or eCATT test script, which will invoke the methods of the generated OData service access classes.
The eCATT OData Assistant provides the option to also generate integration tests in a global ABAP Unit class which builds one set of possible tests. These tests can use test data stored in eCATT test data containers, which are also generated and filled with test data during the Assistant’s test generation process.
Of course, the test developer can modify and enhance the generated test modules according to his or her test project's needs. The generated ABAP Unit tests can be considered a showcase of how to call the service access classes. You are encouraged to enhance the generated methods, implement more test methods and even completely new classes calling the service access classes to enrich your test scenarios.
If you want to know how the automatic testing of OData Services functions practically, click here.
You can easily create automated OData integration tests with the help of the eCATT OData Assistant (transaction SECATT_ODATA).
SECATT_ODATA is available in software component SAP_BASIS release 7.40, 7.41 and 7.60.
It is recommended to use the highest possible support package, since the tool is in a process of continuous improvement and completion.
For information on the technical landscape of the automated OData Service testing, click here.
The eCATT OData Assistant leads you through three major steps:
The eCATT OData Assistant (transaction SECATT_ODATA) starts with following screen (click to enlarge the picture):
To load the service meta data, proceed as follows:
If you want to display the meta data, choose Display Service. The xml service meta data will be displayed.
Choose the Continue button or the Create Access Classes for Service step in the step path to proceed to the next step.
The screen of this step displays the name of the main class and the corresponding service entities (sub-classes) arranged in a tree structure.
For each entity set, you can see its properties and available CRUD-operations (create, retrieve, update, delete). For each property and its datatype in EDM model, the screen shows the associated ABAP data type.
On the level of entity container, you find functions and actions provided by the service.
To generate the desired access class(es), proceed as follows:
The log window in the lower part of the screen informs you about the status and success of the generation process. It is a business application log, which is stored permanently in the system and can be accessed by choosing Goto -> Application log -> Select and Display.
Each class name node in the tree provides a context menu option to navigate to the generated class in transaction SE80.
The class contains methods for calling CRUD operations on entities and class-based type definitions for EntityTypes, ComplexTypes and key structures.
These ABAP types allow you to access the service data in ABAP variables, which is useful for building integration tests in ABAP code or eCATT test scripts.
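As a rough sketch of what calling such a generated service access class can look like in ABAP code (the class, type, and method names below are invented for illustration; the actual names depend on your service and on what the Assistant generates in your system):

```abap
" Sketch only: ZCL_GW_SALESORDER_SRV and its types/methods are assumed names.
DATA(lo_service) = NEW zcl_gw_salesorder_srv( ).

" Key structure and entity type are generated into the class for each EntityType
DATA ls_key    TYPE zcl_gw_salesorder_srv=>ty_s_salesorder_key.
DATA ls_entity TYPE zcl_gw_salesorder_srv=>ty_s_salesorder.

ls_key-salesorder_id = '0000004711'.

" Retrieve one entity via the generated CRUD method
lo_service->salesorder_retrieve(
  EXPORTING is_key    = ls_key
  IMPORTING es_entity = ls_entity ).
```

The exact method signatures (importing/exporting parameters, error handling) are determined by the generator; navigate to the class in SE80 to see what was actually created.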
Choose the Continue button or the Create Tests step in the step path to proceed to the next step.
The Create Tests screen provides options to create test data containers and global ABAP classes containing ABAP unit test methods as test implementation.
Generated test classes will use the Service Access Classes to call the service.
You can choose names, the package assignment, and the transport request number. You may change test class names, test class descriptions, test data container (TDC) names, and TDC descriptions by either clicking them in the tree or choosing the corresponding option from the context menu.
The use of test data in TDCs is optional. You can simply deselect TDCs if not required.
The creation of test classes is based on a template which includes test methods for:
The creation of test classes is flexible: test methods are created only for those CRUD operations that were selected during the creation of the service access class.
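To illustrate the general shape of such a generated test, here is a hedged sketch of a retrieve test method in ABAP Unit style. The real generated code differs; class, type, and method names are assumptions, and only the ABAP Unit constructs (FOR TESTING, cl_abap_unit_assert) are standard:

```abap
" Sketch of a retrieve test; ZCL_GW_SALESORDER_SRV and its members are assumed.
CLASS ltc_salesorder_tests DEFINITION FOR TESTING
  DURATION SHORT RISK LEVEL HARMLESS.
  PRIVATE SECTION.
    DATA mo_service TYPE REF TO zcl_gw_salesorder_srv.
    METHODS setup.
    METHODS retrieve_salesorder FOR TESTING.
ENDCLASS.

CLASS ltc_salesorder_tests IMPLEMENTATION.
  METHOD setup.
    " Runs before every test method
    mo_service = NEW #( ).
  ENDMETHOD.

  METHOD retrieve_salesorder.
    DATA ls_entity TYPE zcl_gw_salesorder_srv=>ty_s_salesorder.
    mo_service->salesorder_retrieve(
      EXPORTING is_key    = VALUE #( salesorder_id = '0000004711' )
      IMPORTING es_entity = ls_entity ).
    cl_abap_unit_assert=>assert_not_initial(
      act = ls_entity
      msg = 'Retrieve returned no data' ).
  ENDMETHOD.
ENDCLASS.
```

The Assistant generates global test classes rather than local ones; the sketch above only conveys the test method pattern, not the generated layout.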
To enhance performance, you can also enter a value in the Entries field or via Test Scope -> Maximum Number of Data Sets. The value stands for the maximum number of data sets to be read. If you enter 0 or no value, all data is read.
Choose Generate to create the test classes and TDCs. On the screen displayed next, the application log again shows the class generation status and related messages.
The generated unit tests can make use of the test data container. Test data will serve as reference data in retrieve scenarios and will be used as data templates for create scenarios.
The TDCs will contain a parameter for the assigned entity set. The parameter is typed to the ABAP structure which is related to the entity set. Remember that these ABAP structures are part of the generated Service Access Classes.
In the variants part of the TDC, all entities (records) of this entity set that were available in the service provider system at the time of generation are stored.
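A test can read such reference data from a TDC at runtime via the eCATT TDC API. The following sketch assumes a container named ZTDC_SALESORDER with a parameter SALESORDER typed to the generated entity structure; all names, the version number, and the variant name are assumptions for your own setup:

```abap
" Sketch: reading reference data from a generated TDC at test runtime.
DATA ls_reference TYPE zcl_gw_salesorder_srv=>ty_s_salesorder.  " assumed type

TRY.
    DATA(lo_tdc) = cl_apl_ecatt_tdc_api=>get_instance(
      i_testdatacontainer         = 'ZTDC_SALESORDER'
      i_testdatacontainer_version = 1 ).

    lo_tdc->get_value(
      EXPORTING i_param_name   = 'SALESORDER'     " parameter typed to the entity structure
                i_variant_name = 'ECATTDEFAULT'   " one of the generated variants
      CHANGING  e_param_value  = ls_reference ).
  CATCH cx_ecatt_tdc_access INTO DATA(lx_tdc).
    " container, parameter, or variant not found - fail or skip the test here
ENDTRY.
```

Reading the data through the API rather than hard-coding it keeps the reference data maintainable in transaction SECATT without touching the test coding.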
In the generated classes screen, again the context menu of the class name node provides the option to navigate into the generated class in transaction SE80, where the test’s coding can be modified and the tests can be started.
The generated test classes should be considered proposals provided by the assistant; it is nearly impossible to provide test templates that fit every test project's needs.
Responsible developers and testers are encouraged to further refine, extend, and enrich these test methods and to add new tests to the test classes.
Be warned that repeated test class generation overwrites classes and coding that the test project may have modified.
In contrast to the service access classes, which may need regeneration whenever the service metadata changes, the test classes may seldom or never need to be generated a second time.
The method SETUP is used to prepare each test run. Here, you can overwrite the settings for the HTTP connection addressing the service provider and the name of the TDC providing test data.
During class creation, the connection data used in step 1 of the assistant is written into the constructor parameters of the service access classes. Nevertheless, the test classes, as callers of the service access classes, can overwrite these and provide their own connection options. Note that it is advisable to use HTTP destinations from transaction SM59 to store user credentials in a safe way and to benefit from the additional options and settings of these destination objects.
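A SETUP method that redirects the connection to an SM59 destination might look like the following fragment. The constructor parameter name, the destination name, and the attribute names are assumptions; check the signature of your generated service access class:

```abap
" Sketch: overriding the generated connection data in SETUP.
METHOD setup.
  " Point the service access class at an HTTP destination maintained in SM59,
  " so credentials stay out of the coding (parameter name is assumed).
  mo_service = NEW zcl_gw_salesorder_srv(
    iv_rfc_destination = 'MY_GATEWAY_DEST' ).

  " Name of the TDC supplying the test data (assumed attribute and name)
  mv_tdc_name = 'ZTDC_SALESORDER'.
ENDMETHOD.
```

Because SETUP runs before every test method, all tests of the class then use the same destination and test data source without repeating this wiring.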
You can run the tests via the ABAP Unit framework, transaction SUT, using the integrated ABAP Unit runner, with or without debug mode.
The scripting language of the test automation tool SAP eCATT (transaction SECATT) includes commands for calling ABAP methods.
Thus, it is easy to use the generated Service Access Classes from eCATT test scripts as well. Such integration options allow you to combine calls of OData services with other test automation capabilities on SAP business systems.
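Inside an eCATT test script, an inline ABAP block is one way to call a generated service access class. The sketch below assumes the same invented class names as before and an eCATT script parameter V_ORDER_ID; the exact ABAP syntax supported inside eCATT scripts depends on your release:

```abap
" Sketch of an inline ABAP block in an eCATT test script (transaction SECATT).
ABAP.
  DATA lo_service TYPE REF TO zcl_gw_salesorder_srv.  " assumed generated class
  DATA ls_entity  TYPE zcl_gw_salesorder_srv=>ty_s_salesorder.

  CREATE OBJECT lo_service.
  " V_ORDER_ID is an eCATT script parameter usable inside the ABAP block
  lo_service->salesorder_retrieve(
    EXPORTING is_key    = VALUE #( salesorder_id = v_order_id )
    IMPORTING es_entity = ls_entity ).
ENDABAP.
```

From there, the entity data can be checked with eCATT commands or passed on to further script steps, combining the OData call with the rest of the script's automation.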