Project Configuration

Creating Projects

You can create Projects on the Test Assets page by clicking the Create button and selecting the Project menu item.

You must complete all the fields on the General tab of the Project screen and add at least one test target on the Test Targets tab to run your test. Optionally, you can perform additional customization and fine-tuning on the Runtime Options tab. The remainder of this page discusses these tabs in more detail.

General Configuration

The General tab allows for setting up the basics of the Project, such as:

  1. Selecting the Project Type
  2. Selecting which Test Flow Template the Engine should use
  3. Naming the Project
  4. Providing a short description
  5. Selecting an Engine to run the test
  6. Selecting the transport mechanism used to deliver the test cases

Each Project must have an Engine assigned to run the test. It is crucial to select an Engine that can interact with the target under test; for example, one that is on the same network as the target, has internet access, or is physically connected to the target.

Once an Engine is selected, another field becomes visible where you can select the appropriate Driver. The Drivers listed depend on which Drivers the Engine has deployed. By default, only the built-in Drivers are available. Additional Drivers can be created using the SDK and deployed to registered Engines.
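Conceptually, a custom Driver is a transport adapter: it connects to the target, delivers generated test cases, and reads responses back for the Engine. The sketch below shows that general shape over plain TCP; the class and method names are hypothetical illustrations, not the actual SDK interface.

```python
# Hypothetical sketch of a custom transport Driver. The class shape and
# method names (connect, send, receive, disconnect) are illustrative
# assumptions, not the actual GUARDARA SDK API.
import socket


class TcpTransportDriver:
    """Delivers test cases to a target over a raw TCP connection."""

    def __init__(self, host: str, port: int, timeout: float = 5.0):
        self.host = host
        self.port = port
        self.timeout = timeout
        self.sock = None

    def connect(self):
        # Establish the transport channel to the target.
        self.sock = socket.create_connection((self.host, self.port), self.timeout)

    def send(self, test_case: bytes):
        # Deliver a single generated test case to the target.
        self.sock.sendall(test_case)

    def receive(self, size: int = 4096) -> bytes:
        # Read the target's response so it can be evaluated.
        return self.sock.recv(size)

    def disconnect(self):
        if self.sock:
            self.sock.close()
            self.sock = None
```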

Adding Test Targets

To run a test, you must configure at least one Test Target on the Test Targets tab by clicking the Add New Target button. Provide a name for the Test Target using the Test Target Name input field in the pop-up window and click the Add button.

Adding multiple Test Targets to the Project allows for testing multiple targets simultaneously or distributing test cases between multiple targets to shorten the overall test run time.

Note the dropdown menu that appears in the top-right corner of the screen after registering the first Test Target. This dropdown allows you to select the Test Target to configure.

The configuration options for the Test Target are organized into panels that can be expanded or collapsed. The two core configuration sections are the Connection Settings and Target Monitoring. Additional sections may appear depending on how the other assets of the Project were implemented. For example, authentication-related configuration options and forms to customize Session Variables may show up.

Connection Settings

Each registered test target has a section dedicated to driver-specific configuration options that describe how GUARDARA can reach the target. For example, a Network driver will allow specifying the IP address of the server running the target service and the port number the targeted service is listening on.
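Before starting a test, it can be worth confirming that the Engine host can actually reach the endpoint configured here. A minimal sketch of such a check for a TCP service, using a placeholder address:

```python
# Minimal reachability check for a TCP target, mirroring the kind of
# host/port pair a Network driver's Connection Settings would hold.
# The address below is a placeholder.
import socket

host, port = "192.0.2.10", 8080  # placeholder target endpoint

try:
    with socket.create_connection((host, port), timeout=3):
        print(f"{host}:{port} is reachable")
except OSError as exc:
    print(f"cannot reach {host}:{port}: {exc}")
```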

Authentication Variables

You can customize the authentication credentials on a per-target basis. This section of the configuration is only available if there is at least one Authenticate Action in the Test Flow Template. Please note that the Authenticate Action is only available when testing web applications and web services.

Session Variables

You can customize the Session Variables within the collection on a per-target basis. The relevant sections and fields are only visible if any project assets have a Variable Collection assigned. This feature helps tune the test case according to the target’s requirements.
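To illustrate the idea, each target effectively starts from the collection's defaults and applies its own overrides on top. The variable names below are made up for illustration:

```python
# Conceptual sketch of per-target Session Variable overrides: each target
# starts from the collection's defaults and applies its own values on top.
collection_defaults = {"api_version": "v2", "tenant": "default", "retries": "3"}

per_target_overrides = {
    "target-a": {"tenant": "acme"},
    "target-b": {"api_version": "v1", "retries": "5"},
}

for target, overrides in per_target_overrides.items():
    effective = {**collection_defaults, **overrides}
    print(target, effective)
```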

Target Monitoring

It is possible to configure Monitors for each registered test target. Monitors can improve the anomaly detection rate and increase the accuracy of the reports. You can configure multiple Monitors to monitor different aspects of a test target. For example, one Monitor could check whether the target has crashed, while another watches the log files for errors. You can use the SDK to implement Monitors; example Monitors are also available on GitLab to help with development.

Without Monitors, the Engine relies on its other issue detection mechanisms.
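As a rough illustration, a Monitor typically answers two questions: is the target still healthy, and can it be restarted on request? The sketch below shows a process-liveness check in that spirit; the class shape and method names (check, restart) are assumptions, not the actual GUARDARA SDK interface.

```python
# Hypothetical Monitor sketch: a process-liveness check that can also
# restart the target on request. Method names are illustrative
# assumptions, not the actual GUARDARA SDK interface.
import subprocess


class ProcessMonitor:
    def __init__(self, start_command: list[str]):
        self.start_command = start_command
        self.process = None

    def restart(self):
        # Restart the target under test when requested.
        if self.process:
            self.process.kill()
        self.process = subprocess.Popen(self.start_command)

    def check(self) -> bool:
        # Report True while the target process is still alive; a False
        # result would be treated as an anomaly (e.g. a crash).
        return self.process is not None and self.process.poll() is None
```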

Runtime Options

The runtime options are organised into three groups: generic, test generation, and analysis options. These are discussed below.

Generic Options

The generic runtime options are described below.

Start the test in paused state: When enabled, the Engine initialises the test runtime, including firing up all Monitors, then immediately pauses the test execution. No test cases are delivered until you click the Play button to start testing.

Delete test once completed: When enabled, the test is automatically deleted once it completes.

Distribute test cases: This option is only visible when working with multiple test targets. If enabled, test cases are distributed across the test targets, making the overall test run time shorter. The more targets you define in this mode, the faster the test completes.

Test Generation Options

The test generation options are described below.

Skip: (Optional) The number of test cases to skip when starting the test.

Count: (Optional) The total number of test cases to perform during the test run.

Times to Execute: (Optional) The number of times to execute the range of test cases configured using the Skip and Count options. This option is only available if either Skip or Count is specified.

Wait Time: (Optional) The number of milliseconds to wait between range executions. This is only considered if Times to Execute is specified (not 0).
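To make the interplay of these options concrete, the sketch below shows one plausible reading of them: Skip=100 and Count=50 select test cases 100 through 149, Times to Execute replays that range, and Wait Time pauses between rounds. The exact semantics are defined by the Engine, so treat the values and loop as illustrative.

```python
# Illustration of how Skip, Count, Times to Execute and Wait Time could
# combine: the same range of test cases is replayed a number of times,
# pausing between rounds. Values are arbitrary examples.
import time

skip, count = 100, 50          # run test cases 100..149
times_to_execute = 2           # replay the range twice
wait_time_ms = 500             # pause between range executions

for round_no in range(times_to_execute):
    for case_index in range(skip, skip + count):
        pass  # the Engine would generate and deliver test case `case_index`
    if round_no < times_to_execute - 1:
        time.sleep(wait_time_ms / 1000)
```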

Analysis Options

The analysis options are described below.

Analyze findings during testing: GUARDARA automatically analyses each finding to determine reproducibility. The analysis also improves the accuracy of the findings in the report by narrowing down the exact test cases required to trigger the detected issue and only including those under the “Flow” tab of the reported finding. Depending on the test target, the analysis can take significant time. If you are willing to sacrifice report quality for a faster test run, the Project configuration allows disabling the automatic finding analysis feature.

Sanity Check: This option is available when the Project Type is set to Web Service. When enabled, GUARDARA executes the Test Flow once, without any modifications, to verify that the web service is accessible and functional. The sanity check detects client-side (for example, test configuration) or server-side issues that could negatively impact the dynamic testing of the application. If GUARDARA detects any issues during this phase, the test terminates, and you must take corrective action before rerunning the test. This option is enabled by default when testing web services via the OpenAPI import feature.

Authentication Check: This option is available when the Project Type is set to Web Service. When enabled, GUARDARA executes the Test Flow once, without any modification, to record the web service's response as a baseline. It then executes the Test Flow again, skipping the Authenticate action. By comparing the observed behaviour with the recorded baseline, GUARDARA can detect whether the authentication mechanism is broken. Please note that even when the feature is enabled, the check is only executed if there is an Authenticate action within the Test Flow.

Target Restart Soft Timeout: The maximum time in seconds to wait for the Monitor to restart the target during the issue analysis phase before warning the user about a potential issue with the target restart process. This option is only available if at least one of the configured Monitors can restart the target under test upon request.

Target Restart Hard Timeout: The maximum time in seconds to wait for the Monitor to restart the target during the issue analysis phase. If the configured Monitor cannot restart the target within this time, the Engine terminates the test. This option is only available if at least one configured Monitor can restart the target under test upon request.

Enable target performance baselining and monitoring: When enabled, GUARDARA performs a sequence of test rounds without any changes to the Messages to record how long each action takes under normal circumstances. The number of rounds can be controlled using the Baseline Iterations option. Then, during the actual testing, GUARDARA measures how long each action takes, compares it to the recorded baseline, and creates a finding in the report if the action took longer than the threshold defined by the Report Threshold option. Set the Report Threshold to a very high number to disable this reporting.
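As a rough illustration of the baselining idea: time each action over a few unmodified runs, average the results, then flag actions that exceed the baseline during testing. The exact threshold semantics (for example, ratio versus absolute time) are determined by GUARDARA, so the numbers and the multiplier interpretation below are assumptions.

```python
# Sketch of performance baselining: average per-action timings from a few
# baseline iterations, then report actions that exceed the baseline by a
# threshold during testing. All values are illustrative.
baseline_runs = [  # seconds per action, from e.g. 3 baseline iterations
    {"connect": 0.05, "send_request": 0.20},
    {"connect": 0.06, "send_request": 0.22},
    {"connect": 0.05, "send_request": 0.21},
]

baseline = {
    action: sum(run[action] for run in baseline_runs) / len(baseline_runs)
    for action in baseline_runs[0]
}

report_threshold = 3.0  # assumed meaning: report if an action takes 3x baseline

observed = {"connect": 0.05, "send_request": 0.95}  # timings during testing
for action, duration in observed.items():
    if duration > baseline[action] * report_threshold:
        print(f"finding: {action} took {duration:.2f}s "
              f"(baseline {baseline[action]:.2f}s)")
```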