Architecture

CD System

Network

You can command and query a Server through its public API. The network provides communication between Servers via each Server’s public API.

Server

Agent

Agents are identified by tag, and we need at least one of each of the following:

  • Build – this is to build the application

  • Deploy – this is to deploy applications

  • DeploySecure – this is to deploy applications to a secure domain

  • Test – this is to run tests against the applications

  • TestSecure – this is to run tests against the applications in the secure domain

  • Manager – this is to push or pull updates for the local CD installation

  • ManagerSecure – this is to push or pull secure updates for the local CD installation

The main difference between these tags is the service user account each agent runs under: local, domain, or secure domain. All of the tags can be covered by just three physical agents, each tagged with the proper agent identities (see the sketch after this list):

  • LocalAgent – Manager (a local user account)

  • DomainAgent – Build, Deploy, Test (a domain user account)

  • SecureDomainAgent – DeploySecure, TestSecure, ManagerSecure (this is a secure domain user account created as a local user account)
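
As a rough sketch of the provisioning above, the tag-to-agent assignment could be captured in configuration that the CD server reads when an agent registers. The enum, agent names, and dictionary shape below are assumptions for illustration, not an existing API.

```csharp
using System.Collections.Generic;

// Hypothetical tag names and agent names, mirroring the lists above.
public enum AgentTag { Build, Deploy, DeploySecure, Test, TestSecure, Manager, ManagerSecure }

public static class AgentProvisioning
{
    // Three physical agents and the agent identities (tags) assigned to each.
    public static readonly IReadOnlyDictionary<string, AgentTag[]> Agents =
        new Dictionary<string, AgentTag[]>
        {
            ["LocalAgent"]        = new[] { AgentTag.Manager },
            ["DomainAgent"]       = new[] { AgentTag.Build, AgentTag.Deploy, AgentTag.Test },
            ["SecureDomainAgent"] = new[] { AgentTag.DeploySecure, AgentTag.TestSecure, AgentTag.ManagerSecure },
        };
}
```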

Repository

Source control, an artifact repository, a file system, a database, a web service… any place to store and retrieve the artifacts produced and needed by the CD system.

Monitoring

Infrastructure

Environments

We need a test version of the CD environment to cut down on false-alarm alerts.

Installation

We should be able to spin up a CD server or agent with the click of a button. CD infrastructure configuration should be stored in source control. From a base virtual server instance we should be able to install and configure a running CD server or agent environment and join it to the CD network.
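
As a rough illustration only, the configuration stored in source control might describe each node in a shape like the following; every field name here is a hypothetical assumption, not an existing schema.

```csharp
// Hypothetical shape of the CD infrastructure configuration kept in source control.
// The idea: a base virtual server plus one of these records is enough to install,
// configure, and join a CD server or agent to the CD network.
public sealed class CdNodeDefinition
{
    public string Name { get; set; }           // e.g. "DomainAgent01" (illustrative)
    public string Role { get; set; }           // "Server" or "Agent"
    public string[] Tags { get; set; }         // agent identities, e.g. Build, Deploy, Test
    public string ServiceAccount { get; set; } // local, domain, or secure domain account
    public string NetworkUrl { get; set; }     // CD network endpoint to join via the public API
}
```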

CD Framework

Build Framework

Deploy Framework

Test Framework

The Test Framework is composed of the following objects, each listed with the objects it depends on:

  • Application Environment

  • AppDriver – Environment (This is where Selenium is implemented and exposed. Links TestPipe with the Application Under Test.)

  • Screen – AppDriver, Screen Controls (Links the Screen with the AppDriver.)

  • Screen Control – Screen, AppDriver

  • Test Step – Screen, Screen Control, Test Feature, Test Scenario, Test Session (Links the Test with the Screen and AppDriver.)

  • Test Scenario – Test Steps, Test Assert, Test Result, AppDriver

  • Test Feature – Test Scenarios, AppDriver

  • Test Suite – Test Features

  • Test Session – Test Cache, Test Suite, AppDriver
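
A sketch of how that object graph could be expressed as interfaces. The interface and member names below are assumptions drawn from the list above, not the actual TestPipe API.

```csharp
using System.Collections.Generic;

// Illustrative shape of the Test Framework object graph; names are assumptions.
public interface IEnvironment { }
public interface ITestCache { }

public interface IAppDriver                       // wraps Selenium; links TestPipe with the application under test
{
    IEnvironment Environment { get; }
}

public interface IScreenControl
{
    IScreen Screen { get; }
    IAppDriver AppDriver { get; }
}

public interface IScreen                          // links the Screen with the AppDriver
{
    IAppDriver AppDriver { get; }
    IReadOnlyList<IScreenControl> Controls { get; }
}

public interface ITestStep                        // links the Test with the Screen and AppDriver
{
    IScreen Screen { get; }
    IScreenControl Control { get; }
}

public interface ITestScenario { IReadOnlyList<ITestStep> Steps { get; } IAppDriver AppDriver { get; } }
public interface ITestFeature  { IReadOnlyList<ITestScenario> Scenarios { get; } IAppDriver AppDriver { get; } }
public interface ITestSuite    { IReadOnlyList<ITestFeature> Features { get; } }
public interface ITestSession  { ITestCache Cache { get; } ITestSuite Suite { get; } IAppDriver AppDriver { get; } }
```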

Test Runner

An automated unit test framework such as NUnit, JUnit, Jasmine, or Pester.

Test Driver – Test Runner, Test Session

A parallel test orchestrator, like Tarsvin or SpecFlow, that links a Test Framework with a Test Runner. It takes commands from the Test Framework and forwards them to the Test Runner. It can talk to any Test Runner that implements ITestRunner, and it can manage Test Runners: create, subscribe, start, ping, query, stop, and remove.
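
A minimal sketch of what that contract and management surface might look like; the members of ITestRunner and the Test Driver below are inferred from the sentence above and are assumptions, not a published API of Tarsvin or SpecFlow.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical contract a runner adapter (NUnit, JUnit, Jasmine, Pester, ...) would
// implement so the Test Driver can drive it.
public interface ITestRunner
{
    void Start(string suite);                 // begin a run of the named suite
    bool Ping();                              // liveness check
    string Query();                           // current status or results
    void Stop();                              // stop the current run
    event Action<string> ResultPublished;     // what a subscriber listens to
}

// The Test Driver manages runners on behalf of the Test Framework.
public class TestDriver
{
    private readonly Dictionary<Guid, ITestRunner> _runners = new Dictionary<Guid, ITestRunner>();

    public Guid Create(ITestRunner runner)                       // create (register) a runner
    {
        var id = Guid.NewGuid();
        _runners[id] = runner;
        return id;
    }

    public void Subscribe(Guid id, Action<string> onResult) => _runners[id].ResultPublished += onResult;
    public void Start(Guid id, string suite)                 => _runners[id].Start(suite);
    public bool Ping(Guid id)                                => _runners[id].Ping();
    public string Query(Guid id)                             => _runners[id].Query();
    public void Stop(Guid id)                                => _runners[id].Stop();
    public void Remove(Guid id)                              => _runners.Remove(id);
}
```

The design benefit is that adding a new runner means writing one adapter behind the interface rather than changing the Test Framework.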

Test Objects

Each object can have the following (see the sketch after this list):

  • State – Cached or persisted state

  • Logging – A way to pass notices to a console, file or persisted store

  • Reporting – A way to broadcast health and status messages
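
One way these three capabilities could be composed by every test object; a sketch with assumed names, not an existing base class.

```csharp
// Sketch of the three capabilities as small interfaces a test object could compose.
public interface IStateStore   { void Save(string key, object value); object Load(string key); } // cached or persisted state
public interface ITestLogger   { void Log(string notice); }                                      // console, file, or persisted store
public interface ITestReporter { void Report(string message); }                                  // broadcast health and status

public abstract class TestObject
{
    protected TestObject(IStateStore state, ITestLogger logger, ITestReporter reporter)
    {
        State = state;
        Logger = logger;
        Reporter = reporter;
    }

    public IStateStore State { get; }
    public ITestLogger Logger { get; }
    public ITestReporter Reporter { get; }
}
```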

Test Run Tags

Main

If a test is not tagged as Ignore, Manual, Failure, or Flaky, then it is considered Main. There isn’t an actual Main tag.

Ignore

This causes a test to be ignored in the test run.

Manual

This causes a test to be ignored in the test run and reported as a manual test.

Failure

When a test fails it is tagged as Failure, which causes it to be run in the Failure pipeline. This is done to keep the main test pipelines green while keeping failures visible and isolated so that they can be fixed.

Flaky

If a test in the Failure pipeline passes, it is tagged as Flaky, which causes it to be run in the Flaky pipeline. Failures are ignored in the main test pipeline, but the Failure pipeline provides a gate that the application must pass to get to the next stage. The application will not pass if it has Failures. Flaky tests are ignored in the main pipeline and are not used to gate the application to the next stage. They continue to run on a schedule separate from the main pipeline, and they should not be allowed to persist in a Flaky state.

Tests will continue to run in the Failure and Flaky pipelines until they are consistently passing. Tests in the Failure pipeline have the highest priority for fixing. Flaky tests can be prioritized over Failure tests if the test has a high business priority.

Tests can be automatically tagged by the Test Framework. Tests are driven by text files, and the test framework already knows how to update these files; we just need a way to merge the changes back to the main test repository. Do we merge on every failure or queue the failures for a batch merge? How do we handle merge conflicts? If the merge causes the main pipeline to run again, say it runs on every commit, then how do we limit the mainline run to only process failures and flakes when the commit contains only failures and flakes? We don’t want reruns to cause a bottleneck in the system.
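
To make the re-tagging rules concrete, here is a sketch of the transitions described above. Treating “consistently passing” as the condition that returns a Flaky test to Main is an inference, not something stated explicitly.

```csharp
public enum RunTag { Main, Ignore, Manual, Failure, Flaky }

public static class TagTransitions
{
    // Sketch of the automatic re-tagging rules. Main means "no run tag";
    // Ignore and Manual are never retagged automatically.
    public static RunTag Next(RunTag current, bool passed, bool consistentlyPassing = false)
    {
        switch (current)
        {
            case RunTag.Main:    return passed ? RunTag.Main : RunTag.Failure;            // a failure leaves the main pipeline
            case RunTag.Failure: return passed ? RunTag.Flaky : RunTag.Failure;           // a pass in the Failure pipeline marks the test Flaky
            case RunTag.Flaky:   return consistentlyPassing ? RunTag.Main : RunTag.Flaky; // only consistent passes bring it back to Main (assumed)
            default:             return current;
        }
    }
}
```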

Test Speed Tags

This could be configurable.

  • Slow

  • Medium

  • Fast

The default is Fast, which is not an actual tag; the absence of Slow or Medium indicates Fast. We can have pipelines set up to optimize based on these tags. Main could run only Fast tests and ignore Medium and Slow tests. There could then be a pipeline for Medium tests and one for Slow tests that run on their own schedules and may or may not be used as gates to the next stage.
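
For example, a pipeline could select tests by the presence or absence of these tags. The filter below is a sketch that assumes each test exposes its tags as plain strings; it only illustrates the “Fast by default” rule.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class SpeedFilter
{
    // Fast is the default: a test tagged with neither Slow nor Medium is treated as Fast.
    public static bool IsFast(IEnumerable<string> tags) =>
        !tags.Contains("Slow") && !tags.Contains("Medium");

    // The tests the main pipeline would run (Fast only); the Medium and Slow pipelines
    // would use the complementary filters on their own schedules.
    public static IEnumerable<T> FastOnly<T>(IEnumerable<T> tests, Func<T, IEnumerable<string>> tagsOf) =>
        tests.Where(t => IsFast(tagsOf(t)));
}
```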

Test Priority Tags

Tests can be tagged with a priority (these could be configurable):

  • Immediate

  • High

  • Medium

  • Low

  • Warning

Priority Matrix

This could be configurable.

Run Tag    Priority Tag
Main       Immediate
Main       High
Failure    Immediate
Main       Medium
Failure    High
Flaky      Immediate
Main       Low
Failure    Medium
Flaky      High
Failure    Low
Flaky      Medium
Flaky      Low
Main       Warning
Failure    Warning
Flaky      Warning
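
Reading the matrix as a fixing order, it could be encoded as an ordered list of (Run Tag, Priority Tag) pairs worked through top to bottom. Treating row order as precedence is an assumption; the sketch below only illustrates the lookup.

```csharp
using System.Collections.Generic;

public static class PriorityMatrix
{
    // (Run Tag, Priority Tag) pairs in the order listed in the matrix above,
    // read here as a fixing order from most to least urgent (an assumption).
    public static readonly IReadOnlyList<(string RunTag, string PriorityTag)> Order =
        new (string RunTag, string PriorityTag)[]
        {
            ("Main", "Immediate"), ("Main", "High"),       ("Failure", "Immediate"),
            ("Main", "Medium"),    ("Failure", "High"),    ("Flaky", "Immediate"),
            ("Main", "Low"),       ("Failure", "Medium"),  ("Flaky", "High"),
            ("Failure", "Low"),    ("Flaky", "Medium"),    ("Flaky", "Low"),
            ("Main", "Warning"),   ("Failure", "Warning"), ("Flaky", "Warning"),
        };

    // Lower rank means fix first; combinations not in the matrix sort last.
    public static int Rank(string runTag, string priorityTag)
    {
        for (var i = 0; i < Order.Count; i++)
            if (Order[i].RunTag == runTag && Order[i].PriorityTag == priorityTag)
                return i;
        return int.MaxValue;
    }
}
```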
