This guide covers concepts and practical details that developers need in order to use the Spring Boot runtime. It provides information about designing a Spring Boot application that is deployed as a Linux container on OpenShift.

1. Runtime Details

What Spring Boot Does

Spring Boot lets you create opinionated Spring-based stand-alone applications. See Additional Resources for a list of documents about Spring Boot.
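
For illustration, the following is a minimal sketch of such a stand-alone application; the package and class names are hypothetical and not taken from any booster.

    package com.example.demo;

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;

    // @SpringBootApplication enables auto-configuration and component scanning,
    // which is what makes the application opinionated and stand-alone.
    @SpringBootApplication
    public class DemoApplication {

        public static void main(String[] args) {
            // Starts the Spring context and the embedded servlet container.
            SpringApplication.run(DemoApplication.class, args);
        }
    }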

The Spring Boot runtime gives you the advantages and convenience of the OpenShift platform:

  • rolling updates

  • service discovery

  • canary deployments

  • ways to implement common microservice patterns: externalized configuration, health check, circuit breaker, and failover

1.1. Spring Boot Tested and Verified Version

The Spring Boot runtime version 1.5.8.RELEASE is tested and verified to run with the Embedded Apache Tomcat Container on OpenShift. When used with Spring Boot, this embedded container, as well as other components such as the Java container image, are part of a Red Hat subscription.

For a complete list of Spring Boot components provided as part of this release, see the Release Notes.

1.2. Features and Frameworks Summary

This guide covers the design of modern applications using Spring Boot. These concepts support developing web or WebSocket applications using either an HTTP connector or a non-blocking HTTP connector. The applications can be packaged and deployed without modification or updated to use cloud native features on OpenShift.

The features in the table below are available as a collection of missions that run on OpenShift. Some features are native to Kubernetes; others are available from Spring Cloud Kubernetes. Features such as Actuator are available directly in Spring Boot.

Table 1. Features and Frameworks Summary

Circuit Breaker

  • Problem Addressed: Switches between services and continues to process incoming requests without interruption in case of service failure.

  • Cloud Native: Yes - using Kubernetes API

  • Framework: Spring Cloud Kubernetes - Hystrix

Health Check

  • Problem Addressed: Checks readiness and liveness of the service. Service restarts automatically if probing fails.

  • Cloud Native: Yes

  • Framework: Spring Boot Actuator

Service Discovery

  • Problem Addressed: Discovers Service/Endpoint deployed on OpenShift and exposed behind a service or route using the service name matching a DNS entry.

  • Cloud Native: Yes - using Kubernetes API

  • Framework: Spring Cloud Kubernetes - DiscoveryClient

Server Side Load Balancing

  • Problem Addressed: Handles load increases by deploying multiple service instances, and by transparently distributing the load across them.

  • Cloud Native: Yes - using internal Kubernetes Load Balancer

  • Framework: -

Client Side Load Balancing

  • Problem Addressed: Transparently handle load balancing on the client for better control and load distribution across multiple service instances.

  • Cloud Native: No

  • Framework: Spring Cloud Kubernetes - Ribbon

Externalize Parameters

  • Problem Addressed: Makes the application independent of the environment where it runs.

  • Cloud Native: Yes - Kubernetes ConfigMap or Secret

  • Framework: Spring Cloud Kubernetes - ConfigMap
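
As a hedged illustration of the Circuit Breaker entry above (not taken from a booster), a Spring service method can be wrapped with the Hystrix annotation support available through Spring Cloud; the class, method, and fallback names below are hypothetical, and the application class would also need @EnableCircuitBreaker.

    import org.springframework.stereotype.Service;

    import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;

    @Service
    public class GreetingService {

        // If the wrapped call fails or times out repeatedly, Hystrix opens the
        // circuit and routes calls to the fallback method instead.
        @HystrixCommand(fallbackMethod = "fallbackGreeting")
        public String greeting() {
            return callRemoteGreetingService();
        }

        public String fallbackGreeting() {
            return "Hello from the fallback!";
        }

        private String callRemoteGreetingService() {
            // Placeholder for a real remote call, for example using RestTemplate.
            throw new IllegalStateException("remote service unavailable");
        }
    }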

2. Debugging

This section contains information about debugging your Spring Boot–based application in both local and remote deployments.

2.1. Remote Debugging

To remotely debug an application, you must first configure it to start in a debugging mode, and then attach a debugger to it.

2.1.1. Starting Your Spring Boot Application Locally in Debugging Mode

One way to debug a Maven-based project is to launch the application manually while specifying a debugging port, and then connect a remote debugger to that port. This method applies in particular when you launch the application manually using the mvn spring-boot:run goal.

Prerequisites
  • A Maven-based application

Procedure
  1. In a console, navigate to the directory with your application.

  2. Launch your application and specify the necessary JVM arguments and the debug port using the following syntax:

    $ mvn spring-boot:run -Drun.jvmArguments="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=$PORT_NUMBER"

    Here, $PORT_NUMBER is an unused port number of your choice. Remember this number for the remote debugger configuration.

2.1.2. Starting an Uberjar in Debugging Mode

If you chose to package your application as a Spring Boot uberjar, debug it by executing it with the following parameters.

Prerequisites
  • An uberjar with your application

Procedure
  1. In a console, navigate to the directory with the uberjar.

  2. Execute the uberjar with the following parameters. Ensure that all the parameters are specified before the name of the uberjar on the line.

    $ java -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=$PORT_NUMBER -jar $UBERJAR_FILENAME

    Here, $PORT_NUMBER is an unused port number of your choice. Remember this number for the remote debugger configuration.

2.1.3. Starting Your Application on OpenShift in Debugging Mode

To debug your Spring Boot-based application on OpenShift remotely, you must set the JAVA_DEBUG environment variable inside the container to true and configure port forwarding so that you can connect to your application from a remote debugger.

Prerequisites
  • Your application running on OpenShift.

  • The oc binary installed on your machine.

Procedure
  1. Using the oc command, list the available deployment configurations:

    $ oc get dc
  2. Set the JAVA_DEBUG environment variable in the deployment configuration of your application to true, which configures the JVM to listen on port 5005 for debugging. For example:

    $ oc set env dc/MY_APP_NAME JAVA_DEBUG=true
  3. Redeploy the application if it is not set to redeploy automatically on environment change. For example:

    $ oc rollout latest dc/MY_APP_NAME
  4. Configure port forwarding from your local machine to the application pod:

    1. List the currently running pods and find one containing your application:

      $ oc get pod
      NAME                            READY     STATUS      RESTARTS   AGE
      MY_APP_NAME-3-1xrsp          0/1       Running     0          6s
      ...
    2. Configure port forwarding:

      $ oc port-forward MY_APP_NAME-3-1xrsp $LOCAL_PORT_NUMBER:5005

      Here, $LOCAL_PORT_NUMBER is an unused port number of your choice on your local machine. Remember this number for the remote debugger configuration.

  5. When you are done debugging, unset the JAVA_DEBUG environment variable in your application pod. For example:

    $ oc set env dc/MY_APP_NAME JAVA_DEBUG-

2.1.4. Attaching a Remote Debugger to the Application

When your application is configured for debugging, attach a remote debugger of your choice to it. In this guide, Red Hat JBoss Developer Studio is covered, but the procedure is similar when using other programs.

Prerequisites
  • The application running either locally or on OpenShift, and configured for debugging.

  • The port number that your application is listening on for debugging.

  • Red Hat JBoss Developer Studio installed on your machine. You can download it from the Red Hat JBoss Developer Studio download page.

Procedure
  1. Start Red Hat JBoss Developer Studio.

  2. Create a new debug configuration for your application:

    1. Click Run→Debug Configurations.

    2. In the list of configurations, double-click Remote Java application. This creates a new remote debugging configuration.

    3. Enter a suitable name for the configuration in the Name field.

    4. Enter the path to the directory with your application into the Project field. You can use the Browse…​ button for convenience.

    5. Set the Connection Type field to Standard (Socket Attach) if it is not already.

    6. Set the Port field to the port number that your application is listening on for debugging.

    7. Click Apply.

  3. Start debugging by clicking the Debug button in the Debug Configurations window.

    To quickly launch your debug configuration after the first time, click Run→Debug History and select the configuration from the list.

3. Monitoring

This section contains information about monitoring your Spring Boot–based application running on OpenShift.

3.1. Accessing JVM metrics for your application on OpenShift

3.1.1. Accessing JVM metrics using Jolokia on OpenShift

Jolokia is a built-in lightweight solution for accessing JMX (Java Management Extensions) metrics over HTTP on OpenShift. It lets you access CPU, storage, and memory usage data collected by JMX over an HTTP bridge using a REST interface and JSON-formatted message payloads. Its comparatively high speed and low resource requirements make it well suited for monitoring cloud applications.
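
As a hedged example of that REST interface (assuming the Jolokia port has been made reachable locally, for example with oc port-forward to port 8778 of a pod), a plain Java client can read a JMX attribute such as heap memory usage; the URL and class name below are illustrative.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class JolokiaHeapCheck {

        public static void main(String[] args) throws Exception {
            // Jolokia's read operation returns the requested MBean attribute as JSON.
            URL url = new URL("http://localhost:8778/jolokia/read/java.lang:type=Memory/HeapMemoryUsage");
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setRequestMethod("GET");

            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(connection.getInputStream()))) {
                StringBuilder body = new StringBuilder();
                String line;
                while ((line = reader.readLine()) != null) {
                    body.append(line);
                }
                // The JSON payload contains the used, committed, and max heap values.
                System.out.println(body);
            }
        }
    }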

Accessing Jolokia metrics using the hawt.io console

For Java-based applications, the OpenShift Web console provides the integrated hawt.io console that collects and displays all relevant metrics output by the JVM running your application.

Prerequisites
  • the oc client authenticated

  • a Java-based application container running in a project on OpenShift

  • latest JDK 1.8.0 image

Procedure
  1. List the deployment configurations of the pods inside your project and select the one that corresponds to your application.

    oc get dc
    NAME         REVISION   DESIRED   CURRENT   TRIGGERED BY
    MY_APP_NAME   2          1         1         config,image(my-app:6)
    ...
  2. Open the YAML deployment template of the pod running your application for editing.

    oc edit dc/MY_APP_NAME
  3. Add the following entry to the ports section of the template and save your changes:

    ...
    spec:
      ...
      ports:
      - containerPort: 8778
        name: jolokia
        protocol: TCP
      ...
    ...
  4. Redeploy the pod running your application.

    oc rollout latest dc/MY_APP_NAME

    The pod is redeployed with the updated deployment configuration and has port 8778 exposed on the OpenShift host.

  5. Log into the OpenShift Web console.

  6. In the sidebar, navigate to Applications > Pods, and click on the name of the pod running your application.

  7. In the pod details screen, click Open Java Console to access the hawt.io console.

4. Missions and Cloud-Native Development on OpenShift

What are Missions?

Missions are working applications that showcase different fundamental pieces of building cloud native applications and services.

A mission implements a Microservice pattern such as:

  • Creating REST APIs

  • Interoperating with a database

  • Implementing the Health Check pattern

You can use missions for a variety of purposes:

  • A proof of technology demonstration

  • A teaching tool, or a sandbox for understanding how to develop applications for your project

  • A starting point that you can update or extend for your own use case

What is a Booster?

A booster is the implementation of a mission in a specific runtime. Boosters are preconfigured, functioning applications that demonstrate a fundamental aspect of modern application development and run in an environment similar to production.

Each mission is implemented in one or more runtimes. Both the specific implementation and the actual project that contains your code are called a booster.

For example, the REST API Level 0 mission is implemented in multiple runtimes, including Spring Boot.

5. Available Missions and Boosters for Spring Boot

The Spring Boot runtime supports the following missions and boosters.

5.1. REST API Level 0 Mission - Spring Boot Booster

Mission proficiency level: Foundational.

What the REST API Level 0 Mission Does

The REST API Level 0 Mission shows how to map business operations to a remote procedure call endpoint over HTTP using a REST framework. This corresponds to Level 0 in the Richardson Maturity Model. Creating an HTTP endpoint using REST and its underlying principles to define your API lets you quickly prototype and design the API flexibly. For more information on REST, see REST Resources.

This booster introduces the mechanics of interacting with a remote service using the HTTP protocol. It allows you to:

  • Execute an HTTP GET request on the api/greeting endpoint.

  • Receive a response in JSON format with a payload consisting of the Hello, World! String.

  • Execute an HTTP GET request on the api/greeting endpoint while passing in a String argument. This uses the name request parameter in the query string.

  • Receive a response in JSON format with a payload of Hello, $name! with $name replaced by the value of the name parameter passed into the request.
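
The server side of these interactions can be sketched with a small Spring MVC controller. The following is a minimal, hypothetical example rather than the booster's actual source; the class and method names are illustrative.

    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RequestParam;
    import org.springframework.web.bind.annotation.RestController;

    // Maps GET /api/greeting to a JSON response, with an optional name query parameter.
    @RestController
    @RequestMapping("/api")
    public class GreetingController {

        @GetMapping("/greeting")
        public Greeting greeting(@RequestParam(value = "name", defaultValue = "World") String name) {
            return new Greeting(String.format("Hello, %s!", name));
        }

        // Serialized by Spring Boot's Jackson support as {"content":"Hello, World!"}.
        public static class Greeting {
            private final String content;

            public Greeting(String content) {
                this.content = content;
            }

            public String getContent() {
                return content;
            }
        }
    }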

Viewing the Booster source code and README

To view the source code and README file of this booster, download and extract the ZIP file with the booster. To get the download link of the ZIP file, see the Creating and Deploying a Booster Using OpenShift Online chapter of the Getting Started with Application Development on OpenShift guide.

Table 2. Design Tradeoffs

Pros:

  • The booster enables fast prototyping.

  • The API design is flexible.

  • HTTP endpoints allow clients to be language agnostic.

Cons:

  • As an application or service matures, the REST API Level 0 approach might not scale well. It might not support a clean API design or use cases with database interactions.

    • Any operations involving shared, mutable state must be integrated with an appropriate backing datastore.

    • All requests handled by this API design are scoped only to the container servicing the request. Subsequent requests might not be served by the same container.

5.1.1. Deploying the REST API Level 0 Booster to OpenShift Online

Use one of the following options to execute the REST API Level 0 booster on OpenShift Online.

Although each method uses the same oc commands to deploy your application, using developers.redhat.com/launch provides an automated booster deployment workflow that executes the oc commands for you.

Deploying the Booster Using developers.redhat.com/launch
Prerequisites
Procedure
  • Navigate to the OpenShift Online URL in a browser and log in.

  • Follow on-screen instructions to create and launch your booster in Spring Boot.

Authenticating the oc CLI Client

To work with boosters on OpenShift Online using the oc command-line client, you need to authenticate the client using the token provided by the OpenShift Online web interface.

Prerequisites
Procedure
  1. Navigate to the OpenShift Online URL in a browser.

  2. Click on the question mark icon in the top right-hand corner of the Web console, next to your user name.

  3. Select Command Line Tools in the drop-down menu.

  4. Find the text box that contains the oc login …​ command with the hidden token, and click the button next to it to copy its content to your clipboard.

  5. Paste the command into a terminal application. The command uses your authentication token to authenticate your oc CLI client with your OpenShift Online account.

    $ oc login OPENSHIFT_URL --token=MYTOKEN
Deploying the REST API Level 0 Booster using the oc CLI Client
Prerequisites
Procedure
  1. Create a new project.

    $ oc new-project MY_PROJECT_NAME
  2. Navigate to the root directory of your booster.

  3. Use Maven to start the deployment to OpenShift.

    $ mvn clean fabric8:deploy -Popenshift

    This command uses the Fabric8 Maven Plugin to launch the S2I process on OpenShift and to start the pod.

  4. Check the status of your booster and ensure your pod is running.

    $ oc get pods -w
    NAME                             READY     STATUS      RESTARTS   AGE
    MY_APP_NAME-1-aaaaa               1/1       Running     0          58s
    MY_APP_NAME-s2i-1-build           0/1       Completed   0          2m

    Your MY_APP_NAME-1-aaaaa pod should have a status of Running once it is fully deployed and started.

  5. Once your booster is deployed and started, determine its route.

    Example Route Information
    $ oc get routes
    NAME                 HOST/PORT                                                     PATH      SERVICES        PORT      TERMINATION
    MY_APP_NAME         MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME      MY_APP_NAME      8080

    The route information of a pod gives you the base URL which you use to access it. In the example above, you would use http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME as the base URL to access the application.

5.1.2. Deploying the REST API Level 0 Booster to Single-node OpenShift Cluster

Use one of the following options to execute the REST API Level 0 booster locally on Single-node OpenShift Cluster:

  • Using Fabric8 Launcher

  • Using the oc CLI client

Although each method uses the same oc commands to deploy your application, using Fabric8 Launcher provides an automated booster deployment workflow that executes the oc commands for you.

Getting the Fabric8 Launcher Tool URL and Credentials

You need the Fabric8 Launcher tool URL and user credentials to create and deploy boosters on Single-node OpenShift Cluster. This information is provided when the Single-node OpenShift Cluster is started.

Prerequisites
Procedure
  1. Navigate to the console where you started Single-node OpenShift Cluster.

  2. Check the console output for the URL and user credentials you can use to access the running Fabric8 Launcher:

    Example Console Output from a Single-node OpenShift Cluster Startup
    ...
    -- Removing temporary directory ... OK
    -- Server Information ...
       OpenShift server started.
       The server is accessible via web console at:
           https://192.168.42.152:8443
    
       You are logged in as:
           User:     developer
           Password: developer
    
       To login as administrator:
           oc login -u system:admin
Deploying the Booster Using the Fabric8 Launcher Tool
Prerequisites
Procedure
  • Navigate to the Single-node OpenShift Cluster URL in a browser and log in.

  • Follow on-screen instructions to create and launch your booster in Spring Boot.

Authenticating the oc CLI Client

To work with boosters on Single-node OpenShift Cluster using the oc command-line client, you need to authenticate the client using the token provided by the Single-node OpenShift Cluster web interface.

Prerequisites
Procedure
  1. Navigate to the Single-node OpenShift Cluster URL in a browser.

  2. Click on the question mark icon in the top right-hand corner of the Web console, next to your user name.

  3. Select Command Line Tools in the drop-down menu.

  4. Find the text box that contains the oc login …​ command with the hidden token, and click the button next to it to copy its content to your clipboard.

  5. Paste the command into a terminal application. The command uses your authentication token to authenticate your oc CLI client with your Single-node OpenShift Cluster account.

    $ oc login OPENSHIFT_URL --token=MYTOKEN
Deploying the REST API Level 0 Booster using the oc CLI Client
Prerequisites
Procedure
  1. Create a new project.

    $ oc new-project MY_PROJECT_NAME
  2. Navigate to the root directory of your booster.

  3. Use Maven to start the deployment to OpenShift.

    $ mvn clean fabric8:deploy -Popenshift

    This command uses the Fabric8 Maven Plugin to launch the S2I process on OpenShift and to start the pod.

  4. Check the status of your booster and ensure your pod is running.

    $ oc get pods -w
    NAME                             READY     STATUS      RESTARTS   AGE
    MY_APP_NAME-1-aaaaa               1/1       Running     0          58s
    MY_APP_NAME-s2i-1-build           0/1       Completed   0          2m

    Your MY_APP_NAME-1-aaaaa pod should have a status of Running once it is fully deployed and started.

  5. Once your booster is deployed and started, determine its route.

    Example Route Information
    $ oc get routes
    NAME                 HOST/PORT                                                     PATH      SERVICES        PORT      TERMINATION
    MY_APP_NAME         MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME      MY_APP_NAME      8080

    The route information of a pod gives you the base URL which you use to access it. In the example above, you would use http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME as the base URL to access the application.

5.1.3. Deploying the REST API Level 0 Booster to OpenShift Container Platform

The process of creating and deploying boosters to OpenShift Container Platform is similar to the process for OpenShift Online.

Prerequisites
Procedure

5.1.4. Interacting with the Unmodified Spring Boot Booster

The booster provides a default HTTP endpoint that accepts GET requests.

  1. Use curl to execute a GET request against the booster. You can also use a browser to do this.

    $ curl http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME/api/greeting
    {"content":"Hello, World!"}
  2. Use curl to execute a GET request with the name URL parameter against the booster. You can also use a browser to do this.

    $ curl http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME/api/greeting?name=Sarah
    {"content":"Hello, Sarah!"}
From a browser you can also use a form provided by the booster to perform these same interactions. The form is located at the root of the application: http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME.

5.1.5. Running the REST API Level 0 Booster Integration Tests

This booster includes a self-contained set of integration tests. When run inside an OpenShift project, the tests:

  • deploy a test instance of the application to the project,

  • execute the individual tests on that instance,

  • remove all instances of the application from the project when the testing is done.

Executing integration tests removes all existing instances of the booster application from the target OpenShift project. To avoid accidentally removing your booster application, ensure that you create and select a separate OpenShift project to execute the tests.

Prerequisites
  • the oc client authenticated

  • an empty project in your OpenShift namespace

Procedure

Execute the following command to run the integration tests:

$ mvn clean verify -Popenshift,openshift-it

5.2. Externalized Configuration Mission - Spring Boot Booster

Mission proficiency level: Foundational.

The Externalized Configuration Mission provides a basic example of using a ConfigMap to externalize configuration. ConfigMap is an object used by OpenShift to inject configuration data as simple key and value pairs into one or more Linux containers while keeping the containers independent of OpenShift.

This mission shows you how to:

  • Set up and configure a ConfigMap.

  • Use the configuration provided by the ConfigMap within an application.

  • Deploy changes to the ConfigMap configuration of running applications.

About Externalized Configuration

It is important for the application configuration to be externalized and separated from its code. This allows the application configuration to change as it moves through different environments while leaving the code unchanged. This also keeps sensitive or internal information out of your code base and version control. Many languages and application servers provide environment variables to support externalizing an application’s configuration. Microservices architectures and polyglot environments add a layer of complexity to managing an application’s configuration. Applications are composed of independent, distributed services, each potentially with its own configuration. This creates a maintenance challenge to keep the configuration synchronized and accessible from all services.

ConfigMaps enable the application configuration to be externalized and used in individual Linux containers and pods on OpenShift. You can create a ConfigMap object in a variety of ways, including using a YAML file, and inject it into the Linux container. ConfigMaps also allow sets of configuration data to be easily grouped and scaled. This lets you configure an arbitrarily large number of environments beyond the basic Development, Stage, and Production. You can find more information about ConfigMaps in the OpenShift documentation.
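
As a hedged sketch of how an application might consume such externalized configuration (not the booster's actual source), a Spring bean can read a property such as greeting.message from the Spring Environment, into which Spring Cloud Kubernetes can load the contents of a ConfigMap; the class name and default value below are illustrative.

    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RequestParam;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class GreetingController {

        // greeting.message is resolved from the Environment; when the ConfigMap's
        // application.properties is loaded as a property source, its value is used,
        // otherwise the default after the colon applies.
        @Value("${greeting.message:Hello %s from a default value!}")
        private String message;

        @GetMapping("/api/greeting")
        public String greeting(@RequestParam(value = "name", defaultValue = "World") String name) {
            return String.format(message, name);
        }
    }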

Viewing the Booster source code and README

To view the source code and README file of this booster, download and extract the ZIP file with the booster. To get the download link of the ZIP file, see the Creating and Deploying a Booster Using OpenShift Online chapter of the Getting Started with Application Development on OpenShift guide.

Table 3. Design Tradeoffs

Pros:

  • Configuration is separate from deployments

  • Can be updated independently

  • Can be shared across services

Cons:

  • Configuration is separate from deployments

  • Has to be maintained separately

  • Requires coordination beyond the scope of a service

5.2.1. Deploying the Externalized Configuration Booster to OpenShift Online

Use one of the following options to execute the Externalized Configuration booster on OpenShift Online.

Although each method uses the same oc commands to deploy your application, using developers.redhat.com/launch provides an automated booster deployment workflow that executes the oc commands for you.

Deploying the Booster Using developers.redhat.com/launch
Prerequisites
Procedure
  • Navigate to the OpenShift Online URL in a browser and log in.

  • Follow on-screen instructions to create and launch your booster in Spring Boot.

Authenticating the oc CLI Client

To work with boosters on OpenShift Online using the oc command-line client, you need to authenticate the client using the token provided by the OpenShift Online web interface.

Prerequisites
Procedure
  1. Navigate to the OpenShift Online URL in a browser.

  2. Click on the question mark icon in the top right-hand corner of the Web console, next to your user name.

  3. Select Command Line Tools in the drop-down menu.

  4. Find the text box that contains the oc login …​ command with the hidden token, and click the button next to it to copy its content to your clipboard.

  5. Paste the command into a terminal application. The command uses your authentication token to authenticate your oc CLI client with your OpenShift Online account.

    $ oc login OPENSHIFT_URL --token=MYTOKEN
Deploying the Externalized Configuration Booster using the oc CLI Client
Prerequisites
Procedure
  1. Create a new project.

    $ oc new-project MY_PROJECT_NAME
  2. Assign view access rights to the service account before deploying your booster, so that the booster can access the OpenShift API in order to read the contents of the ConfigMap.

    $ oc policy add-role-to-user view -n $(oc project -q) -z default
  3. Navigate to the root directory of your booster.

  4. Deploy your ConfigMap configuration to OpenShift using greeting-service/src/main/etc/application.properties.

    $ oc create configmap app-config --from-file=greeting-service/src/main/etc/application.properties
  5. Verify your ConfigMap configuration has been deployed.

    $ oc get configmap app-config -o yaml
    
    apiVersion: v1
    data:
      application.properties: |-
          greeting.message=Hello %s from a ConfigMap!
    ...
  6. Use Maven to start the deployment to OpenShift.

    $ mvn clean fabric8:deploy -Popenshift

    This command uses the Fabric8 Maven Plugin to launch the S2I process on OpenShift and to start the pod.

  7. Check the status of your booster and ensure your pod is running.

    $ oc get pods -w
    NAME                                       READY     STATUS      RESTARTS   AGE
    MY_APP_NAME-1-aaaaa               1/1       Running     0          58s
    MY_APP_NAME-s2i-1-build           0/1       Completed   0          2m

    Your MY_APP_NAME-1-aaaaa pod should have a status of Running once it is fully deployed and started.

  8. Once your booster is deployed and started, determine its route.

    Example Route Information
    $ oc get routes
    NAME                 HOST/PORT                                                     PATH      SERVICES        PORT      TERMINATION
    MY_APP_NAME         MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME      MY_APP_NAME      8080

    The route information of a pod gives you the base URL which you use to access it. In the example above, you would use http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME as the base URL to access the application.

5.2.2. Deploying the Externalized Configuration Booster to Single-node OpenShift Cluster

Use one of the following options to execute the Externalized Configuration booster locally on Single-node OpenShift Cluster:

  • Using Fabric8 Launcher

  • Using the oc CLI client

Although each method uses the same oc commands to deploy your application, using Fabric8 Launcher provides an automated booster deployment workflow that executes the oc commands for you.

Getting the Fabric8 Launcher Tool URL and Credentials

You need the Fabric8 Launcher tool URL and user credentials to create and deploy boosters on Single-node OpenShift Cluster. This information is provided when the Single-node OpenShift Cluster is started.

Prerequisites
Procedure
  1. Navigate to the console where you started Single-node OpenShift Cluster.

  2. Check the console output for the URL and user credentials you can use to access the running Fabric8 Launcher:

    Example Console Output from a Single-node OpenShift Cluster Startup
    ...
    -- Removing temporary directory ... OK
    -- Server Information ...
       OpenShift server started.
       The server is accessible via web console at:
           https://192.168.42.152:8443
    
       You are logged in as:
           User:     developer
           Password: developer
    
       To login as administrator:
           oc login -u system:admin
Deploying the Booster Using the Fabric8 Launcher Tool
Prerequisites
Procedure
  • Navigate to the Single-node OpenShift Cluster URL in a browser and log in.

  • Follow on-screen instructions to create and launch your booster in Spring Boot.

Authenticating the oc CLI Client

To work with boosters on Single-node OpenShift Cluster using the oc command-line client, you need to authenticate the client using the token provided by the Single-node OpenShift Cluster web interface.

Prerequisites
Procedure
  1. Navigate to the Single-node OpenShift Cluster URL in a browser.

  2. Click on the question mark icon in the top right-hand corner of the Web console, next to your user name.

  3. Select Command Line Tools in the drop-down menu.

  4. Find the text box that contains the oc login …​ command with the hidden token, and click the button next to it to copy its content to your clipboard.

  5. Paste the command into a terminal application. The command uses your authentication token to authenticate your oc CLI client with your Single-node OpenShift Cluster account.

    $ oc login OPENSHIFT_URL --token=MYTOKEN
Deploying the Externalized Configuration Booster using the oc CLI Client
Prerequisites
Procedure
  1. Create a new project.

    $ oc new-project MY_PROJECT_NAME
  2. Assign view access rights to the service account before deploying your booster, so that the booster can access the OpenShift API in order to read the contents of the ConfigMap.

    $ oc policy add-role-to-user view -n $(oc project -q) -z default
  3. Navigate to the root directory of your booster.

  4. Deploy your ConfigMap configuration to OpenShift using greeting-service/src/main/etc/application.properties.

    $ oc create configmap app-config --from-file=greeting-service/src/main/etc/application.properties
  5. Verify your ConfigMap configuration has been deployed.

    $ oc get configmap app-config -o yaml
    
    apiVersion: v1
    data:
      application.properties: |-
          greeting.message=Hello %s from a ConfigMap!
    ...
  6. Use Maven to start the deployment to OpenShift.

    $ mvn clean fabric8:deploy -Popenshift

    This command uses the Fabric8 Maven Plugin to launch the S2I process on OpenShift and to start the pod.

  7. Check the status of your booster and ensure your pod is running.

    $ oc get pods -w
    NAME                                       READY     STATUS      RESTARTS   AGE
    MY_APP_NAME-1-aaaaa               1/1       Running     0          58s
    MY_APP_NAME-s2i-1-build           0/1       Completed   0          2m

    Your MY_APP_NAME-1-aaaaa pod should have a status of Running once it is fully deployed and started.

  8. Once your booster is deployed and started, determine its route.

    Example Route Information
    $ oc get routes
    NAME                 HOST/PORT                                                     PATH      SERVICES        PORT      TERMINATION
    MY_APP_NAME         MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME      MY_APP_NAME      8080

    The route information of a pod gives you the base URL which you use to access it. In the example above, you would use http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME as the base URL to access the application.

5.2.3. Deploying the Externalized Configuration Booster to OpenShift Container Platform

The process of creating and deploying boosters to OpenShift Container Platform is similar to the process for OpenShift Online.

Prerequisites
Procedure

5.2.4. Interacting with the Unmodified Spring Boot Booster

The booster provides a default HTTP endpoint that accepts GET requests.

  1. Use curl to execute a GET request against the booster. You can also use a browser to do this.

    $ curl http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME/api/greeting
    {"content":"Hello World from a ConfigMap!"}
  2. Update the deployed ConfigMap configuration.

    $ oc edit configmap app-config

    Change the value for the greeting.message key to Bonjour! and save the file. After you save this, the changes will be propagated to your OpenShift instance.

  3. Deploy the new version of your application so the ConfigMap configuration changes are picked up.

    $ oc rollout latest dc/MY_APP_NAME
  4. Check the status of your booster and ensure your new pod is running.

    $ oc get pods -w
    NAME                             READY     STATUS      RESTARTS   AGE
    MY_APP_NAME-1-aaaaa       1/1       Running     0          58s
    MY_APP_NAME-s2i-1-build   0/1       Completed   0          2m

    Your MY_APP_NAME-1-aaaaa pod should have a status of Running once it’s fully deployed and started.

  5. Execute a GET request using curl against the booster with the updated ConfigMap configuration to see your updated greeting. You can also do this from your browser using the web form provided by the application.

    $ curl http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME/api/greeting
    {"content":"Bonjour!"}

5.2.5. Running the Externalized Configuration Booster Integration Tests

This booster includes a self-contained set of integration tests. When run inside an OpenShift project, the tests:

  • deploy a test instance of the application to the project,

  • execute the individual tests on that instance,

  • remove all instances of the application from the project when the testing is done.

Executing integration tests removes all existing instances of the booster application from the target OpenShift project. To avoid accidentally removing your booster application, ensure that you create and select a separate OpenShift project to execute the tests.

Prerequisites
  • the oc client authenticated

  • an empty project in your OpenShift namespace

  • view access permission assigned to the service account of your booster application. This allows your application to read the configuration from the ConfigMap:

    $ oc policy add-role-to-user view -n $(oc project -q) -z default
Procedure

Execute the following command to run the integration tests:

$ mvn clean verify -Popenshift,openshift-it

5.3. Relational Database Backend Mission - Spring Boot Booster

Limitation: Run this booster on a Single-node OpenShift Cluster. You can also use a manual workflow to deploy this booster to OpenShift Online Pro and OpenShift Container Platform. This booster is not currently available on OpenShift Online Starter.

Mission proficiency level: Foundational.

What the Relational Database Backend Booster Does

The Relational Database Backend booster expands on the REST API Level 0 booster to provide a basic example of performing create, read, update and delete (CRUD) operations on a PostgreSQL database using a simple HTTP API. CRUD operations are the four basic functions of persistent storage, widely used when developing an HTTP API dealing with a database.

The booster also demonstrates the ability of the HTTP application to locate and connect to a database in OpenShift. Each runtime shows how to implement the connectivity solution best suited to the given case. The runtime can choose between using JDBC, JPA, or accessing ORM APIs directly.

The booster application exposes an HTTP API, which provides endpoints that allow you to manipulate data by performing CRUD operations over HTTP. The CRUD operations are mapped to HTTP verbs. The API uses JSON formatting to receive requests and return responses to the user. The user can also use a UI provided by the booster to use the application. Specifically, this booster provides an application that allows you to:

  • Navigate to the application web interface in your browser. This exposes a simple website allowing you to perform CRUD operations on the data in the my_data database.

  • Execute an HTTP GET request on the api/fruits endpoint.

  • Receive a response formatted as a JSON array containing the list of all fruits in the database.

  • Execute an HTTP GET request on the api/fruits/* endpoint while passing in a valid item ID as an argument.

  • Receive a response in JSON format containing the name of the fruit with the given ID. If no item matches the specified ID, the call results in an HTTP error 404.

  • Execute an HTTP POST request on the api/fruits endpoint passing in a valid name value to create a new entry in the database.

  • Execute an HTTP PUT request on the api/fruits/* endpoint passing in a valid ID and a name as an argument. This updates the name of the item with the given ID to match the name specified in your request.

  • Execute an HTTP DELETE request on the api/fruits/* endpoint, passing in a valid ID as an argument. This removes the item with the specified ID from the database and returns an HTTP code 204 (No Content) as a response. If you pass in an invalid ID, the call results in an HTTP error 404.

This booster also contains a set of automated integration tests that can be used to verify that the application is fully integrated with the database.

This booster does not showcase a fully matured RESTful model (level 3), but it does use compatible HTTP verbs and status codes, following the recommended HTTP API practices.
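
As a hedged sketch of a typical persistence layer behind such an API (not necessarily the booster's exact source), a JPA entity and a Spring Data repository are enough to back the api/fruits endpoints; the names below are illustrative.

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;

    import org.springframework.data.repository.CrudRepository;

    // Entity mapped to the table that stores the fruit records returned by api/fruits.
    @Entity
    public class Fruit {

        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        private Long id;

        private String name;

        protected Fruit() {
            // Required by JPA.
        }

        public Fruit(String name) {
            this.name = name;
        }

        public Long getId() {
            return id;
        }

        public String getName() {
            return name;
        }

        public void setName(String name) {
            this.name = name;
        }
    }

    // Spring Data generates the CRUD operations (save, find, delete) that a REST
    // controller can expose over HTTP as the POST, GET, PUT, and DELETE endpoints.
    interface FruitRepository extends CrudRepository<Fruit, Long> {
    }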

Viewing the Booster source code and README

To view the source code and README file of this booster, download and extract the ZIP file with the booster. To get the download link of the ZIP file, see the Creating and Deploying a Booster Using OpenShift Online chapter of the Getting Started with Application Development on OpenShift guide.

Table 4. Design Tradeoffs

Pros:

  • Each runtime determines how to implement the database interactions. One can use JDBC while others can use JPA or access ORM APIs directly. Each runtime decides what would be the best way.

  • Each runtime determines how the schema is created.

Cons:

  • The PostgreSQL database example provided with this mission is not backed up with persistent storage. Changes to the database are lost if you stop or redeploy the database pod. To use an external database with your mission’s pod in order to preserve changes, see the Integrating External Services chapter of the OpenShift Documentation. It is also possible to set up persistent storage with database containers on OpenShift. (For more details about using persistent storage with OpenShift and containers, see the Persistent Storage, Managing Volumes and Persistent Volumes chapters of the OpenShift Documentation).

5.3.1. Deploying the Relational Database Backend Booster to OpenShift Online

Use one of the following options to execute the Relational Database Backend booster on OpenShift Online.

Although each method uses the same oc commands to deploy your application, using developers.redhat.com/launch provides an automated booster deployment workflow that executes the oc commands for you.

Deploying the Booster Using developers.redhat.com/launch
Prerequisites
Procedure
  • Navigate to the OpenShift Online URL in a browser and log in.

  • Follow on-screen instructions to create and launch your booster in Spring Boot.

Authenticating the oc CLI Client

To work with boosters on OpenShift Online using the oc command-line client, you need to authenticate the client using the token provided by the OpenShift Online web interface.

Prerequisites
Procedure
  1. Navigate to the OpenShift Online URL in a browser.

  2. Click on the question mark icon in the top right-hand corner of the Web console, next to your user name.

  3. Select Command Line Tools in the drop-down menu.

  4. Find the text box that contains the oc login …​ command with the hidden token, and click the button next to it to copy its content to your clipboard.

  5. Paste the command into a terminal application. The command uses your authentication token to authenticate your oc CLI client with your OpenShift Online account.

    $ oc login OPENSHIFT_URL --token=MYTOKEN
Deploying the Relational Database Backend Booster using the oc CLI Client
Prerequisites
Procedure
  1. Create a new project.

    $ oc new-project MY_PROJECT_NAME
  2. Navigate to the root directory of your booster.

  3. Deploy the PostgreSQL database to OpenShift. Ensure that you use the following values for user name, password, and database name when creating your database application. The booster application is pre-configured to use these values. Using different values prevents your booster application from integrating with the database.

    $ oc new-app -e POSTGRESQL_USER=luke -ePOSTGRESQL_PASSWORD=secret -ePOSTGRESQL_DATABASE=my_data openshift/postgresql-92-centos7 --name=my-database
  4. Check the status of your database and ensure the pod is running.

    $ oc get pods -w
    my-database-1-aaaaa   1/1       Running   0         45s
    my-database-1-deploy   0/1       Completed   0         53s

    Your my-database-1-aaaaa pod should have a status of Running and should be indicated as ready once it is fully deployed and started.

  5. Use Maven to start the deployment to OpenShift.

    $ mvn clean fabric8:deploy -Popenshift

    This command uses the Fabric8 Maven Plugin to launch the S2I process on OpenShift and to start the pod.

  6. Check the status of your booster and ensure your pod is running.

    $ oc get pods -w
    NAME                             READY     STATUS      RESTARTS   AGE
    MY_APP_NAME-1-aaaaa       1/1       Running     0          58s
    MY_APP_NAME-s2i-1-build   0/1       Completed   0          2m

    Your MY_APP_NAME-1-aaaaa pod should have a status of Running and should be indicated as ready once it is fully deployed and started.

  7. Once your booster is deployed and started, determine its route.

    Example Route Information
    $ oc get routes
    NAME                 HOST/PORT                                     PATH      SERVICES             PORT      TERMINATION
    MY_APP_NAME   MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME      MY_APP_NAME   8080

    The route information of a pod gives you the base URL which you use to access it. In the example above, you would use http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME as the base URL to access the application.

5.3.2. Deploying the Relational Database Backend Booster to Single-node OpenShift Cluster

Use one of the following options to execute the Relational Database Backend booster locally on Single-node OpenShift Cluster:

  • Using Fabric8 Launcher

  • Using the oc CLI client

Although each method uses the same oc commands to deploy your application, using Fabric8 Launcher provides an automated booster deployment workflow that executes the oc commands for you.

Getting the Fabric8 Launcher Tool URL and Credentials

You need the Fabric8 Launcher tool URL and user credentials to create and deploy boosters on Single-node OpenShift Cluster. This information is provided when the Single-node OpenShift Cluster is started.

Prerequisites
Procedure
  1. Navigate to the console where you started Single-node OpenShift Cluster.

  2. Check the console output for the URL and user credentials you can use to access the running Fabric8 Launcher:

    Example Console Output from a Single-node OpenShift Cluster Startup
    ...
    -- Removing temporary directory ... OK
    -- Server Information ...
       OpenShift server started.
       The server is accessible via web console at:
           https://192.168.42.152:8443
    
       You are logged in as:
           User:     developer
           Password: developer
    
       To login as administrator:
           oc login -u system:admin
Deploying the Booster Using the Fabric8 Launcher Tool
Prerequisites
Procedure
  • Navigate to the Single-node OpenShift Cluster URL in a browser and log in.

  • Follow on-screen instructions to create and launch your booster in Spring Boot.

Authenticating the oc CLI Client

To work with boosters on Single-node OpenShift Cluster using the oc command-line client, you need to authenticate the client using the token provided by the Single-node OpenShift Cluster web interface.

Prerequisites
Procedure
  1. Navigate to the Single-node OpenShift Cluster URL in a browser.

  2. Click on the question mark icon in the top right-hand corner of the Web console, next to your user name.

  3. Select Command Line Tools in the drop-down menu.

  4. Find the text box that contains the oc login …​ command with the hidden token, and click the button next to it to copy its content to your clipboard.

  5. Paste the command into a terminal application. The command uses your authentication token to authenticate your oc CLI client with your Single-node OpenShift Cluster account.

    $ oc login OPENSHIFT_URL --token=MYTOKEN
Deploying the Relational Database Backend Booster using the oc CLI Client
Prerequisites
Procedure
  1. Create a new project.

    $ oc new-project MY_PROJECT_NAME
  2. Navigate to the root directory of your booster.

  3. Deploy the PostgreSQL database to OpenShift. Ensure that you use the following values for user name, password, and database name when creating your database application. The booster application is pre-configured to use these values. Using different values prevents your booster application from integrating with the database.

    $ oc new-app -e POSTGRESQL_USER=luke -ePOSTGRESQL_PASSWORD=secret -ePOSTGRESQL_DATABASE=my_data openshift/postgresql-92-centos7 --name=my-database
  4. Check the status of your database and ensure the pod is running.

    $ oc get pods -w
    my-database-1-aaaaa   1/1       Running   0         45s
    my-database-1-deploy   0/1       Completed   0         53s

    Your my-database-1-aaaaa pod should have a status of Running and should be indicated as ready once it is fully deployed and started.

  5. Use Maven to start the deployment to OpenShift.

    $ mvn clean fabric8:deploy -Popenshift

    This command uses the Fabric8 Maven Plugin to launch the S2I process on OpenShift and to start the pod.

  6. Check the status of your booster and ensure your pod is running.

    $ oc get pods -w
    NAME                             READY     STATUS      RESTARTS   AGE
    MY_APP_NAME-1-aaaaa       1/1       Running     0          58s
    MY_APP_NAME-s2i-1-build   0/1       Completed   0          2m

    Your MY_APP_NAME-1-aaaaa pod should have a status of Running and should be indicated as ready once it is fully deployed and started.

  7. Once your booster is deployed and started, determine its route.

    Example Route Information
    $ oc get routes
    NAME                 HOST/PORT                                     PATH      SERVICES             PORT      TERMINATION
    MY_APP_NAME   MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME      MY_APP_NAME   8080

    The route information of a pod gives you the base URL which you use to access it. In the example above, you would use http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME as the base URL to access the application.

5.3.3. Deploying the Relational Database Backend Booster to OpenShift Container Platform

The process of creating and deploying boosters to OpenShift Container Platform is similar to the process for OpenShift Online.

Prerequisites
Procedure

5.3.4. Interacting with the Application API

  1. Once the application is running, you can access it using the application URL. To obtain the URL, execute the following command:

    oc get route MY_APP_NAME
    NAME                 HOST/PORT                                         PATH      SERVICES             PORT      TERMINATION
    MY_APP_NAME           MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME              MY_APP_NAME           8080
  2. To access the web interface of the database application, navigate to the application URL in your browser:

    http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME

    Alternatively, you can make requests directly on the api/fruits/* endpoint using curl:

    List all entries in the database:
    curl http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME/api/fruits
    [ {
      "id" : 1,
      "name" : "Cherry"
    }, {
      "id" : 2,
      "name" : "Apple"
    }, {
      "id" : 3,
      "name" : "Banana"
    } ]
    Retrieve an entry with a specific ID:
    curl http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME/api/fruits/3
    {
      "id" : 3,
      "name" : "Banana"
    }
    Create a new entry:
    curl -H "Content-Type: application/json" -X POST -d '{"name":"pear"}' http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME/api/fruits
    {
      "id" : 4,
      "name" : "pear"
    }
    Update an entry:
    curl -H "Content-Type: application/json" -X PUT -d '{"name":"pineapple"}' http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME/api/fruits/1
    {
      "id" : 1,
      "name" : "pineapple"
    }
    Delete an entry:
    curl -X DELETE http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME/api/fruits/1

If you receive an HTTP Error code 503 as a response after executing these commands, it means that the application is not ready yet.

5.3.5. Running the Relational Database Backend Booster Integration Tests

This booster includes a self-contained set of integration tests. When run inside an OpenShift project, the tests:

  • deploy a test instance of the application to the project,

  • execute the individual tests on that instance,

  • remove all instances of the application from the project when the testing is done.

Executing integration tests removes all existing instances of the booster application from the target OpenShift project. To avoid accidentally removing your booster application, ensure that you create and select a separate OpenShift project to execute the tests.

Prerequisites
  • the oc client authenticated

  • an empty project in your OpenShift namespace

Procedure

Execute the following command to run the integration tests:

$ mvn clean verify -Popenshift,openshift-it

5.4. Health Check Mission - Spring Boot Booster

Mission proficiency level: Foundational.

When you deploy an application, it is important to know whether it is available and whether it can start handling incoming requests. Implementing the health check pattern allows you to monitor the health of an application, including whether it is available and able to service requests.

In order to understand the health check pattern, you need to first understand the following concepts:

Liveness

Liveness defines whether an application is running or not. Sometimes a running application moves into an unresponsive or stopped state and needs to be restarted. Checking for liveness helps determine whether or not an application needs to be restarted.

Readiness

Readiness defines whether a running application can service requests. Sometimes a running application moves into an error or broken state where it can no longer service requests. Checking readiness helps determine whether or not requests should continue to be routed to that application.

Fail-over

Fail-over enables failures in servicing requests to be handled gracefully. If an application fails to service a request, that request and future requests can then fail over to, or be routed to, another application, which is usually a redundant copy of that same application.

Resilience and Stability

Resilience and Stability enable failures in servicing requests to be handled gracefully. If an application fails to service a request due to connection loss, in a resilient system that request can be retried after the connection is re-established.

Probe

A probe is a Kubernetes action that periodically performs diagnostics on a running container.

The purpose of this use case is to demonstrate the health check pattern through the use of probing. Probing is used to report the liveness and readiness of an application. In this use case, you configure an application that exposes an HTTP health endpoint against which the probes issue HTTP requests. If the container is alive, according to the liveness probe on the health HTTP endpoint, the management platform receives a return code of 200 and no further action is required. If the health HTTP endpoint does not return a response, for example if the JVM is no longer running or a thread is blocked, then the application is not considered alive according to the liveness probe. In that case, the platform kills the pod corresponding to that application and creates a new pod to restart the application.

This use case also allows you to demonstrate and use a readiness probe. In cases where the application is running but is unable to handle requests, such as when the application returns an HTTP 503 response code during restart, this application is not considered ready according to the readiness probe. If the application is not considered ready by the readiness probe, requests are not routed to that application until it is considered ready according to the readiness probe.
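
With Spring Boot Actuator, a custom health contribution can be added alongside the built-in checks behind the health HTTP endpoint that the probes call. The following is a minimal, hypothetical sketch; the class name and the dependency check are illustrative.

    import org.springframework.boot.actuate.health.Health;
    import org.springframework.boot.actuate.health.HealthIndicator;
    import org.springframework.stereotype.Component;

    // Contributes to the Actuator health endpoint queried by the liveness and
    // readiness probes. If this indicator reports DOWN, the endpoint returns
    // HTTP 503 and the probe treats the application as not alive or not ready.
    @Component
    public class DependencyHealthIndicator implements HealthIndicator {

        @Override
        public Health health() {
            if (dependencyIsReachable()) {
                return Health.up().build();
            }
            return Health.down().withDetail("dependency", "not reachable").build();
        }

        private boolean dependencyIsReachable() {
            // Placeholder for a real check, for example pinging a database or a remote service.
            return true;
        }
    }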

Viewing the Booster source code and README

To view the source code and README file of this booster, download and extract the ZIP file with the booster. To get the download link of the ZIP file, see the Creating and Deploying a Booster Using OpenShift Online chapter of the Getting Started with Application Development on OpenShift guide.

5.4.1. Deploying the Health Check Booster to OpenShift Online

Use one of the following options to execute the Health Check booster on OpenShift Online.

Although each method uses the same oc commands to deploy your application, using developers.redhat.com/launch provides an automated booster deployment workflow that executes the oc commands for you.

Deploying the Booster Using developers.redhat.com/launch
Prerequisites
Procedure
  • Navigate to the OpenShift Online URL in a browser and log in.

  • Follow on-screen instructions to create and launch your booster in Spring Boot.

Authenticating the oc CLI Client

To work with boosters on OpenShift Online using the oc command-line client, you need to authenticate the client using the token provided by the OpenShift Online web interface.

Prerequisites
Procedure
  1. Navigate to the OpenShift Online URL in a browser.

  2. Click on the question mark icon in the top right-hand corner of the Web console, next to your user name.

  3. Select Command Line Tools in the drop-down menu.

  4. Find the text box that contains the oc login …​ command with the hidden token, and click the button next to it to copy its content to your clipboard.

  5. Paste the command into a terminal application. The command uses your authentication token to authenticate your oc CLI client with your OpenShift Online account.

    $ oc login OPENSHIFT_URL --token=MYTOKEN
Deploying the Health Check Booster using the oc CLI Client
Prerequisites
Procedure
  1. Create a new project.

    $ oc new-project MY_PROJECT_NAME
  2. Navigate to the root directory of your booster.

  3. Use Maven to start the deployment to OpenShift.

    $ mvn clean fabric8:deploy -Popenshift

    This command uses the Fabric8 Maven Plugin to launch the S2I process on OpenShift and to start the pod.

  4. Check the status of your booster and ensure your pod is running.

    $ oc get pods -w
    NAME                             READY     STATUS      RESTARTS   AGE
    MY_APP_NAME-1-aaaaa               1/1       Running     0          58s
    MY_APP_NAME-s2i-1-build           0/1       Completed   0          2m

    Your MY_APP_NAME-1-aaaaa pod should have a status of Running once it is fully deployed and started. You should also wait for your pod to be ready before proceeding, which is shown in the READY column. For example, MY_APP_NAME-1-aaaaa is ready when the READY column is 1/1.

  5. Once your booster is deployed and started, determine its route.

    Example Route Information
    $ oc get routes
    NAME                 HOST/PORT                                                     PATH      SERVICES        PORT      TERMINATION
    MY_APP_NAME         MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME      MY_APP_NAME      8080

    The route information of a pod gives you the base URL which you use to access it. In the example above, you would use http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME as the base URL to access the application.

5.4.2. Deploying the Health Check Booster to Single-node OpenShift Cluster

Use one of the following options to execute the Health Check booster locally on Single-node OpenShift Cluster:

  • Using Fabric8 Launcher

  • Using the oc CLI client

Although each method uses the same oc commands to deploy your application, using Fabric8 Launcher provides an automated booster deployment workflow that executes the oc commands for you.

Getting the Fabric8 Launcher Tool URL and Credentials

You need the Fabric8 Launcher tool URL and user credentials to create and deploy boosters on Single-node OpenShift Cluster. This information is provided when the Single-node OpenShift Cluster is started.

Prerequisites
Procedure
  1. Navigate to the console where you started Single-node OpenShift Cluster.

  2. Check the console output for the URL and user credentials you can use to access the running Fabric8 Launcher:

    Example Console Output from a Single-node OpenShift Cluster Startup
    ...
    -- Removing temporary directory ... OK
    -- Server Information ...
       OpenShift server started.
       The server is accessible via web console at:
           https://192.168.42.152:8443
    
       You are logged in as:
           User:     developer
           Password: developer
    
       To login as administrator:
           oc login -u system:admin
Deploying the Booster Using the Fabric8 Launcher Tool
Prerequisites
Procedure
  • Navigate to the Single-node OpenShift Cluster URL in a browser and log in.

  • Follow on-screen instructions to create and launch your booster in Spring Boot.

Authenticating the oc CLI Client

To work with boosters on Single-node OpenShift Cluster using the oc command-line client, you need to authenticate the client using the token provided by the Single-node OpenShift Cluster web interface.

Prerequisites
Procedure
  1. Navigate to the Single-node OpenShift Cluster URL in a browser.

  2. Click on the question mark icon in the top right-hand corner of the Web console, next to your user name.

  3. Select Command Line Tools in the drop-down menu.

  4. Find the text box that contains the oc login …​ command with the hidden token, and click the button next to it to copy its content to your clipboard.

  5. Paste the command into a terminal application. The command uses your authentication token to authenticate your oc CLI client with your Single-node OpenShift Cluster account.

    $ oc login OPENSHIFT_URL --token=MYTOKEN
Deploying the Health Check Booster using the oc CLI Client
Prerequisites
Procedure
  1. Create a new project.

    $ oc new-project MY_PROJECT_NAME
  2. Navigate to the root directory of your booster.

  3. Use Maven to start the deployment to OpenShift.

    $ mvn clean fabric8:deploy -Popenshift

    This command uses the Fabric8 Maven Plugin to launch the S2I process on OpenShift and to start the pod.

  4. Check the status of your booster and ensure your pod is running.

    $ oc get pods -w
    NAME                             READY     STATUS      RESTARTS   AGE
    MY_APP_NAME-1-aaaaa               1/1       Running     0          58s
    MY_APP_NAME-s2i-1-build           0/1       Completed   0          2m

    Your MY_APP_NAME-1-aaaaa pod should have a status of Running once it is fully deployed and started. You should also wait for your pod to be ready before proceeding, which is shown in the READY column. For example, MY_APP_NAME-1-aaaaa is ready when the READY column is 1/1.

  5. Once your booster is deployed and started, determine its route.

    Example Route Information
    $ oc get routes
    NAME                 HOST/PORT                                                     PATH      SERVICES        PORT      TERMINATION
    MY_APP_NAME         MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME      MY_APP_NAME      8080

    The route information of a pod gives you the base URL which you use to access it. In the example above, you would use http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME as the base URL to access the application.

5.4.3. Deploying the Health Check Booster to OpenShift Container Platform

The process of creating and deploying boosters to OpenShift Container Platform is similar to OpenShift Online:

Prerequisites
Procedure

5.4.4. Interacting with the Unmodified Spring Boot Booster

Once you have the Spring Boot booster deployed, you will have a service called MY_APP_NAME running that exposes the following REST endpoints:

/api/greeting

This endpoint returns a name as a String.

/api/stop

This endpoint forces the service to become unresponsive, which simulates a failure in the service.

The following steps demonstrate how to verify the service availability and simulate a failure. This failure of an available service causes the OpenShift self-healing capabilities to be triggered on the service.
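
For illustration only, the behavior described above could be modeled roughly as follows. This is not the booster's actual source code; the class and field names are hypothetical, and the sketch only shows how invoking /api/stop can flip a flag that makes the service stop responding successfully, so that the probes eventually fail.

Example Simulated Failure Endpoint (illustrative sketch)
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.atomic.AtomicBoolean;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingController {

    // Once set, the service reports itself as broken until the pod is restarted.
    private final AtomicBoolean stopped = new AtomicBoolean(false);

    @GetMapping("/api/greeting")
    public ResponseEntity<Map<String, String>> greeting() {
        if (stopped.get()) {
            // The health endpoint can report the same failure, so the
            // liveness probe fails and OpenShift eventually restarts the pod.
            return ResponseEntity.status(503).build();
        }
        return ResponseEntity.ok(Collections.singletonMap("content", "Hello, World!"));
    }

    @GetMapping("/api/stop")
    public ResponseEntity<Void> stop() {
        stopped.set(true);
        return ResponseEntity.ok().build();
    }
}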

The steps below use the command line to interact with the service. Alternatively, you can use the web interface to perform the same steps (see step 4).
  1. Use curl to execute a GET request against the MY_APP_NAME service. You can also use a browser to do this.

    $ curl http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME/api/greeting
    {"content":"Hello, World!"}
  2. Invoke the /api/stop endpoint and verify the availability of the /api/greeting endpoint shortly after that.

    Invoking the /api/stop endpoint simulates an internal service failure and triggers the OpenShift self-healing capabilities. When invoking /api/greeting after simulating the failure, the service should return an Application is not available page.

    $ curl http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME/api/stop
    
    (followed by)
    
    $ curl http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME/api/greeting
    
    <html>
      <head>
      ...
      </head>
      <body>
        <div>
          <h1>Application is not available</h1>
          ...
        </div>
      </body>
    </html>
    Depending on when OpenShift removes the pod after you invoke the /api/stop endpoint, you may initially see a 404 error code. If you continue to invoke the /api/greeting endpoint, you will see the Application is not available page after OpenShift removes the pod.
  3. Use oc get pods -w to continuously watch the self-healing capabilities in action.

    While invoking the service failure, you can watch the self-healing capabilities in action on the OpenShift console or with the oc client tools. You should see the number of pods in a READY state move to zero (0/1) and, after a short period (less than one minute), move back up to one (1/1). In addition, the RESTARTS count increases every time you invoke the service failure.

    $ oc get pods -w
    NAME                           READY     STATUS    RESTARTS   AGE
    MY_APP_NAME-1-26iy7   0/1       Running   5          18m
    MY_APP_NAME-1-26iy7   1/1       Running   5         19m
  4. Optional: Use the web interface to invoke the service.

    As an alternative to interacting with the service from the terminal, you can use the web interface provided by the service to invoke the different methods and watch the service move through the life cycle phases.

    http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME
  5. Optional: Use the web console to view the log output generated by the application at each stage of the self-healing process.

    1. Navigate to your project.

    2. On the sidebar, click on Monitoring.

    3. In the upper right-hand corner of the screen, click on Events to display the log messages.

    4. Optional: Click View Details to display a detailed view of the Event log.

The health check application generates the following messages:

Message Status

Unhealthy

Readiness probe failed. This message is expected and indicates that the simulated failure of the /api/greeting endpoint has been detected and the self-healing process starts.

Killing

The unavailable Docker container running the service is being killed before being re-created.

Pulling

Downloading the latest version of the Docker image to re-create the container.

Pulled

Docker image downloaded successfully.

Created

Docker container has been successfully created.

Started

Docker container is ready to handle requests.

5.4.5. Running the Health Check Booster Integration Tests

This booster includes a self-contained set of integration tests. When run inside an OpenShift project, the tests:

  • deploy a test instance of the application to the project,

  • execute the individual tests on that instance,

  • remove all instances of the application from the project when the testing is done.

Executing integration tests removes all existing instances of the booster application from the target OpenShift project. To avoid accidentally removing your booster application, ensure that you create and select a separate OpenShift project to execute the tests.

Prerequisites
  • the oc client authenticated

  • an empty project in your OpenShift namespace

Procedure

Execute the following command to run the integration tests:

$ mvn clean verify -Popenshift,openshift-it

5.5. Circuit Breaker Mission - Spring Boot Booster

Limitation: Run this booster on a Single-node OpenShift Cluster. You can also use a manual workflow to deploy this booster to OpenShift Online Pro and OpenShift Container Platform. This booster is not currently available on OpenShift Online Starter.

Mission proficiency level: Foundational.

The Circuit Breaker Mission demonstrates a generic pattern for reporting the failure of a service and then limiting access to the failed service until it becomes available to handle requests. This helps prevent cascading failure in other services that depend on the failed services for functionality.

This mission shows you how to implement a Circuit Breaker and Fallback pattern in your services.

5.5.1. About Circuit Breaker

The Circuit Breaker is a pattern intended to mitigate the impact of network failure and high latency on service architectures where services synchronously invoke other services. In such cases, if one of the services becomes unavailable due to network failure or incurs unusually high latency values due to overwhelming traffic, other services attempting to call its endpoint may end up exhausting critical resources in an attempt to reach it, rendering themselves unusable. This condition is also known as cascading failure and can render the entire microservice architecture unusable.

Essentially, the Circuit Breaker acts as a proxy between a protected function and a remote function, and monitors the calls it proxies for failures. Once the failures reach a certain threshold, the circuit breaker trips, and all further calls to the circuit breaker return an error or a predefined fallback response, without the protected call being made at all. The Circuit Breaker usually also contains an error reporting mechanism that notifies you when the Circuit Breaker trips.

5.5.2. Why Circuit Breaker is Important

In an architecture where multiple services depend on each other for functionality, a failure in one service can rapidly propagate to its dependent services, causing the entire architecture to collapse. Implementing a Circuit Breaker pattern helps prevent this. With the Circuit Breaker pattern implemented, a service client invokes a remote service endpoint via a proxy at regular intervals. If the calls to the remote service endpoint fail repeatedly and consistently, the Circuit Breaker trips, making all calls to the service fail immediately for a set timeout period and returning a predefined fallback response. When the timeout period expires, a limited number of test calls are allowed to pass through to the remote service to determine whether it has healed or remains unavailable. If these test calls fail, the Circuit Breaker keeps the service unavailable and keeps returning the fallback responses to incoming calls. If the test calls succeed, the Circuit Breaker closes, fully enabling traffic to reach the remote service again.
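
With Spring Boot, this pattern is commonly implemented using Netflix Hystrix. The following is a minimal sketch, assuming the Spring Cloud Hystrix starter is on the classpath and the application class is annotated with @EnableCircuitBreaker; the class name and service URL are placeholders, not the booster's actual code.

Example Hystrix-Protected Call with a Fallback (illustrative sketch)
import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class NameService {

    private final RestTemplate restTemplate = new RestTemplate();

    // The remote call is wrapped in a Hystrix command. Repeated failures trip
    // the circuit, and further calls go straight to the fallback method.
    @HystrixCommand(fallbackMethod = "getFallbackName")
    public String getName() {
        // Placeholder URL for the remote name service.
        return restTemplate.getForObject("http://MY_APP_NAME-name/api/name", String.class);
    }

    // Predefined fallback response used while the circuit is open or when an
    // individual call fails.
    public String getFallbackName() {
        return "Fallback";
    }
}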

Viewing the Booster source code and README

To view the source code and README file of this booster, download and extract the ZIP file with the booster. To get the download link of the ZIP file, see the Creating and Deploying a Booster Using OpenShift Online chapter of the Getting Started with Application Development on OpenShift.

Table 5. Design Tradeoffs
Pros Cons
  • Enables a service to handle the failure of other services it invokes.

  • Optimizing the timeout values can be challenging:

    • Larger-than-necessary timeout values may generate excessive latency.

    • Smaller-than-necessary timeout values may introduce false positives.

5.5.3. Deploying the Circuit Breaker Booster to OpenShift Online

Use one of the following options to execute the Circuit Breaker booster on OpenShift Online.

Although each method uses the same oc commands to deploy your application, using developers.redhat.com/launch provides an automated booster deployment workflow that executes the oc commands for you.

Deploying the Booster Using developers.redhat.com/launch
Prerequisites
Procedure
  • Navigate to the OpenShift Online URL in a browser and log in.

  • Follow on-screen instructions to create and launch your booster in Spring Boot.

Authenticating the oc CLI Client

To work with boosters on OpenShift Online using the oc command-line client, you need to authenticate the client using the token provided by the OpenShift Online web interface.

Prerequisites
Procedure
  1. Navigate to the OpenShift Online URL in a browser.

  2. Click on the question mark icon in the top right-hand corner of the Web console, next to your user name.

  3. Select Command Line Tools in the drop-down menu.

  4. Find the text box that contains the oc login …​ command with the hidden token, and click the button next to it to copy its content to your clipboard.

  5. Paste the command into a terminal application. The command uses your authentication token to authenticate your oc CLI client with your OpenShift Online account.

    $ oc login OPENSHIFT_URL --token=MYTOKEN
Deploying the Circuit Breaker Booster using the oc CLI Client
Prerequisites
Procedure
  1. Create a new project.

    $ oc new-project MY_PROJECT_NAME
  2. Navigate to the root directory of your booster.

  3. Use Maven to start the deployment to OpenShift.

    $ mvn clean fabric8:deploy -Popenshift

    This command uses the Fabric8 Maven Plugin to launch the S2I process on OpenShift and to start the pod.

  4. Check the status of your booster and ensure your pod is running.

    $ oc get pods -w
    NAME                             READY     STATUS      RESTARTS   AGE
    MY_APP_NAME-greeting-1-aaaaa     1/1       Running   0           17s
    MY_APP_NAME-greeting-1-deploy    0/1       Completed 0           22s
    MY_APP_NAME-name-1-aaaaa         1/1       Running   0           14s
    MY_APP_NAME-name-1-deploy        0/1       Completed 0           28s

    Both the MY_APP_NAME-greeting-1-aaaaa and MY_APP_NAME-name-1-aaaaa pods should have a status of Running once they are fully deployed and started. You should also wait for your pods to be ready before proceeding, which is shown in the READY column. For example, MY_APP_NAME-greeting-1-aaaaa is ready when the READY column is 1/1.

  5. Once your booster is deployed and started, determine its route.

    Example Route Information
    $ oc get routes
    NAME                 HOST/PORT                                                     PATH      SERVICES        PORT      TERMINATION
    MY_APP_NAME-greeting   MY_APP_NAME-greeting-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME   MY_APP_NAME-greeting   8080                    None
    MY_APP_NAME-name       MY_APP_NAME-name-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME       MY_APP_NAME-name       8080                    None

    The route information of a pod gives you the base URL which you use to access it. In the example above, you would use http://MY_APP_NAME-greeting-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME as the base URL to access the greeting service.

5.5.4. Deploying the Circuit Breaker Booster to Single-node OpenShift Cluster

Use one of the following options to execute the Circuit Breaker booster locally on Single-node OpenShift Cluster:

  • Using Fabric8 Launcher

  • Using the oc CLI client

Although each method uses the same oc commands to deploy your application, using Fabric8 Launcher provides an automated booster deployment workflow that executes the oc commands for you.

Getting the Fabric8 Launcher Tool URL and Credentials

You need the Fabric8 Launcher tool URL and user credentials to create and deploy boosters on Single-node OpenShift Cluster. This information is provided when the Single-node OpenShift Cluster is started.

Prerequisites
Procedure
  1. Navigate to the console where you started Single-node OpenShift Cluster.

  2. Check the console output for the URL and user credentials you can use to access the running Fabric8 Launcher:

    Example Console Output from a Single-node OpenShift Cluster Startup
    ...
    -- Removing temporary directory ... OK
    -- Server Information ...
       OpenShift server started.
       The server is accessible via web console at:
           https://192.168.42.152:8443
    
       You are logged in as:
           User:     developer
           Password: developer
    
       To login as administrator:
           oc login -u system:admin
Deploying the Booster Using the Fabric8 Launcher Tool
Prerequisites
Procedure
  • Navigate to the Single-node OpenShift Cluster URL in a browser and log in.

  • Follow on-screen instructions to create and launch your booster in Spring Boot.

Authenticating the oc CLI Client

To work with boosters on Single-node OpenShift Cluster using the oc command-line client, you need to authenticate the client using the token provided by the Single-node OpenShift Cluster web interface.

Prerequisites
Procedure
  1. Navigate to the Single-node OpenShift Cluster URL in a browser.

  2. Click on the question mark icon in the top right-hand corner of the Web console, next to your user name.

  3. Select Command Line Tools in the drop-down menu.

  4. Find the text box that contains the oc login …​ command with the hidden token, and click the button next to it to copy its content to your clipboard.

  5. Paste the command into a terminal application. The command uses your authentication token to authenticate your oc CLI client with your Single-node OpenShift Cluster account.

    $ oc login OPENSHIFT_URL --token=MYTOKEN
Deploying the Circuit Breaker Booster using the oc CLI Client
Prerequisites
Procedure
  1. Create a new project.

    $ oc new-project MY_PROJECT_NAME
  2. Navigate to the root directory of your booster.

  3. Use Maven to start the deployment to OpenShift.

    $ mvn clean fabric8:deploy -Popenshift

    This command uses the Fabric8 Maven Plugin to launch the S2I process on OpenShift and to start the pod.

  4. Check the status of your booster and ensure your pod is running.

    $ oc get pods -w
    NAME                             READY     STATUS      RESTARTS   AGE
    MY_APP_NAME-greeting-1-aaaaa     1/1       Running   0           17s
    MY_APP_NAME-greeting-1-deploy    0/1       Completed 0           22s
    MY_APP_NAME-name-1-aaaaa         1/1       Running   0           14s
    MY_APP_NAME-name-1-deploy        0/1       Completed 0           28s

    Both the MY_APP_NAME-greeting-1-aaaaa and MY_APP_NAME-name-1-aaaaa pods should have a status of Running once they are fully deployed and started. You should also wait for your pods to be ready before proceeding, which is shown in the READY column. For example, MY_APP_NAME-greeting-1-aaaaa is ready when the READY column is 1/1.

  5. Once your booster is deployed and started, determine its route.

    Example Route Information
    $ oc get routes
    NAME                 HOST/PORT                                                     PATH      SERVICES        PORT      TERMINATION
    MY_APP_NAME-greeting   MY_APP_NAME-greeting-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME   MY_APP_NAME-greeting   8080                    None
    MY_APP_NAME-name       MY_APP_NAME-name-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME       MY_APP_NAME-name       8080                    None

    The route information of a pod gives you the base URL which you use to access it. In the example above, you would use http://MY_APP_NAME-greeting-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME as the base URL to access the greeting service.

5.5.5. Deploying the Circuit Breaker Booster to OpenShift Container Platform

The process of creating and deploying boosters to OpenShift Container Platform is similar to OpenShift Online:

Prerequisites
Procedure

5.5.6. Interacting with the Unmodified Spring Boot Circuit Breaker Booster

Once you have the Spring Boot booster deployed, you have the following services running:

MY_APP_NAME-name

Exposes the following endpoints:

  • the /api/name endpoint, which returns a name when this service is working, and an error when this service is set up to demonstrate failure.

  • the /api/state endpoint, which controls the behavior of the /api/name endpoint and determines whether the service works correctly or demonstrates failure.

MY_APP_NAME-greeting

Exposes the following endpoints:

  • the /api/greeting endpoint that you can call to get a personalized greeting response.

    When you call the /api/greeting endpoint, it issues a call against the /api/name endpoint of the MY_APP_NAME-name service as part of processing your request. The call made against the /api/name endpoint is protected by the Circuit Breaker.

    If the remote endpoint is available, the name service responds with an HTTP code 200 (OK) and you receive the following greeting from the /api/greeting endpoint:

    {"content":"Hello, World!"}

    If the remote endpoint is unavailable, the name service responds with an HTTP code 500 (Internal server error) and you receive a predefined fallback response from the /api/greeting endpoint:

    {"content":"Hello, Fallback!"}
  • the /api/cb-state endpoint, which returns the state of the Circuit Breaker. The state can be:

    • open: the circuit breaker is preventing requests from reaching the failed service,

    • closed: the circuit breaker is allowing requests to reach the service.
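
For reference, the name service behavior described above, an /api/name endpoint whose success or failure is controlled through /api/state, could be sketched as follows. The class name and payload handling are hypothetical simplifications, not the booster's actual implementation.

Example Name Service State Toggle (illustrative sketch)
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class NameController {

    // "ok" serves requests normally, "fail" simulates a broken service.
    private final AtomicReference<String> state = new AtomicReference<>("ok");

    @GetMapping("/api/name")
    public ResponseEntity<String> name() {
        if ("fail".equals(state.get())) {
            // The greeting service's circuit breaker counts these failures.
            return ResponseEntity.status(500).body("Name service is down");
        }
        return ResponseEntity.ok("World");
    }

    @PutMapping("/api/state")
    public Map<String, String> setState(@RequestBody Map<String, String> body) {
        state.set(body.getOrDefault("state", "ok"));
        return Collections.singletonMap("state", state.get());
    }
}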

The following steps demonstrate how to verify the availability of the service, simulate a failure and receive a fallback response.

  1. Use curl to execute a GET request against the MY_APP_NAME-greeting service. You can also use the Invoke button in the web interface to do this.

    $ curl http://MY_APP_NAME-greeting-MY_PROJECT_NAME.LOCAL_OPENSHIFT_HOSTNAME/api/greeting
    {"content":"Hello, World!"}
  2. To simulate the failure of the MY_APP_NAME-name service you can:

    • use the Toggle button in the web interface.

    • scale the number of replicas of the pod running the MY_APP_NAME-name service down to 0.

    • execute an HTTP PUT request against the /api/state endpoint of the MY_APP_NAME-name service to set its state to fail.

      $ curl -X PUT -H "Content-Type: application/json" -d '{"state": "fail"}' http://MY_APP_NAME-name-MY_PROJECT_NAME.LOCAL_OPENSHIFT_HOSTNAME/api/state
  3. Invoke the /api/greeting endpoint. When several requests on the /api/name endpoint fail:

    1. the Circuit Breaker opens,

    2. the state indicator in the web interface changes from CLOSED to OPEN,

    3. the Circuit Breaker issues a fallback response when you invoke the /api/greeting endpoint:

      $ curl http://MY_APP_NAME-greeting-MY_PROJECT_NAME.LOCAL_OPENSHIFT_HOSTNAME/api/greeting
      {"content":"Hello, Fallback!"}
  4. Restore the MY_APP_NAME-name service to availability. To do this, you can:

    • use the Toggle button in the web interface.

    • scale the number of replicas of the pod running the MY_APP_NAME-name service back up to 1.

    • execute an HTTP PUT request against the /api/state endpoint of the MY_APP_NAME-name service to set its state back to ok.

      $ curl -X PUT -H "Content-Type: application/json" -d '{"state": "ok"}' http://MY_APP_NAME-name-MY_PROJECT_NAME.LOCAL_OPENSHIFT_HOSTNAME/api/state
  5. Invoke the /api/greeting endpoint again. When several requests on the /api/name endpoint succeed:

    1. the Circuit Breaker closes,

    2. the state indicator in the web interface changes from OPEN to CLOSED,

    3. the Circuit Breaker returns the Hello, World! greeting when you invoke the /api/greeting endpoint:

      $ curl http://MY_APP_NAME-greeting-MY_PROJECT_NAME.LOCAL_OPENSHIFT_HOSTNAME/api/greeting
      {"content":"Hello, World!"}

5.5.7. Running the Circuit Breaker Booster Integration Tests

This booster includes a self-contained set of integration tests. When run inside an OpenShift project, the tests:

  • deploy a test instance of the application to the project,

  • execute the individual tests on that instance,

  • remove all instances of the application from the project when the testing is done.

Executing integration tests removes all existing instances of the booster application from the target OpenShift project. To avoid accidentally removing your booster application, ensure that you create and select a separate OpenShift project to execute the tests.

Prerequisites
  • the oc client authenticated

  • an empty project in your OpenShift namespace

Procedure

Execute the following command to run the integration tests:

$ mvn clean verify -Popenshift,openshift-it

5.5.8. Using Hystrix Dashboard to Monitor the Circuit Breaker

Hystrix Dashboard lets you easily monitor the health of your services in real time by aggregating Hystrix metrics data from an event stream and displaying them on one screen. For more detail, see the Hystrix Dashboard wiki page.

You must have the Circuit Breaker booster application deployed before proceeding with the steps below.
  1. Log in to your Single-node OpenShift Cluster.

    $ oc login OPENSHIFT_URL --token=MYTOKEN
  2. To access the Web console, use your browser to navigate to your Single-node OpenShift Cluster URL.

  3. Navigate to the project that contains your Circuit Breaker application.

    $ oc project MY_PROJECT_NAME
  4. Import the YAML template for the Hystrix Dashboard application. You can do this by clicking Add to Project, then selecting the Import YAML / JSON tab, and copying the contents of the YAML file into the text box. Alternatively, you can execute the following command:

    $ oc create -f https://raw.githubusercontent.com/snowdrop/openshift-templates/master/hystrix-dashboard/hystrix-dashboard.yml
  5. Click the Create button to create the Hystrix Dashboard application based on the template. Alternatively, you can execute the following command.

    $ oc new-app --template=hystrix-dashboard
  6. Wait for the pod containing Hystrix Dashboard to deploy.

  7. Obtain the route of your Hystrix Dashboard application.

    $ oc get route hystrix-dashboard
    NAME                HOST/PORT                                                    PATH      SERVICES            PORT      TERMINATION   WILDCARD
    hystrix-dashboard   hystrix-dashboard-MY_PROJECT_NAME.LOCAL_OPENSHIFT_HOSTNAME                 hystrix-dashboard   <all>                   None
  8. To access the Dashboard, open the Dashboard application route URL in your browser. Alternatively, you can navigate to the Overview screen in the Web console and click the route URL in the header above the pod containing your Hystrix Dashboard application.

  9. To use the Dashboard to monitor the MY_APP_NAME-greeting service, replace the default event stream address with the following address and click the Monitor Stream button.

    http://MY_APP_NAME-greeting-MY_PROJECT_NAME.LOCAL_OPENSHIFT_HOSTNAME/hystrix.stream

5.5.9. Circuit Breaker Resources

Follow the links below for more background information on the design principles behind the Circuit Breaker pattern.

5.6. Secured Mission - Spring Boot Booster

Limitation: Run this booster on a Single-node OpenShift Cluster. You can also use a manual workflow to deploy this booster to OpenShift Online Pro and OpenShift Container Platform. This booster is not currently available on OpenShift Online Starter.

Mission proficiency level: Advanced.

What the Secured Booster does

The Secured booster secures a REST endpoint using Red Hat SSO. (This booster expands on the REST API Level 0 booster).

Red Hat SSO

  • Implements the Open ID Connect protocol which is an extension of the OAuth 2.0 specification.

  • Issues access tokens to provide clients with various access rights to secured resources.

Securing an application with SSO enables you to add security to your applications while centralizing the security configuration.

This mission comes with Red Hat SSO pre-configured for demonstration purposes; it does not explain its principles, usage, or configuration. Before using this mission, ensure that you are familiar with the basic concepts related to Red Hat SSO.
Viewing the Booster source code and README

To view the source code and README file of this booster, download and extract the ZIP file with the booster. To get the download link of the ZIP file, see the Creating and Deploying a Booster Using OpenShift Online chapter of the Getting Started with Application Development on OpenShift.

5.6.1. Configuring Your Single-node OpenShift Cluster

Before you can run the Secured booster, you must update your Single-node OpenShift Cluster from the default configuration. You only need to do this once, because all the secured booster missions share the same Red Hat SSO and Single-node OpenShift Cluster setup.

Before you can use your Single-node OpenShift Cluster, you need to have it installed, configured, and running. You can find details on installing a Single-node OpenShift Cluster for your platform in Install and Configure the Fabric8 Launcher Tool.

Updating your Single-node OpenShift Cluster from the default Configuration

Red Hat SSO does not start unless you change the default configuration. The SSO booster currently works only with the CentOS base image. If you have already run the minishift command with a different memory setting and iso-url value, you must stop it and completely delete the ~/.minishift directory before running the startup sequence below.

  1. Delete the current Single-node OpenShift Cluster configuration and restart it with a new ISO.

$ minishift delete
$ rm -r ~/.minishift
$ minishift start --memory=6144 --iso-url=centos

5.6.2. Project Structure

The SSO booster project contains:

  • the sources for the Greeting service, which is the service we are going to secure

  • a template file (service.sso.yaml) to stand up the SSO server

  • the Keycloak adapter configuration to secure the service

5.6.3. Standing up Red Hat SSO

The service.sso.yaml file contains all OpenShift configuration items to stand up a pre-configured Red Hat SSO server. The SSO server configuration has been simplified for the sake of this exercise and does provide an out-of-the-box configuration, with pre-configured users and security settings.

It is not recommended to use this SSO configuration in production. Specifically, the simplifications made to the booster security configuration impact the ability to use it in a production environment.
Table 6. SSO Booster Simplifications
Change Reason Recommendation

The default configuration includes both public and private keys in the yaml configuration files.

We did this because the end user can deploy the Red Hat SSO module and have it in a usable state without needing to know the internals or how to configure Red Hat SSO.

In production, do not store private keys under source control. They should be added by the server administrator.

The configured clients accept any callback url.

To avoid having a custom configuration for each runtime, we skip the callback URL verification mandated by the OAuth2 specification.

An application-specific callback URL should be provided with a valid domain name.

Clients do not require SSL/TLS and the secured applications are not exposed over HTTPS.

The boosters are simplified by not requiring certificates generated for each runtime.

In production a secure application should use HTTPS rather than plain HTTP.

The token timeout has been increased to 10 minutes from the default of 1 minute.

Provides a better user experience when working with the command line examples.

From a security perspective, the window an attacker would have to guess the access token is extended. It is recommended to keep this window short as it makes it much harder for a potential attacker to guess the current token.

5.6.4. Red Hat SSO Realm Model

The master realm is used to secure this booster. There are two pre-configured application client definitions that provide a model for command line clients and the secured REST endpoint.

There are also two pre-configured users in the Red Hat SSO master realm that can be used to validate various authentication and authorization outcomes: admin and alice.

Red Hat SSO Users

The realm model for the secured boosters includes two users:

admin

The admin user has a password of admin and is the realm administrator. This user has full access to the Red Hat SSO administration console, but none of the role mappings that are required to access the secured endpoints. You can use this user to illustrate the behavior of an authenticated, but unauthorized user.

alice

The alice user has a password of password and is the canonical application user. This user will demonstrate successful authenticated and authorized access to the secured endpoints. An example representation of the role mappings is provided in this decoded JWT bearer token:

{
  "jti": "0073cfaa-7ed6-4326-ac07-c108d34b4f82",
  "exp": 1510162193,
  "nbf": 0,
  "iat": 1510161593,
  "iss": "https://secure-sso-sso.LOCAL_OPENSHIFT_HOSTNAME/auth/realms/master", (1)
  "aud": "demoapp",
  "sub": "c0175ccb-0892-4b31-829f-dda873815fe8",
  "typ": "Bearer",
  "azp": "demoapp",
  "nonce": "90ff5d1a-ba44-45ae-a413-50b08bf4a242",
  "auth_time": 1510161591,
  "session_state": "98efb95a-b355-43d1-996b-0abcb1304352",
  "acr": "1",
  "client_session": "5962112c-2b19-461e-8aac-84ab512d2a01",
  "allowed-origins": [
    "*"
  ],
  "realm_access": {
    "roles": [ (2)
      "booster-admin"
    ]
  },
  "resource_access": { (3)
    "secured-booster-endpoint": {
      "roles": [
        "booster-admin" (4)
      ]
    },
    "account": {
      "roles": [
        "manage-account",
        "view-profile"
      ]
    }
  },
  "name": "Alice InChains",
  "preferred_username": "alice", (5)
  "given_name": "Alice",
  "family_name": "InChains",
  "email": "alice@keycloak.org"
}
1 The iss field corresponds to the Red Hat SSO realm instance URL that issues the token. This must be configured in the secured endpoint deployments in order for the token to be verified.
2 The roles object provides the roles that have been granted to the user at the global realm level. In this case alice has been granted the booster-admin role. We will see that the secured endpoint will look to the realm level for authorized roles.
3 The resource_access object contains resource specific role grants. Under this object you will find an object for each of the secured endpoints.
4 The resource_access.secured-booster-endpoint.roles object contains the roles granted to alice for the secured-booster-endpoint resource.
5 The preferred_username field provides the username that was used to generate the access token.
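
If you want to inspect these claims yourself, the payload of a JWT is plain Base64URL-encoded JSON, so it can be decoded locally without any SSO-specific library. The following standalone sketch assumes Java 8 and prints the claims of a token passed as the first command-line argument. Note that decoding only reveals the claims; it does not verify the token signature.

Example Local JWT Payload Decoding (illustrative sketch)
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DecodeToken {

    public static void main(String[] args) {
        // A JWT has three dot-separated parts: header.payload.signature.
        String token = args[0];
        String payload = token.split("\\.")[1];

        // The payload is Base64URL-encoded JSON containing the claims,
        // including the realm_access and resource_access role mappings.
        byte[] json = Base64.getUrlDecoder().decode(payload);
        System.out.println(new String(json, StandardCharsets.UTF_8));
    }
}
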
The Application Clients

The OAuth 2.0 specification allows you to define a role for application clients that access secured resources on behalf of resource owners. The master realm has the following application clients defined:

demoapp

This is a confidential-type client with a client secret. It is used to obtain an access token containing grants for the alice user, which enable alice to access the WildFly Swarm, Eclipse Vert.x, and Spring Boot based REST booster deployments.

secured-booster-endpoint

The secured-booster-endpoint is a bearer-only type of client that requires a booster-admin role for accessing the associated resources, specifically the Greeting service.

SSO Adapter Configuration

The SSO adapter is the client-side component (the client of the SSO server) that enforces security on the web resources; in this specific case, the Greeting service.

Both the SSO adapter and endpoint security are configured in src/main/resources/application.properties.

Example application.properties file
# Adapter configuration
keycloak.realm=${realm:master} (1)
keycloak.realm-key=...
keycloak.auth-server-url=${sso.auth.server.url} (2)
keycloak.resource=${client.id:secured-booster-endpoint} (3)
keycloak.credentials.secret=${secret:1daa57a2-b60e-468b-a3ac-25bd2dc2eadc} (4)
keycloak.use-resource-role-mappings=true (5)
keycloak.bearer-only=true (6)
# Endpoint security configuration
keycloak.securityConstraints[0].securityCollections[0].name=admin stuff (7)
keycloak.securityConstraints[0].securityCollections[0].authRoles[0]=booster-admin (8)
keycloak.securityConstraints[0].securityCollections[0].patterns[0]=/api/greeting (9)
1 The security realm to be used.
2 The address of the Red Hat SSO server (Interpolation at build time).
3 The actual keycloak client configuration.
4 Secret to access authentication server.
5 Check the token for application level role mappings for the user.
6 If enabled, the adapter does not attempt to authenticate users, but only verifies bearer tokens.
7 A simple name for the security constraint.
8 The role required to access the secured endpoint.
9 The path pattern of the secured endpoint.
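
Because security is enforced declaratively by the adapter configuration above, the Greeting service itself can remain a plain Spring MVC controller. The sketch below is hypothetical and only illustrates that no security code is needed in the endpoint; requests without a bearer token carrying the booster-admin role are rejected before they reach it.

Example Secured Greeting Endpoint (illustrative sketch)
import java.util.Collections;
import java.util.Map;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingController {

    // No security annotations or checks here: the Keycloak adapter configured
    // in application.properties protects the /api/greeting pattern and requires
    // the booster-admin role before a request reaches this method.
    @GetMapping("/api/greeting")
    public Map<String, String> greeting(
            @RequestParam(value = "name", defaultValue = "World") String name) {
        return Collections.singletonMap("content", "Hello, " + name + "!");
    }
}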

5.6.5. Deploying the Secured booster to Single-node OpenShift Cluster

Getting the Fabric8 Launcher Tool URL and Credentials

You need the Fabric8 Launcher tool URL and user credentials to create and deploy boosters on Single-node OpenShift Cluster. This information is provided when the Single-node OpenShift Cluster is started.

Prerequisites
Procedure
  1. Navigate to the console where you started Single-node OpenShift Cluster.

  2. Check the console output for the URL and user credentials you can use to access the running Fabric8 Launcher:

    Example Console Output from a Single-node OpenShift Cluster Startup
    ...
    -- Removing temporary directory ... OK
    -- Server Information ...
       OpenShift server started.
       The server is accessible via web console at:
           https://192.168.42.152:8443
    
       You are logged in as:
           User:     developer
           Password: developer
    
       To login as administrator:
           oc login -u system:admin
Creating the Secured Booster Using Fabric8 Launcher
Prerequisites
Procedure
  • Navigate to the Fabric8 Launcher URL in a browser and log in.

  • Follow the on-screen instructions to create your booster in Spring Boot. When asked about which deployment type, select I will build and run locally.

  • Follow on-screen instructions.

    When done, click the Download as ZIP file button and store the file on your hard drive.

Authenticating the oc CLI Client

To work with boosters on Single-node OpenShift Cluster using the oc command-line client, you need to authenticate the client using the token provided by the Single-node OpenShift Cluster web interface.

Prerequisites
Procedure
  1. Navigate to the Single-node OpenShift Cluster URL in a browser.

  2. Click on the question mark icon in the top right-hand corner of the Web console, next to your user name.

  3. Select Command Line Tools in the drop-down menu.

  4. Find the text box that contains the oc login …​ command with the hidden token, and click the button next to it to copy its content to your clipboard.

  5. Paste the command into a terminal application. The command uses your authentication token to authenticate your oc CLI client with your Single-node OpenShift Cluster account.

    $ oc login OPENSHIFT_URL --token=MYTOKEN
Deploying the Secured booster using the oc CLI client
Prerequisites
Procedure
  1. Create a new project.

    $ oc new-project MY_PROJECT_NAME
  2. Navigate to the root directory of your booster.

  3. Deploy the Red Hat SSO server using the service.sso.yaml file from your booster ZIP file:

    $ oc create -f service.sso.yaml
  4. Use Maven to start the deployment to Single-node OpenShift Cluster.

    $ mvn clean fabric8:deploy -Popenshift -DskipTests \
          -DSSO_AUTH_SERVER_URL=$(oc get route secure-sso -o jsonpath='{"https://"}{.spec.host}{"/auth\n"}')

    This command uses the Fabric8 Maven Plugin to launch the S2I process on Single-node OpenShift Cluster and to start the pod.

This process generates the uberjar file as well as the OpenShift resources and deploys them to the current project on your Single-node OpenShift Cluster server.

5.6.6. Deploying the Secured Booster to OpenShift Container Platform

In addition to the Single-node OpenShift Cluster, you can create and deploy the booster on OpenShift Container Platform with only minor differences. The most important difference is that you need to create the booster application on Single-node OpenShift Cluster before you can deploy it with OpenShift Container Platform.

Prerequisites
Authenticating the oc CLI Client

To work with boosters on OpenShift Container Platform using the oc command-line client, you need to authenticate the client using the token provided by the OpenShift Container Platform web interface.

Prerequisites
  • An account at OpenShift Container Platform.

Procedure
  1. Navigate to the OpenShift Container Platform URL in a browser.

  2. Click on the question mark icon in the top right-hand corner of the Web console, next to your user name.

  3. Select Command Line Tools in the drop-down menu.

  4. Find the text box that contains the oc login …​ command with the hidden token, and click the button next to it to copy its content to your clipboard.

  5. Paste the command into a terminal application. The command uses your authentication token to authenticate your oc CLI client with your OpenShift Container Platform account.

    $ oc login OPENSHIFT_URL --token=MYTOKEN
Deploying the Secured booster using the oc CLI client
Prerequisites
  • The booster application created using the Fabric8 Launcher tool on a Single-node OpenShift Cluster.

  • The oc client authenticated. For more information, see Authenticating the oc CLI Client.

Procedure
  1. Create a new project.

    $ oc new-project MY_PROJECT_NAME
  2. Navigate to the root directory of your booster.

  3. Deploy the Red Hat SSO server using the service.sso.yaml file from your booster ZIP file:

    $ oc create -f service.sso.yaml
  4. Use Maven to start the deployment to OpenShift Container Platform.

    $ mvn clean fabric8:deploy -Popenshift -DskipTests \
          -DSSO_AUTH_SERVER_URL=$(oc get route secure-sso -o jsonpath='{"https://"}{.spec.host}{"/auth\n"}')

    This command uses the Fabric8 Maven Plugin to launch the S2I process on OpenShift Container Platform and to start the pod.

This process generates the uberjar file as well as the OpenShift resources and deploys them to the current project on your OpenShift Container Platform server.

5.6.7. Authenticating to the Secured Booster API Endpoint

The Secured booster provides a default HTTP endpoint that accepts GET requests if the caller is authenticated and authorized. The client first authenticates against the Red Hat SSO server and then performs a GET request against the Secured booster using the access token returned by the authentication step.

Getting the Secured Booster API Endpoint

When using a client to interact with the booster, you must specify the Secured booster endpoint, which is the Greeting service.

Prerequisites
  • The Secured booster deployed and running.

  • The oc client authenticated.

Procedure
  1. In a terminal application, execute the oc get routes command.

    A sample output is shown in the following table:

    Example 1. List of Secured endpoints
    Name         Host/Port                                        Path      Services     Port      Termination
    secure-sso   secure-sso-myproject.LOCAL_OPENSHIFT_HOSTNAME              secure-sso   <all>     passthrough
    PROJECT_ID   PROJECT_ID-myproject.LOCAL_OPENSHIFT_HOSTNAME              PROJECT_ID   <all>
    sso          sso-myproject.LOCAL_OPENSHIFT_HOSTNAME                     sso          <all>

PROJECT_ID is based on the name you entered when generating your booster using developers.redhat.com/launch or the Fabric8 Launcher tool.
Authenticating HTTP requests using the Command Line

Request a token by sending an HTTP POST request to the Red Hat SSO server. In the following example, the jq CLI tool is used to extract the token value from the JSON response.

Prerequisites
Procedure
  1. Request an access token:

    The attributes are usually shared with each service and kept secret, but for demonstration purposes, they are displayed here:

    Example 2. Secured booster credentials
    REALM=master
    USER=alice
    PASSWORD=password
    CLIENT_ID=demoapp
    SECRET=1daa57a2-b60e-468b-a3ac-25bd2dc2eadc
    • Using the credentials, use the curl command to request a token:

      $ curl -sk -X POST https://<SSO_AUTH_SERVER_URL>/auth/realms/$REALM/protocol/openid-connect/token \
        -d grant_type=password \
        -d username=$USER \
        -d password=$PASSWORD \
        -d client_id=$CLIENT_ID \
        -d client_secret=$SECRET
      The -k option tells curl not to verify the SSO server's self-signed certificate, and -s suppresses progress output. Do not skip certificate verification in a production environment.

      On macOS, you must have curl version 7.56.1 or greater installed. It must also be built with OpenSSL.

    • Extract the access token information, for example using the jq tool:

      $ curl ... | jq -r '.access_token'
      
      eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJRek1nbXhZMUhrQnpxTnR0SnkwMm5jNTNtMGNiWDQxV1hNSTU1MFo4MGVBIn0.eyJqdGkiOiI0NDA3YTliNC04YWRhLTRlMTctODQ2ZS03YjI5MjMyN2RmYTIiLCJleHAiOjE1MDc3OTM3ODcsIm5iZiI6MCwiaWF0IjoxNTA3NzkzNzI3LCJpc3MiOiJodHRwczovL3NlY3VyZS1zc28tc3NvLWRlbW8uYXBwcy5jYWZlLWJhYmUub3JnL2F1dGgvcmVhbG1zL21hc3RlciIsImF1ZCI6ImRlbW9hcHAiLCJzdWIiOiJjMDE3NWNjYi0wODkyLTRiMzEtODI5Zi1kZGE4NzM4MTVmZTgiLCJ0eXAiOiJCZWFyZXIiLCJhenAiOiJkZW1vYXBwIiwiYXV0aF90aW1lIjowLCJzZXNzaW9uX3N0YXRlIjoiMDFjOTkzNGQtNmZmOS00NWYzLWJkNWUtMTU4NDI5ZDZjNDczIiwiYWNyIjoiMSIsImNsaWVudF9zZXNzaW9uIjoiMzM3Yzk0MTYtYTdlZS00ZWUzLThjZWQtODhlODI0MGJjNTAyIiwiYWxsb3dlZC1vcmlnaW5zIjpbIioiXSwicmVhbG1fYWNjZXNzIjp7InJvbGVzIjpbImJvb3N0ZXItYWRtaW4iXX0sInJlc291cmNlX2FjY2VzcyI6eyJzZWN1cmVkLWJvb3N0ZXItZW5kcG9pbnQiOnsicm9sZXMiOlsiYm9vc3Rlci1hZG1pbiJdfSwiYWNjb3VudCI6eyJyb2xlcyI6WyJtYW5hZ2UtYWNjb3VudCIsInZpZXctcHJvZmlsZSJdfX0sIm5hbWUiOiJBbGljZSBJbkNoYWlucyIsInByZWZlcnJlZF91c2VybmFtZSI6ImFsaWNlIiwiZ2l2ZW5fbmFtZSI6IkFsaWNlIiwiZmFtaWx5X25hbWUiOiJJbkNoYWlucyIsImVtYWlsIjoiYWxpY2VAa2V5Y2xvYWsub3JnIn0.mjmZe37enHpigJv0BGuIitOj-kfMLPNwYzNd3n0Ax4Nga7KpnfytGyuPSvR4KAG8rzkfBNN9klPYdy7pJEeYlfmnFUkM4EDrZYgn4qZAznP1Wzy1RfVRdUFi0-GqFTMPb37o5HRldZZ09QljX_j3GHnoMGXRtYW9RZN4eKkYkcz9hRwgfJoTy2CuwFqeJwZYUyXifrfA-JoTr0UmSUed-0NMksGrtJjjPggUGS-qOn6OgKcmN2vaVAQlxW32y53JqUXctfLQ6DhJzIMYTmOflIPy0sgG1mG7sovQhw1xTg0vTjdx8zQ-EJcexkj7IivRevRZsslKgqRFWs67jQAFQA
  2. Invoke the Secured service. Attach the access (bearer) token to the HTTP headers:

    $ curl -v -H "Authorization: Bearer <TOKEN>" http://<SERVICE_HOST>/api/greeting
    
    {
        "content": "Hello, World!",
        "id": 2
    }
    Example 3. A sample GET Request Headers with an Access (Bearer) Token
    > GET /api/greeting HTTP/1.1
    > Host: <SERVICE_HOST>
    > User-Agent: curl/7.51.0
    > Accept: */*
    > Authorization: Bearer <TOKEN>
  3. Verify the signature of the access token.

    The access token is a JSON Web Token, so you can decode it using the JWT Debugger:

    1. In a web browser, navigate to the JWT Debugger website.

    2. Select RS256 from the Algorithm drop down menu.

      Make sure the web form has been updated after you made the selection, so it displays the correct RSASHA256(…​) information in the Signature section. If it has not, try switching to HS256 and then back to RS256.
    3. Paste the following content in the topmost text box into the VERIFY SIGNATURE section:

      -----BEGIN PUBLIC KEY-----
      MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAoETnPmN55xBJjRzN/cs30OzJ9olkteLVNRjzdTxFOyRtS2ovDfzdhhO9XzUcTMbIsCOAZtSt8K+6yvBXypOSYvI75EUdypmkcK1KoptqY5KEBQ1KwhWuP7IWQ0fshUwD6jI1QWDfGxfM/h34FvEn/0tJ71xN2P8TI2YanwuDZgosdobx/PAvlGREBGuk4BgmexTOkAdnFxIUQcCkiEZ2C41uCrxiS4CEe5OX91aK9HKZV4ZJX6vnqMHmdDnsMdO+UFtxOBYZio+a1jP4W3d7J5fGeiOaXjQCOpivKnP2yU2DPdWmDMyVb67l8DRA+jh0OJFKZ5H2fNgE3II59vdsRwIDAQAB
      -----END PUBLIC KEY-----
      This is the master realm public key from the Red Hat SSO server deployment of the Secured booster.
    4. Paste the token output from the client output into the Encoded box.

      The Signature Verified sign appears on the debugger page.
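
The same two-step flow, requesting a token with the password grant and then calling the secured endpoint with the bearer token, can also be performed from a Java client. The following sketch uses Spring's RestTemplate with the demonstration credentials shown above; the host names are placeholders taken from the route examples, and trust of the SSO server's self-signed certificate is not handled here.

Example Java Client for the Token and Greeting Requests (illustrative sketch)
import java.util.Map;

import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.MediaType;
import org.springframework.util.LinkedMultiValueMap;
import org.springframework.util.MultiValueMap;
import org.springframework.web.client.RestTemplate;

public class SecuredBoosterClient {

    public static void main(String[] args) {
        // Placeholder hosts: use the routes reported by oc get routes.
        String ssoUrl = "https://secure-sso-myproject.LOCAL_OPENSHIFT_HOSTNAME";
        String serviceUrl = "http://PROJECT_ID-myproject.LOCAL_OPENSHIFT_HOSTNAME";

        RestTemplate rest = new RestTemplate();

        // 1. Request an access token from Red Hat SSO using the password grant
        //    and the demonstration credentials.
        MultiValueMap<String, String> form = new LinkedMultiValueMap<>();
        form.add("grant_type", "password");
        form.add("username", "alice");
        form.add("password", "password");
        form.add("client_id", "demoapp");
        form.add("client_secret", "1daa57a2-b60e-468b-a3ac-25bd2dc2eadc");

        HttpHeaders formHeaders = new HttpHeaders();
        formHeaders.setContentType(MediaType.APPLICATION_FORM_URLENCODED);

        Map<?, ?> tokenResponse = rest.postForObject(
                ssoUrl + "/auth/realms/master/protocol/openid-connect/token",
                new HttpEntity<>(form, formHeaders), Map.class);
        String accessToken = (String) tokenResponse.get("access_token");

        // 2. Call the secured endpoint with the access (bearer) token attached.
        HttpHeaders authHeaders = new HttpHeaders();
        authHeaders.set("Authorization", "Bearer " + accessToken);
        String greeting = rest.exchange(serviceUrl + "/api/greeting", HttpMethod.GET,
                new HttpEntity<Void>(authHeaders), String.class).getBody();
        System.out.println(greeting);
    }
}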

Authenticating HTTP requests using the Web Interface

In addition to the HTTP API, the secured endpoint also contains a web interface to interact with.

The following procedure is an exercise for you to see how security is enforced, how you authenticate, and how you work with the authentication token.

Prerequisites
Procedure
  1. In a web browser, navigate to the endpoint URL.

  2. Perform an unauthenticated request:

    1. Click the Invoke button.

      sso main
      Figure 1. Unauthenticated Secured Booster Web Interface

      The service responds with an HTTP 401 Unauthorized status code.

      sso unauthenticated
      Figure 2. Unauthenticated Error Message
  3. Perform an authenticated request as a user:

    1. Click the Login button to authenticate against Red Hat SSO. You will be redirected to the SSO server.

    2. Log in as the Alice user. You will be redirected back to the web interface.

      You can see the access (bearer) token in the command line output at the bottom of the page.
      sso alice
      Figure 3. Authenticated Secured Booster Web Interface (as Alice)
    3. Click Invoke again to access the Greeting service.

      Confirm that there is no exception and the JSON response payload is displayed. This means the service accepted your access (bearer) token and you are authorized to access the Greeting service.

      sso invoke alice
      Figure 4. The Result of an Authenticated Greeting Request (as Alice)
    4. Log out.

  4. Perform an authenticated request as an administrator:

    1. Click the Invoke button.

      Confirm that this sends an unauthenticated request to the Greeting service.

    2. Click the Login button and log in as the admin user.

      sso admin
      Figure 5. Authenticated Secured Booster Web Interface (as admin)
  5. Click the Invoke button.

    The service responds with an HTTP 403 Forbidden status code because the admin user is not authorized to access the Greeting service.

    sso unauthorized
    Figure 6. Unauthorized Error Message

5.6.8. Running the Secured Booster Integration Tests

Prerequisites
  • The oc client authenticated.

Procedure

 

Executing integration tests removes all existing instances of the booster application from the target OpenShift project. To avoid accidentally removing your booster application, ensure that you create and select a separate OpenShift project to execute the tests.

  1. In a terminal application, navigate to the directory with your project.

  2. Create the Red Hat SSO server application:

    oc create -f service.sso.yaml
  3. Wait until the Red Hat SSO server is ready. Go to the Web console or view the output of oc get pods to check if the pod running the Red Hat SSO server is ready.

  4. Execute the integration tests:

    mvn clean verify -Popenshift,openshift-it -DSSO_AUTH_SERVER_URL=$(oc get route secure-sso -o jsonpath='{"https://"}{.spec.host}{"/auth\n"}')

5.6.9. Secured SSO Resources

Follow the links below for additional information on the principles behind the OAuth2 specification and on securing your applications using Red Hat SSO and Keycloak:

Appendix A: The Source-to-Image (S2I) Build Process

Source-to-Image (S2I) is a build tool for generating reproducible Docker-formatted container images from online SCM repositories with application sources. With S2I builds, you can easily deliver the latest version of your application into production with shorter build times, decreased resource and network usage, improved security, and a number of other advantages. OpenShift supports multiple build strategies and input sources.

For more information, see the Source-to-Image (S2I) Build chapter of the OpenShift Container Platform documentation.

You must provide three elements to the S2I process to assemble the final container image:

  • The application sources hosted in an online SCM repository, such as GitHub.

  • The S2I scripts.

  • The Builder image, which serves as the foundation for the assembled image and provides the ecosystem in which your application is running. This includes environment variables and parameters used by S2I scripts.

The process injects your application source and dependencies into the Builder image according to instructions specified in the S2I script, and generates a Docker-formatted container image that runs the assembled application. For more information, check the S2I build requirements, build options and how builds work sections of the OpenShift Container Platform documentation.

Appendix B: Deploying a Spring Boot Application using WAR Files

Red Hat does not support packaging and deploying Spring Boot applications using WAR files in this release of Application Development on OpenShift.

As an alternative to the supported application packaging and deployment workflow using fat JAR files, you can package and deploy a Spring Boot application as a WAR (Web Application Archive) file. You must configure your build and deployment settings to ensure that your application builds and deploys correctly on OpenShift.

Prerequisites
  • Fabric8 Maven Plugin used to deploy your application to OpenShift.

  • Spring Boot Maven Plugin used to package your application.

Procedure
  1. Define the repackage Maven goal for the Spring Boot Maven plugin in the pom.xml file of your project:

    ...
      <build>
        <plugins>
          <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <executions>
              <execution>
                <goals>
                  <goal>repackage</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>
    ...

    This ensures that the Spring Boot classes used to launch the application are included in the WAR file, and that the corresponding properties for these classes are defined in the MANIFEST.MF file of the WAR file:

    • Main-Class: org.springframework.boot.loader.WarLauncher

    • Spring-Boot-Classes: WEB-INF/classes/

    • Spring-Boot-Lib: WEB-INF/lib/

    • Spring-Boot-Version: 1.5.8.RELEASE

  2. Add the ARTIFACT_COPY_ARGS environment variable to the pom.xml file of your project. The Fabric8 Maven Plugin consumes this variable during the build process, and ensures that the Build and Deploy tool uses the WAR file (rather than the default fat JAR file) to create the application container image:

    ...
         <configuration>
             <images>
                 <image>
                     <name>${project.artifactId}:%t</name>
                     <alias>spring-boot-mvc-jsp</alias>
                     <build>
                         <from>redhat-openjdk-18/openjdk18-openshift</from>
                         <assembly>
                             <basedir>/deployments</basedir>
                             <descriptorRef>artifact</descriptorRef>
                         </assembly>
                         <env>
                             <ARTIFACT_COPY_ARGS>*.war</ARTIFACT_COPY_ARGS>
                             <JAVA_APP_DIR>/deployments</JAVA_APP_DIR>
                         </env>
                         <ports>
                             <port>8080</port>
                         </ports>
                     </build>
                 </image>
             </images>
         </configuration>
    ...
  3. Add the JAVA_APP_JAR environment variable to the DeploymentConfig resource section in the src/main/fabric8/deployment.yml file. This variable instructs the Fabric8 Maven Plugin to launch your application using the WAR file included with the container.

    ...
        spec:
          template:
            spec:
              containers:
              - env:
                - name: JAVA_APP_JAR
                  value: ${project.artifactId}-${project.version}.war
    ...
  4. Build and deploy your application:

    mvn clean fabric8:deploy -Popenshift
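
After the deployment finishes, you can verify that the application pod is running and inspect its startup log; the log should show the WAR file being passed to the Java launcher. The pod name reported by oc get pods is specific to your deployment:

    # List the pods in the current project
    oc get pods

    # Inspect the startup log of the application pod
    oc logs <pod-name>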

Appendix C: Additional Resources

Appendix D: Application Development Resources

For additional information on application development with OpenShift, see:

Appendix E: Proficiency Levels

Each mission available on Fabric8 Launcher teaches you about certain topics, but also requires a certain minimum level of knowledge, which varies by mission. For clarity, the minimum requirements and concepts are organized in several proficiency levels. In addition to the levels described in this chapter, each mission can have further requirements specific to its aim or the technologies it uses.

Foundational

The missions rated at Foundational proficiency generally require no prior knowledge of the subject matter; they provide general awareness and demonstration of key elements, concepts, and terminology. There are no special requirements except those directly mentioned in the description of the mission.

Advanced

When using Advanced missions, the assumption is that you are familiar with the common concepts and terminology of the subject area of the mission, in addition to Kubernetes and OpenShift. You must also be able to perform basic tasks on your own, for example configuring services and applications or administering networks. If a service is needed by the mission but configuring it is not in the scope of the mission, the assumption is that you have the knowledge to properly configure it, and only the resulting state of the service is described in the documentation.

Expert

Expert missions require the highest level of knowledge of the subject matter. You are expected to perform many tasks based on feature-based documentation and manuals, and the documentation is aimed at the most complex scenarios.

Appendix F: Glossary

F.1. Product and Project Names

developers.redhat.com/launch

developers.redhat.com/launch is a standalone getting started experience offered by Red Hat for jumpstarting cloud-native application development on OpenShift. It provides a hassle-free way of creating functional example applications, called missions, as well as an easy way to build and deploy those missions to OpenShift.

Fabric8 Launcher

The Fabric8 Launcher is the upstream project on which developers.redhat.com/launch is based.

Single-node OpenShift Cluster

An OpenShift cluster running on your machine using Minishift.

F.2. Terms Specific to Fabric8 Launcher

Booster

A language-specific implementation of a particular mission on a particular runtime. Boosters are listed in a booster catalog.

For example, a booster is a web service with a REST API implemented using the WildFly Swarm runtime.

Booster Catalog

A Git repository that contains information about boosters.

Mission

An application specification, for example a web service with a REST API.

Missions generally do not specify which language or platform they should run on; the description only contains the intended functionality.

Runtime

A platform that executes boosters. For example, WildFly Swarm or Eclipse Vert.x.