Part 3: Deploying a Serverless Service to Knative

Serverless computing in action: Install Knative and Istio, deploy your code, and invoke it from a React application.

Now that you've built and tested your service, it's time to put everything together. In this article, you'll use resources from Kamesh Sampath's excellent Knative tutorial to install Istio and Knative on top of Kubernetes. With that infrastructure in place, it's easy to deploy your service to Knative. After taking a look at your service in the OpenShift console, you'll invoke it from the command line. Finally, we'll look at a Knative proxy that lets Don Schenck's React front end access your service as well.

Doug Tidwell
Red Hat Developer Alumnus

Part 2: Building a Serverless Service

Serverless computing in action: Take a look at the image manipulation code behind the photo booth, then look at a modern web app that uses it.

In this article, we take an in-depth look at the image manipulation code at the heart of the Coderland photo booth. After explaining the code, we run it and show how it creates the custom images we'll be selling at the Coderland Swag Shop. Finally, we'll look at a modern web application that lets us interact with the service directly. 

Doug Tidwell
Red Hat Developer Alumnus

Part 1: Introduction to Serverless with Knative

Serverless computing in action: Read all about the Compile Driver photo booth and why it's such a good fit for serverless.

The Knative serverless environment lets you deploy code to Kubernetes, but no resources are consumed unless your code needs to do something. With Knative, you create a service by packaging your code as a Docker image and handing it to the system. Your code only runs when it needs to, with Knative starting and stopping instances automatically. 
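Handing your packaged image to the system comes down to a single Knative Service manifest. Here is a minimal sketch; the service name and image are placeholders, not the actual Coderland artifacts, and older Knative releases use the serving.knative.dev/v1alpha1 API version instead:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: photo-service                # placeholder name
spec:
  template:
    spec:
      containers:
        # placeholder image; Knative pulls it and scales instances
        # up from zero only when requests arrive
        - image: quay.io/example/photo-service:latest
```

Applying this with kubectl (or oc) is all it takes; Knative handles routing, revisions, and scale-to-zero for you.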

This article introduces you to the Compile Driver, a new attraction at the Coderland theme park. To increase revenue, management has installed a camera next to the ride. It captures images of happy guests as they plunge through the air. Your assignment is to write a service that transforms those images into souvenir photos. The resulting pictures feature the Coderland logo, a message, and a date stamp.

Over the next two articles, you'll examine how the service works and you'll learn how to deploy that service to Knative. 

Doug Tidwell
Red Hat Developer Alumnus

Red Hat Developer Istio Video Series: Number 2 - Istio Pool Ejection

This video demonstrates how Istio Pool Ejection enables you to remove under- or non-performing pods from your Kubernetes-based system. [Note: Yes, I know I hammer on the 'return' key much too hard. I'm working on that.]
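Pool ejection is configured through Istio's outlier detection. A minimal DestinationRule sketch follows; the service name and thresholds here are illustrative, not taken from the video:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: recommendation
spec:
  host: recommendation
  trafficPolicy:
    outlierDetection:
      consecutiveErrors: 5      # eject a pod after 5 consecutive errors
      interval: 5s              # how often hosts are scanned
      baseEjectionTime: 15s     # how long an ejected pod stays out
      maxEjectionPercent: 100   # allow ejecting the whole pool if needed
```

An ejected pod rejoins the load-balancing pool automatically after its ejection time expires.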

Istio Pool Ejection allows you to temporarily block under- or non-performing pods from your system.

Istio: Canaries and Kubernetes | DevNation Live

Being a cloud-native developer requires learning some new language and new skills: circuit breakers, canaries, service meshes, Linux containers, dark launches, tracers, pods, and sidecars. In this session, we will introduce you to cloud-native architecture by demonstrating numerous principles and techniques for building and deploying Java microservices via Spring Boot, WildFly Swarm, and Vert.x, while leveraging Istio on Kubernetes with OpenShift.
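A canary release in Istio typically comes down to weighted routing in a VirtualService. Here is a minimal sketch; the service name, subsets, and weights are illustrative, not taken from the session:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
        subset: v1        # stable version keeps most traffic
      weight: 90
    - destination:
        host: recommendation
        subset: v2        # canary gets a small slice
      weight: 10
```

The v1 and v2 subsets would be defined in a matching DestinationRule keyed on pod labels; shifting the weights gradually promotes the canary.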

Burr Sutter: A lifelong developer advocate, community organizer, and technology evangelist, Burr Sutter is a featured speaker at technology events around the globe, from Bangalore to Brussels and Berlin to Beijing (and most parts in between). He is currently Red Hat's Director of Developer Experience. A Java Champion since 2005 and former president of the Atlanta Java User Group, Burr founded the DevNexus conference (now the second-largest Java event in the U.S.) with the aim of making access to the world's leading developers affordable to the developer community. When not speaking abroad, Burr is also the passionate creator and orchestrator of highly interactive live demo keynotes at Red Hat Summit, the company's premier annual event.

Learn about cloud-native architecture, including principles and techniques for building and deploying Java microservices via Spring Boot, WildFly Swarm, and Vert.x.

Fabric8 create Camel Java project and deploy on OpenShift

This video shows how to create a new Java project using Apache Camel, create the Git repository, and set up a build and deployment on OpenShift. It then uses the fabric8 console to look inside the deployed Java container and visualize the running Camel routes. The entire demo runs on OpenShift V3 / Kubernetes with everything packaged as a Docker container. More background on this demo here: https://medium.com/@jstrachan/demo-using-fabric8-to-create-a-new-java-project-git-repo-and-automatically-build-deploy-it-on-d34d776098a9


Red Hat AMQ

A lightweight, high-performance, robust messaging platform
Url
Overview
Page Description
Product and development information about Red Hat AMQ
Additional Content

AMQ provides fast, lightweight, and secure messaging for Internet-scale applications. AMQ components use industry-standard message protocols and support a wide range of programming languages and operating environments. AMQ gives you the strong foundation you need to build modern distributed applications.

AMQ Overview
AMQ Overview

Massively scalable, distributed, and high-performance data streaming

AMQ enables a massively scalable, distributed, and high performance data streaming platform. AMQ Streams, based on the Apache Kafka project, provides an event streaming backbone that allows microservices and other application components to exchange data with extremely high throughput and low latency.

AMQ Streams
AMQ Broker

Messaging for enterprise applications

AMQ offers the rich feature set and reliability that enterprise customers depend on. AMQ Broker is a pure-Java multiprotocol message broker, with fast message persistence and advanced high availability modes. AMQ Clients is a suite of messaging APIs that lets you add message-based integration to any application.

Global Messaging

AMQ gives you the power to build a worldwide messaging backbone. AMQ Interconnect provides new messaging capabilities for deploying fault-tolerant messaging networks that connect clients and brokers into one seamless fabric.

AMQ Interconnect

AMQ is a flexible and capable suite of messaging servers and clients that work together to enable you to build advanced distributed applications.

Focused on standards - AMQ implements the Java JMS 1.1 and 2.0 API specifications. AMQ components support the ISO-standard AMQP 1.0 message protocol and the MQTT, STOMP, and WebSocket protocols.

Lightweight and embeddable - Small-footprint components allow AMQ to go anywhere. AMQ Broker is deployable standalone or in a Java EE container. The event-driven AMQ clients are ideal for integration with existing applications.

Centralized, standards-based management - The AMQ console provides a single view into your deployment. In addition to centralized web-based management, AMQ supports standard management protocols for integration with your existing tools.

AMQ Components

AMQ is a suite of servers and clients that work together to form the foundation of a reliable distributed application.

AMQ Streams

AMQ Streams is a Java/Scala publish-subscribe messaging broker. Based on the Apache Kafka project, it offers a distributed backbone that allows microservices and other applications to share data with extremely high throughput and extremely low latency.

  • Publish and subscribe - Many-to-many dissemination in a fault-tolerant, durable manner

  • Long-term data retention - Efficiently stores data for immediate access, limited only by disk space

  • Advanced queueing - Last value queues, message groups, topic hierarchies, and large message support

  • Replayable events - Serves as a repository for microservices to build in-memory copies of source data, up to any point in time

  • Partition messages for scalability - Allows for organizing messages to maximize concurrent access

AMQ Streams is based on Strimzi and Apache Kafka projects.
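The "replayable events" idea above can be sketched in a few lines of plain Python standing in for a Kafka consumer; the event log and all names here are illustrative:

```python
# Sketch: rebuilding an in-memory view by replaying a topic's events.
# The log is modeled as (offset, key, value) records; a real consumer
# would read these from a Kafka topic partition.

def replay(events, up_to_offset=None):
    """Fold events into a key -> latest-value view, optionally
    stopping at a point in time (an offset)."""
    view = {}
    for offset, key, value in events:
        if up_to_offset is not None and offset > up_to_offset:
            break
        view[key] = value
    return view

log = [
    (0, "order-1", "created"),
    (1, "order-2", "created"),
    (2, "order-1", "shipped"),
]

print(replay(log))                   # {'order-1': 'shipped', 'order-2': 'created'}
print(replay(log, up_to_offset=1))   # {'order-1': 'created', 'order-2': 'created'}
```

Because the log is retained long-term, any microservice can replay it from the start to rebuild its own copy of the data up to any point in time.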

AMQ Broker

AMQ Broker is a pure-Java multiprotocol message broker. It’s built on an efficient, asynchronous core, with a fast native journal for message persistence and the option of shared-nothing state replication for high availability.

  • Persistence - A fast, native-IO journal or a JDBC-based store

  • High availability - Shared store or shared-nothing state replication

  • Advanced queueing - Last value queues, message groups, topic hierarchies, and large message support

  • Multiprotocol - AMQP 1.0, MQTT, STOMP, OpenWire, and HornetQ Core

  • Integration - Full integration with Red Hat JBoss EAP

AMQ Broker is based on the Apache ActiveMQ Artemis project.
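The "last value queue" behavior in the feature list above can be sketched in plain Python; this toy class stands in for the broker, and all names are illustrative:

```python
# Sketch: a last value queue keeps only the most recent unconsumed
# message per key, so a reconnecting consumer sees current state
# (e.g., latest prices) rather than the full message history.

from collections import OrderedDict

class LastValueQueue:
    def __init__(self):
        self._latest = OrderedDict()

    def send(self, key, message):
        # A new message for a key supersedes any unconsumed older one.
        self._latest.pop(key, None)
        self._latest[key] = message

    def receive_all(self):
        # Deliver and clear everything currently queued.
        messages = list(self._latest.values())
        self._latest.clear()
        return messages

q = LastValueQueue()
q.send("EUR/USD", 1.10)
q.send("GBP/USD", 1.25)
q.send("EUR/USD", 1.12)   # supersedes the earlier EUR/USD price
print(q.receive_all())    # [1.25, 1.12]
```

In AMQ Broker itself this is a per-queue setting keyed on a message property, but the retention behavior is the same as the sketch.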

AMQ Interconnect

AMQ Interconnect is a high-speed, low-latency AMQP 1.0 message router. You can deploy multiple AMQ Interconnect routers to build a fault-tolerant messaging network that connects your clients and brokers into one seamless fabric.

  • Disaster recovery - Deploy redundant network routers across geographies

  • Advanced routing - Control the distribution and processing of messages on your network

  • Integration - Connect clients, brokers, and standalone services

  • Management - Streamlined management makes large deployments practical

AMQ Interconnect is based on the Apache Qpid Dispatch project.
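A router's role in the fabric is set in its qdrouterd.conf. Here is a minimal sketch of an interior router that accepts client connections and also routes messages to a broker; the hostnames and names are illustrative:

```
router {
    mode: interior              # participates in inter-router routing
    id: Router.A
}

listener {
    host: 0.0.0.0
    port: amqp                  # standard AMQP port 5672
    role: normal                # accepts client connections
}

connector {
    name: broker
    host: broker.example.com    # illustrative broker host
    port: 5672
    role: route-container       # outbound link to a broker or service
}
```

Adding inter-router listeners and connectors on more hosts joins the routers into one redundant, self-healing network.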

AMQ Online

Developers can use Red Hat AMQ Online to provision messaging when and where they need it, with zero installation, configuration, and maintenance; they serve themselves from an easy-to-use browser console. Administrators can configure a cloud-native, multitenant messaging service in the cloud or on premises, and their teams can then serve themselves messaging. Multiple development teams can provision the brokers and queues they need from a simple console, without requiring each team to install, configure, deploy, maintain, or patch any software.

  • Low or no administrative costs - Consolidate operations in one small team so developers can serve themselves

  • Easy provisioning - Work from a browser with no installation or configuration required

  • Multi-tenancy - Installation is managed as a unit, but instances are protected from each other

  • Flexible multi-language connectivity - Integrate systems and adapt to market demands and emerging technologies

  • Open standards - Interoperate with other vendors' messaging clients and products, protecting you from vendor lock-in

AMQ Online is based on the EnMasse project.

AMQ Clients

AMQ Clients is a suite of AMQP 1.0 messaging APIs that allow you to make any application a messaging application. It includes both industry-standard APIs such as JMS and new event-driven APIs that make it easy to integrate messaging anywhere.

  • New event-driven APIs - Fast, efficient messaging that integrates everywhere

  • Industry-standard APIs - JMS 1.1 and 2.0

  • Broad language support - C++, Java, JavaScript, Python, and .NET

  • Wide availability - Linux, Windows, and JVM-based environments

AMQ Clients is based on Apache Qpid projects.
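The event-driven style these APIs share can be sketched with a toy dispatch loop in plain Python; this is not the real AMQ Clients API, and all names here are illustrative:

```python
# Toy version of the event-driven client pattern: the application
# supplies callbacks, and a small reactor drives them as events occur.

class Handler:
    def __init__(self):
        self.received = []

    def on_start(self):
        # Fired once the (pretend) connection is up.
        print("connected")

    def on_message(self, body):
        # Fired for each incoming message.
        self.received.append(body)
        print("received:", body)

def run(handler, incoming):
    # A real reactor would manage connections, links, and I/O;
    # this toy version just fires the callbacks in order.
    handler.on_start()
    for body in incoming:
        handler.on_message(body)

run(Handler(), ["hello", "world"])
```

The real clients follow the same shape: you subclass a handler, override the events you care about, and let the library's reactor drive the I/O.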

Download

Apache ActiveMQ Artemis 2

Visit the project download page for more options and all versions.

Download current or older versions of AMQ.
Url
Hello World
Tabs
Title
AMQ Broker
Blue Sections
Body

Download and Install AMQ Broker:

  1. Download and extract:

    1. Download the latest version of Red Hat AMQ Broker. The latest at the time of this writing is Red Hat AMQ Broker 7.4.0 GA.

    2. Unzip AMQ Broker to any destination. The document will refer to this directory as AMQ_HOME.

      • On Windows or Mac, you can extract the contents of the ZIP archive by double clicking on the ZIP file.

      • On Red Hat Enterprise Linux, open a terminal window in the target machine and navigate to where the ZIP file was downloaded. Extract the ZIP file by executing the following command:

        unzip amq-broker-7.4.0-bin.zip

Title
Set up your development environment
Minutes to Complete
5minutes
Body

Create and Run an AMQ Broker Instance:

The artemis script, located in the bin folder, is the starting point for managing your AMQ installation.

  1. Create a Broker Instance:

    1. Execute the following command to create a new broker in the instances folder.

      $ <AMQ_HOME>/bin/artemis create --user admin --password password --role admin --allow-anonymous y <AMQ_HOME>/instances/mybroker

      This command creates a broker in the <AMQ_HOME>/instances/mybroker folder, with a default user admin that has the admin role and password as its password. The broker also accepts anonymous connections from localhost. The folder where the broker instance is created is referred to as AMQ_INSTANCE.

  2. Start the Broker Instance:

    To start the broker you only need to run this command from the <AMQ_INSTANCE> folder:

    $ bin/artemis run

    After you run this command, a console log appears and the broker is running.

Title
Create and Run an AMQ Broker Instance
Minutes to Complete
3minutes
Body

Produce and consume messages:

  1. Send 1000 messages to the broker:

    • AMQ_INSTANCE/bin/artemis producer

  2. Consume messages from the broker:

    • AMQ_INSTANCE/bin/artemis consumer

You just sent and received messages via Red Hat AMQ Broker. Visit frequently to view more tutorials on connecting via MQTT, STOMP, and other topics around AMQ.

Title
Start using the Broker
Minutes to Complete
1minute
Title
AMQ Interconnect
Blue Sections
Body

Install AMQ Interconnect Router:

AMQ Interconnect 1.5 is distributed as a set of RPM packages, which are available through your Red Hat subscription.

  1. Ensure your subscription has been activated and your system is registered.

    For more information about using the customer portal to activate your Red Hat subscription and register your system for packages, see Using Your Subscription.

  2. Subscribe to the required repositories:

    Red Hat Enterprise Linux 6

    $ sudo subscription-manager repos --enable=amq-interconnect-1-for-rhel-6-server-rpms --enable=amq-clients-2-for-rhel-6-server-rpms

    Red Hat Enterprise Linux 7

    $ sudo subscription-manager repos --enable=amq-interconnect-1-for-rhel-7-server-rpms --enable=amq-clients-2-for-rhel-7-server-rpms

  3. Use the yum or dnf command to install the qpid-proton-c and python-qpid-proton packages.

    $ sudo yum install qpid-proton-c python-qpid-proton python-qpid-proton-docs -y

    The AMQ Interconnect packages depend on these Qpid Proton packages.

  4. Use the yum or dnf command to install the qpid-dispatch-router and qpid-dispatch-tools packages.

    $ sudo yum install qpid-dispatch-router qpid-dispatch-console qpid-dispatch-tools -y

  5. Use the which command to verify that the qdrouterd executable is present; it should be located at /usr/sbin/qdrouterd.

    $ which qdrouterd

Title
Install AMQ Interconnect Router
Minutes to Complete
3minutes
Body

Start the Router service:

  1. To start the router with the default configuration, do one of the following:

    Run the router as a service in Red Hat Enterprise Linux 6

    $ sudo service qdrouterd start

    Run the router as a service in Red Hat Enterprise Linux 7

    $ sudo systemctl start qdrouterd.service

    Run the router as a daemon

    $ qdrouterd -d

    To start the router in the foreground, do not use the -d parameter.

  2. View the log to verify the router status:

    $ qdstat --log

     

Title
Start the Router
Minutes to Complete
2minutes
Body

Starting the Receiver Client

To start the receiver by using the Python receiver client, navigate to the Python examples directory (for example, /usr/share/proton-0.28.0) and run the simple_recv.py example:

$ cd <install_dir>/examples/python/
$ python simple_recv.py -a 127.0.0.1:5672/examples -m 5

This command starts the receiver and listens on the default address (127.0.0.1:5672/examples). The receiver is also set to receive a maximum of five messages.

Sending Messages

After starting the receiver client, you can send messages from the sender. These messages will travel through the router to the receiver.

In a new terminal window, navigate to the Python examples directory and run the simple_send.py example:

$ cd <install_dir>/examples/python/
$ python simple_send.py -a 127.0.0.1:5672/examples -m 5

This command sends five auto-generated messages to the default address (127.0.0.1:5672/examples) and then confirms that they were delivered and acknowledged by the receiver:

all messages confirmed

The receiver client receives the messages and displays their content:

{u'sequence': int32(1)}
{u'sequence': int32(2)}
{u'sequence': int32(3)}
{u'sequence': int32(4)}
{u'sequence': int32(5)}

You just sent and received messages via Red Hat AMQ Interconnect. Visit frequently to view more tutorials and other topics around AMQ.

Title
Send and Receive Messages
Minutes to Complete
3minutes
Title
AMQ Streams on OpenShift
Blue Sections
Body

Download and extract

  1. Download the latest version of the Red Hat AMQ Streams installation examples. The latest version at the time of this writing is Red Hat AMQ Streams 1.2.

  2. Unzip Red Hat AMQ Streams installation and example resources to any destination.

    • On Windows or Mac, you can extract the contents of the ZIP archive by double clicking on the ZIP file.

    • On Red Hat Enterprise Linux, open a terminal window in the target machine and navigate to where the ZIP file was downloaded. Extract the ZIP file by executing the following command:

      # unzip amq-streams-1.2.0-ocp-install-examples.zip

Install Custom Resource Definitions (CRDs)

  1. Log in to the OpenShift cluster with cluster admin privileges, for example:

    # oc login -u system:admin

  2. By default, the installation files work in the myproject namespace. Modify the installation files according to the namespace where you will install the AMQ Streams Kafka Cluster Operator, for example kafka.

    • On Linux, use:

      # sed -i 's/namespace: .*/namespace: kafka/' install/cluster-operator/*RoleBinding*.yaml
    • On Mac:

      # sed -i '' 's/namespace: .*/namespace: kafka/' install/cluster-operator/*RoleBinding*.yaml
  3. Deploy the Custom Resource Definitions (CRDs) and role-based access control (RBAC) resources to manage the CRDs.

    # oc new-project kafka
    # oc apply -f install/cluster-operator/ 

  4. Create the project where you want to deploy your Kafka cluster, for example my-kafka-project.

    # oc new-project my-kafka-project

  5. Give your non-admin user developer admin access to the project:

    # oc adm policy add-role-to-user admin developer -n my-kafka-project

  6. Enable the Cluster Operator to watch that namespace.

    # oc set env deploy/strimzi-cluster-operator STRIMZI_NAMESPACE=kafka,my-kafka-project -n kafka
    # oc apply -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n my-kafka-project
    # oc apply -f install/cluster-operator/032-RoleBinding-strimzi-cluster-operator-topic-operator-delegation.yaml -n my-kafka-project
    # oc apply -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n my-kafka-project

  7. Create the new Cluster Role strimzi-admin.

    # cat << EOF | oc create -f -
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRole
    metadata:
      name: strimzi-admin
    rules:
    - apiGroups:
      - "kafka.strimzi.io"
      resources:
      - kafkas
      - kafkaconnects
      - kafkaconnects2is
      - kafkamirrormakers
      - kafkausers
      - kafkatopics
      verbs:
      - get
      - list
      - watch
      - create
      - delete
      - patch
      - update
    EOF
  8. Add the role to the non-admin user developer.

    # oc adm policy add-cluster-role-to-user strimzi-admin developer

Title
Install AMQ Streams Custom Resource Definitions
Minutes to Complete
3minutes
Body

Create Cluster and Topic resources

The Cluster Operator will now listen for new Kafka resources.

  1. Log in as a normal user, for example:

    # oc login -u developer

    # oc project my-kafka-project

  2. Create the new my-cluster Kafka cluster with 3 ZooKeeper and 3 broker nodes, using ephemeral storage and exposing the Kafka cluster outside of the OpenShift cluster using Routes:

    # cat << EOF | oc create -f -
    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata: 
      name: my-cluster
    spec:
      kafka:
        replicas: 3
        listeners:
          external:
            type: route
        storage:
          type: ephemeral
      zookeeper:
        replicas: 3
        storage:
          type: ephemeral
      entityOperator:
        topicOperator: {}
    EOF
  3. Now that our cluster is running, we can create a topic to publish and subscribe to from our external client. Create the following my-topic KafkaTopic custom resource with 3 replicas and 3 partitions in the my-cluster Kafka cluster:

    # cat << EOF | oc create -f -
    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaTopic
    metadata:
      name: my-topic
      labels:
        strimzi.io/cluster: "my-cluster"
    spec:
      partitions: 3
      replicas: 3
    EOF

You are now ready to start sending and receiving messages.

Title
Create your first Apache Kafka Cluster
Minutes to Complete
5minutes
Body

Test using an external Red Hat Fuse application. You will need Maven and a Java 8 JDK installed.

  1. Clone this Git repo to test access to your new Kafka cluster:

    $ git clone https://github.com/hguerrero/amq-examples.git

  2. Switch to the camel-kafka-demo folder.

    $ cd amq-examples/camel-kafka-demo/

  3. As we are using Routes for external access to the cluster, we need the cluster CA certificate to enable TLS in the client. Extract the public certificate of the broker certificate authority:

    $ oc extract secret/my-cluster-cluster-ca-cert --keys=ca.crt --to=- > src/main/resources/ca.crt

  4. Import the trusted certificate into a keystore:

    $ keytool -import -trustcacerts -alias root -file src/main/resources/ca.crt -keystore src/main/resources/keystore.jks -storepass password -noprompt

  5. Now you can run the Red Hat Fuse application to send and receive messages to and from the Kafka cluster using the following Maven command:

    $ mvn -Drun.jvmArguments="-Dbootstrap.server=`oc get routes my-cluster-kafka-bootstrap -o=jsonpath='{.status.ingress[0].host}{"\n"}'`:443" clean package spring-boot:run

    After the clean and package phases finish, you will see the Spring Boot application start, creating a producer and a consumer that send and receive messages from the “my-topic” Kafka topic.

    14:36:18.170 [main] INFO  com.redhat.kafkademo.Application - Started Application in 12.051 seconds (JVM running for 12.917)
    14:36:18.490 [Camel (MyCamel) thread #1 - KafkaConsumer[my-topic]] INFO  o.a.k.c.c.i.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=6de87ffa-c7cf-441b-b1f8-e55daabc8d12] Discovered coordinator my-cluster-kafka-1-myproject.192.168.99.100.nip.io:443 (id: 2147483646 rack: null)
    14:36:18.498 [Camel (MyCamel) thread #1 - KafkaConsumer[my-topic]] INFO  o.a.k.c.c.i.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=6de87ffa-c7cf-441b-b1f8-e55daabc8d12] Revoking previously assigned partitions []
    14:36:18.498 [Camel (MyCamel) thread #1 - KafkaConsumer[my-topic]] INFO  o.a.k.c.c.i.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=6de87ffa-c7cf-441b-b1f8-e55daabc8d12] (Re-)joining group
    14:36:19.070 [Camel (MyCamel) thread #3 - KafkaProducer[my-topic]] INFO  producer-route - producer >>> Hello World from camel-context.xml with ID ID-hguerrer-osx-1540578972584-0-2
    14:36:19.987 [Camel (MyCamel) thread #4 - KafkaProducer[my-topic]] INFO  producer-route - producer >>> Hello World from camel-context.xml with ID ID-hguerrer-osx-1540578972584-0-4
    14:36:20.982 [Camel (MyCamel) thread #5 - KafkaProducer[my-topic]] INFO  producer-route - producer >>> Hello World from camel-context.xml with ID ID-hguerrer-osx-1540578972584-0-6
    14:36:21.620 [Camel (MyCamel) thread #1 - KafkaConsumer[my-topic]] INFO  o.a.k.c.c.i.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=6de87ffa-c7cf-441b-b1f8-e55daabc8d12] Successfully joined group with generation 1
    14:36:21.621 [Camel (MyCamel) thread #1 - KafkaConsumer[my-topic]] INFO  o.a.k.c.c.i.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=6de87ffa-c7cf-441b-b1f8-e55daabc8d12] Setting newly assigned partitions [my-topic-0, my-topic-1, my-topic-2]
    14:36:21.959 [Camel (MyCamel) thread #6 - KafkaProducer[my-topic]] INFO  producer-route - producer >>> Hello World from camel-context.xml with ID ID-hguerrer-osx-1540578972584-0-8
    14:36:21.973 [Camel (MyCamel) thread #1 - KafkaConsumer[my-topic]] INFO  consumer-route - consumer >>> Hello World from camel-context.xml with ID ID-hguerrer-osx-1540578972584-0-8
    14:36:22.970 [Camel (MyCamel) thread #7 - KafkaProducer[my-topic]] INFO  producer-route - producer >>> Hello World from camel-context.xml with ID ID-hguerrer-osx-1540578972584-0-11
    14:36:22.975 [Camel (MyCamel) thread #1 - KafkaConsumer[my-topic]] INFO  consumer-route - consumer >>> Hello World from camel-context.xml with ID ID-hguerrer-osx-1540578972584-0-11
    14:36:23.968 [Camel (MyCamel) thread #8 - KafkaProducer[my-topic]] INFO  producer-route - producer >>> Hello World from camel-context.xml with ID ID-hguerrer-osx-1540578972584-0-14

    You’re done! Press Ctrl + C to stop the running program.

Title
Start sending and receiving from a Topic
Minutes to Complete
5minutes
Title
AMQ Online on OpenShift
Blue Sections
Body

Download and extract

  1. Download the latest version of the Red Hat AMQ Online installation file. The latest version at the time of this writing is Red Hat AMQ Online 1.2.

  2. Unzip Red Hat AMQ Online installation resources to any destination.

    • On Windows or Mac, you can extract the contents of the ZIP archive by double clicking on the ZIP file.

    • On Red Hat Enterprise Linux, open a terminal window in the target machine and navigate to where the ZIP file was downloaded. Extract the ZIP file by executing the following command:

      # unzip amq-online-install-1-2.zip

Install AMQ Online using a YAML bundle

  1. Log in to the OpenShift cluster with cluster admin privileges, for example:

    # oc login -u system:admin

  2. By default, the installation files work in the amq-online-infra namespace. Modify the installation files according to the namespace where you will install AMQ Online, for example messaging.

    • On Linux, use:

      # sed -i 's/amq-online-infra/messaging/' install/bundles/amq-online/*.yaml
    • On Mac:

      # sed -i '' 's/amq-online-infra/messaging/' install/bundles/amq-online/*.yaml
  3. Create the project where you want to deploy AMQ Online, for example messaging.

    # oc new-project messaging

  4. Deploy using the AMQ Online bundle.

    # oc apply -f install/bundles/amq-online

Title
Installing AMQ Online
Minutes to Complete
5minutes
Body

Create a Messaging Address Space

In AMQ Online, you create address spaces using standard command-line tools.

  1. Log in as a normal user, for example:

    # oc login -u developer

    # oc new-project my-app-project

  2. Create a new address space definition named my-address-space of type standard with a standard-small plan:

    # cat << EOF | oc create -f -
    apiVersion: enmasse.io/v1beta1
    kind: AddressSpace
    metadata:
      name: my-address-space
    spec:
      type: standard
      plan: standard-small
    EOF
  3. Check the status of the address space creation; the command should return true when provisioning is done. It may take a few moments:

    # echo `oc get addressspace my-address-space -o jsonpath='{.status.isReady}'`

  4. Now that you have an address space, you can create an address definition named my-queue of type queue with a standard-small-queue plan:

    # cat << EOF | oc create -f -
    apiVersion: enmasse.io/v1beta1
    kind: Address
    metadata:
        name: my-address-space.my-queue
    spec:
        address: my-queue
        type: queue
        plan: standard-small-queue
    EOF
  5. Create a messaging user to access the queue with send and receive permissions:

    # cat << EOF | oc create -f -
    apiVersion: user.enmasse.io/v1beta1
    kind: MessagingUser
    metadata:
      name: my-address-space.user1
    spec:
      username: user1
      authentication:
        type: password
        password: cGFzc3dvcmQ= # Base64 encoded
      authorization:
        - addresses: ["my-queue"]
          operations: ["send", "recv"]
    EOF

You are now ready to start sending and receiving messages.

Title
Create Address Endpoint
Minutes to Complete
10minutes
Body

Test using an external application

  1. Clone this Git repo to test access to your new AMQ Online addresses:

    $ git clone https://github.com/hguerrero/amq-examples.git

  2. Switch to the camel-amqp-demo folder.

    $ cd amq-examples/camel-amqp-demo/

  3. As we are using Routes for external access to the cluster, we need the cluster CA certificate to enable TLS in the client. Extract the public certificate of the broker certificate authority:

    $ oc get addressspace my-address-space -o jsonpath='{.status.endpointStatuses[?(@.name=="messaging")].cert}{"\n"}' | base64 --decode > src/main/resources/ca.crt

  4. Import the trusted certificate into a keystore:

    $ keytool -import -trustcacerts -alias root -file src/main/resources/ca.crt -keystore src/main/resources/truststore.ts -storepass password -noprompt

  5. Now you can run the Red Hat Fuse application to send and receive messages through the AMQ Online messaging address endpoint using the following Maven command:

    $ mvn -Drun.jvmArguments="-Damq.url=amqps://`oc get addressspace my-address-space -o jsonpath='{.status.endpointStatuses[?(@.name=="messaging")].externalHost}{"\n"}'`:443 -Damq.destination=queue:my-queue" clean package spring-boot:run

    After the clean and package phases finish, you will see the Spring Boot application start, creating a producer and a consumer that send and receive 100 messages through the “my-queue” AMQ Online queue address.

    14:35:52.607 [CamelMainRunController] INFO  o.a.camel.spring.SpringCamelContext - Apache Camel 2.18.1.redhat-000021 (CamelContext: camel) started in 1.368 seconds
    14:35:57.675 [Camel (camel) thread #1 - timer://foo] INFO  producer-route - Sent Message 1
    14:35:57.842 [Camel (camel) thread #1 - timer://foo] INFO  producer-route - Sent Message 2
    14:35:57.854 [Camel (camel) thread #0 - JmsConsumer[my-queue]] INFO  consumer-route - Message received >>> Message 1
    14:35:57.886 [Camel (camel) thread #0 - JmsConsumer[my-queue]] INFO  consumer-route - Message received >>> Message 2
    14:35:57.890 [Camel (camel) thread #1 - timer://foo] INFO  producer-route - Sent Message 3
    14:35:57.928 [Camel (camel) thread #0 - JmsConsumer[my-queue]] INFO  consumer-route - Message received >>> Message 3
    14:35:57.940 [Camel (camel) thread #1 - timer://foo] INFO  producer-route - Sent Message 4
    14:35:57.971 [Camel (camel) thread #0 - JmsConsumer[my-queue]] INFO  consumer-route - Message received >>> Message 4
    14:35:57.992 [Camel (camel) thread #1 - timer://foo] INFO  producer-route - Sent Message 5
    14:35:58.024 [Camel (camel) thread #0 - JmsConsumer[my-queue]] INFO  consumer-route - Message received >>> Message 5
    14:35:58.044 [Camel (camel) thread #1 - timer://foo] INFO  producer-route - Sent Message 6
    14:35:58.080 [Camel (camel) thread #0 - JmsConsumer[my-queue]] INFO  consumer-route - Message received >>> Message 6
    14:35:58.096 [Camel (camel) thread #1 - timer://foo] INFO  producer-route - Sent Message 7
    14:35:58.130 [Camel (camel) thread #0 - JmsConsumer[my-queue]] INFO  consumer-route - Message received >>> Message 7
    14:35:58.147 [Camel (camel) thread #1 - timer://foo] INFO  producer-route - Sent Message 8
    14:35:58.179 [Camel (camel) thread #0 - JmsConsumer[my-queue]] INFO  consumer-route - Message received >>> Message 8
    14:35:58.198 [Camel (camel) thread #1 - timer://foo] INFO  producer-route - Sent Message 9
    14:35:58.230 [Camel (camel) thread #0 - JmsConsumer[my-queue]] INFO  consumer-route - Message received >>> Message 9
    14:35:58.247 [Camel (camel) thread #1 - timer://foo] INFO  producer-route - Sent Message 10
    

    You’re done! Press Ctrl + C to stop the running program.

Title
Send and Receive Messages
Minutes to Complete
3minutes
Page Description
Learn how to install and start using AMQ.
Url
Docs and APIs
Documents Links Section

There are many resources available for Red Hat AMQ here on the Red Hat Developer Program and on the Red Hat Customer Portal. On this page, we highlight our pick of those resources.

AMQ 7

Introducing Red Hat AMQ

Introduction to AMQ 7.4

AMQ Clients

AMQ Clients Overview 2.4

JMS Clients

Using the AMQ JMS Client (AMQP)

Adapters and Libraries

Using the AMQ Spring Boot Starter

AMQ Streams

AMQ Streams on OpenShift

Using AMQ Streams on OpenShift Container Platform

AMQ Streams on Red Hat Enterprise Linux

Using AMQ Streams on Red Hat Enterprise Linux (RHEL)

AMQ Online

Installing and Managing AMQ Online on OpenShift

Installing and Managing AMQ Online on OpenShift Container Platform

Using AMQ Online on OpenShift

Using AMQ Online on OpenShift Container Platform

AMQ Broker

AMQ Broker

Getting Started with AMQ Broker 7.4

AMQ Broker on OpenShift

Deploying AMQ Broker on OpenShift Container Platform

AMQ Interconnect

AMQ Interconnect

Using AMQ Interconnect 1.5

AMQ Interconnect on OpenShift

Deploying AMQ Interconnect on OpenShift Container Platform

Page Description
Find documentation, videos, articles, and other resources available for AMQ.
Url
Help
Page Description
Red Hat AMQ provides fast, lightweight, and secure messaging for Internet-scale applications.
Url
Community
Main Content

Using AMQ is a great way to build real-world enterprise applications based on the latest technologies. But what if you find something that needs fixing or have a new feature to suggest? By getting involved with the AMQ community, you can give feedback, improve the docs, review code, and discuss and propose new features whenever they're needed. Answering user questions or taking part in development discussions is also a great way to build a reputation for collaboration and expertise in your field.

No matter what your skill level, contributing to AMQ can be very rewarding and a great learning experience. You’ll meet lots of smart, passionate developers who are all driven to create the best middleware possible in open source! You don’t have to be an expert to get involved and it doesn’t have to take a lot of time.

Page Description
Get involved with the community around AMQ.