Setting up KVM on Red Hat Enterprise Linux

Editor’s Note: If you have a Linux system that runs KVM and would like to try Red Hat Enterprise Linux on KVM, follow our KVM Get Started guide: http://developers.redhat.com/products/rhel/get-started/#tab-kvm


The Kernel-based Virtual Machine (KVM) is a virtualization infrastructure that many throughout the industry have become familiar with. This article will guide you through getting a basic KVM hypervisor up and running and ready for use. To fully utilize KVM, you will need a CPU with hardware virtualization extensions, and these will need to be enabled in the BIOS of the machine you’re working on. Depending on your CPU vendor, the setting to enable is typically Intel VT-x or AMD-V.
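A quick way to check for those extensions is to look at the CPU flags in /proc/cpuinfo, which is present on any Linux system; the `vmx` flag indicates Intel VT-x and `svm` indicates AMD-V:

```shell
# Check the CPU flags for hardware virtualization support:
#   vmx = Intel VT-x, svm = AMD-V
if grep -Eq 'vmx|svm' /proc/cpuinfo; then
    echo "Virtualization extensions are present"
else
    echo "No virtualization extensions found - check your BIOS settings"
fi
```

Note that the flags can be present in the CPU but still disabled in the BIOS, so if KVM later refuses to start, revisit the BIOS settings.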

Our Objectives

  • Set up a Red Hat Enterprise Linux (RHEL 7.2) server
  • Identify whether virtualization extensions are present
  • Install KVM and associated software components
  • Review networking considerations
  • Configure VNC
  • Fire up a new virtual machine running on the KVM hypervisor
  • List existing virtual machines

Installing RHEL

For the purposes of this article, I’m going to show you how to manually install KVM from the command line, rather than have it installed as part of the RHEL installation process. This allows us to fine-tune the installation by installing only what we need, and it also gives us a better understanding of how everything fits together. With this in mind, we will work on the basis that you have opted for a ‘minimal install’ of RHEL. After first boot, you will want to register the system with Red Hat to receive updates and download software. This can be done by running the following command:

subscription-manager register --auto-attach

You will be prompted to enter your username and password.
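Once the system is registered, the next objective is installing KVM itself. As a sketch of what that looks like on RHEL 7 (this is one common minimal package set; the exact selection you need may vary), the installation and a quick verification would be:

```shell
# Install the KVM hypervisor, the libvirt management layer,
# and the virt-install provisioning tool
yum install -y qemu-kvm libvirt virt-install

# Start the libvirt daemon and enable it at boot
systemctl enable libvirtd
systemctl start libvirtd

# Confirm the kvm kernel modules are loaded
# (expect kvm plus kvm_intel or kvm_amd)
lsmod | grep kvm
```

If `lsmod` shows no kvm modules, the most common cause is virtualization extensions being disabled in the BIOS.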



12 Simple Tips for Your Next Highly Available Cloud Buildout

Situation: You’re a great software developer and a fearless leader. Your CEO bursts into your cubicle and hands you vast amounts of investment capital, no data center, and limited staff. Your task: build a multi-region, highly available presence in AWS (or your favorite cloud provider) that can be maintained with minimal manpower. Your multi-tier Java EE app is almost ready. You will be required to create, maintain, and monitor a large number of servers, RDS instances, S3 buckets, queues, public DNS entries, private DNS entries, and more. This series of articles aims to provide ideas that help you go to market without a snag.

You heard “multiple servers” and started building your Ansible Tower, Puppet master, Chef recipes, and glue scripts. STOP!

Before you get yourself into a situation where your company is paying your favorite coffee shop’s franchise fee in cloud services while getting the functionality of a French press, let’s think this through. There are a few things you need to consider. Are you creating EC2 instances manually? What is your staging environment like? Do you have one? Where should it live? Let’s take a few moments and walk through the steps using the flowchart in Figure 1.

Figure 1: An HA cloud buildout demands the use of an IaC toolset.

Note: The toolset we chose is HashiCorp Terraform [http://www.terraform.io] as our Infrastructure as Code (IaC) tool and open source Puppet [http://puppet.com] for configuration management. If you choose a different set of tools, the principles in this series will still apply. As an obvious caveat, some scripts may not work verbatim, so substitute your tools’ names as you follow along.
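To make “Infrastructure as Code” concrete, a Terraform definition for a single EC2 instance might start out like the following sketch. The region, AMI ID, instance type, and resource names here are placeholders for illustration, not values from this series:

```hcl
# Minimal Terraform sketch: one AWS provider, one EC2 instance.
# All values below are placeholders.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "app_server" {
  ami           = "ami-xxxxxxxx" # placeholder AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "app-server-1"
  }
}
```

The point of capturing even one instance this way is that `terraform plan` can then show you exactly what would change before anything is created, which is the discipline manual EC2 console clicks can’t give you.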

