This is part 2 of a 2-part article about Advanced Integration with RHEV-M. The first part is available here.
In the first part you learned how to perform different operations on the engine from the outside using the API/SDK. In this part you’ll learn how you can influence the engine from the inside, using extension APIs.
In this section we will describe the following APIs:
- UI plugins API (also covered in http://ovedou.blogspot.com and http://www.ovirt.org/Features/UIPlugins) – an API that allows you to extend the Administrator Portal UI. You can add UI components that share the RHEV-M look-and-feel but carry your own functionality, which is useful for integrating your product with the Administrator Portal
- Scheduling API – an API that allows you to change the way the engine schedules VMs in your data center and tailor it to your specific needs
- VDSM hooks – a mechanism that allows you to modify a VM at different points in its lifecycle, useful for modifying or extending the VM’s functionality
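The last of these mechanisms is easy to sketch. A VDSM hook is simply an executable dropped into a directory such as /usr/libexec/vdsm/hooks/before_vm_start/; VDSM hands it the VM’s libvirt domain XML through the file named by the _hook_domxml environment variable, and whatever the script writes back is what gets started. The self-contained sketch below simulates that contract with a toy domain XML and a toy edit; the vCPU bump is purely illustrative, not a recommended hook.

```shell
# Simulate the VDSM hook contract: the domain XML arrives via the
# file named by $_hook_domxml, and the hook edits it in place.
# The XML fragment and the edit below are purely illustrative.
_hook_domxml=$(mktemp)
cat > "$_hook_domxml" <<'EOF'
<domain type='kvm'>
  <name>demo-vm</name>
  <vcpu>1</vcpu>
</domain>
EOF

# The hook body: bump the vCPU count (real hooks more often add
# devices or tune elements such as <cputune>).
sed 's|<vcpu>1</vcpu>|<vcpu>2</vcpu>|' "$_hook_domxml" > "$_hook_domxml.new" \
    && mv "$_hook_domxml.new" "$_hook_domxml"

RESULT=$(grep '<vcpu>' "$_hook_domxml")
echo "$RESULT"
rm -f "$_hook_domxml"
```

A real hook would of course operate on the file VDSM supplies rather than creating its own.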
UI plugins API
If you look at the RHEV-M Administrator Portal, you will see several main UI components:
Continue reading “Advanced Integration with RHEV-M – Part 2 of 2”
Hey all, I’m Jen Krieger and I am Team Inception’s Product Owner and Scrum Master/Agile Coach. Hopefully, you have read Bill Montgomery’s blog post last week about Red Hat IT’s DevOps journey. He referred to the Underpants Gnomes strategy and how to get from point A to point C when there is a big question mark in the middle. You could say my job on the team is to clarify that question mark. I’m going to share my method… but first, let’s break down what the first 6 weeks of team activities looked like from my perspective:
- Formed team in late October and immediately swarmed around a presentation to explain what we are doing, backlog, strategy & roadmap needed, limited time to talk to stakeholders, hey… maybe we should read some information on CI/CD and DevOps.
- Did all the most critical steps of agile best practice surrounding team formation. We selected Kanban as our initial focus for Agile practices.
- Continued to talk to stakeholders… held meetings.
- Presented “What is Inception” to Red Hat IT, complete with Cookie Monster and cookies as a defense mechanism to keep attendees from sleeping during the presentation.
- Continued to read Jez Humble & David Farley’s Continuous Delivery book and flipped to the end (spoiler alert!) to find the maturity model and how to use it … “OMG, why didn’t we read this in the reverse direction?” #readtheindexnexttimedummy
- Presentation. Then beer. Then sleep. Then beer.
Continue reading “Let’s clarify that DevOps question mark”
Came across this post on the Red Hat Services blog about some solid practices for “doing” DevOps. I liked Mr. Hoffman’s simple explanation of what DevOps is. I also liked that he points out how just using a tool (or a set of tools) does not equal “doing DevOps.” Check it out and make sure you leave some comments here or there about what you think.
This is part 1 of a 2-part article about Advanced Integration with RHEV-M.
At CloudOpen Europe 2013, in Edinburgh, I presented a talk about advanced integration with the oVirt engine. This technical article covers the contents of that session.
RHEV-M is a large-scale, centralized management system for server and desktop virtualization. It is based on leading performance, scalability and security infrastructure technologies, focusing on KVM for the best integration and performance. It provides an alternative to vCenter/vSphere, offering an end-to-end IaaS platform.
In this post I’ll show you how you can integrate with the RHEV-M engine, covering both new and cool features, as well as some “old” useful features. Let’s start with the REST-based APIs. These APIs allow you to perform different operations on the engine externally. You can do anything using the REST APIs, even operations that aren’t exposed through the different UI interfaces.
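As a first taste, here is what a call against the REST API can look like from the shell. The manager URL and user below are placeholders for the sketch, not real defaults; listing the VMs is a single authenticated GET against the /api/vms collection.

```shell
# Placeholder endpoint and user; substitute your own manager's values.
RHEVM_URL="${RHEVM_URL:-https://rhevm.example.com/api}"
RHEVM_USER="${RHEVM_USER:-admin@internal}"

MSG="GET $RHEVM_URL/vms as $RHEVM_USER"
echo "$MSG"

# Only fire the request when a password was actually supplied, so the
# sketch is safe to paste as-is. -k skips certificate verification,
# which is convenient against a self-signed setup but wrong for production.
if [ -n "${RHEVM_PASS:-}" ]; then
    curl -s -k -u "$RHEVM_USER:$RHEVM_PASS" \
         -H "Accept: application/xml" \
         "$RHEVM_URL/vms"
fi
```

The same pattern works for any other collection or action exposed under the API entry point.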
Continue reading “Advanced integration with Red Hat Enterprise Virtualization Manager (RHEV-M) – Part 1 of 2”
DevOps is all about culture, right? Yeah, it’s developers and operators working more closely, but it’s more than that. DevOps is a culture that exudes the principles of Agile, Lean, and Open Source, to deliver higher quality products and services faster, more continuously, and more predictably.
So, if we accept DevOps is a culture, and your CIO gives you a mandate to transform his or her organization to a DevOps organization, you’re now effectively responsible for an organizational culture change initiative. That’s the situation I found myself in recently, when Red Hat CIO Lee Congdon asked me to lead a DevOps enablement team in his IT organization. At a few weeks in with our new team, called Inception, I’d like to share our approach:
1. Build culture through practices. Culture is the sum of the habits and behaviors of a group. I don’t believe culture can be changed through mandates. But, Inception’s mission is to change the organization’s culture. So, how?
Continue reading “Red Hat IT Begins Its DevOps Journey”
Countless products use XML files, whether for data persistence, serialization or mere configuration. This is even more true when it comes to the Red Hat middleware portfolio, the JBoss projects having always been keen on using this format for configuration files – on top of the ones specified by JEE, such as the famous (or infamous?) web.xml. While the XML format has some definite qualities, it is not the easiest format to parse, and this often causes issues when integrating a product into an RPM or designing an automated installation procedure.
As I’ve been working on such automation for most of my career, I’ve picked up a bunch of nifty tricks and also designed some useful practices that I wanted to share on this blog.
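To preview one such trick: when libxml2’s xmllint is installed, its --xpath mode gives a robust way to read a value out of an XML file, and a targeted sed substitution is often enough to write one back. The web.xml fragment and the timeout values below are invented for the demo.

```shell
# Create a throwaway web.xml to work on (the fragment is illustrative).
WEBXML=$(mktemp)
cat > "$WEBXML" <<'EOF'
<web-app>
  <session-config>
    <session-timeout>30</session-timeout>
  </session-config>
</web-app>
EOF

# Read a value: xmllint's XPath support is the robust route when it is
# available; otherwise fall back to a sed extraction.
if command -v xmllint >/dev/null 2>&1; then
    TIMEOUT=$(xmllint --xpath 'string(//session-timeout)' "$WEBXML")
else
    TIMEOUT=$(sed -n 's|.*<session-timeout>\(.*\)</session-timeout>.*|\1|p' "$WEBXML")
fi
echo "current timeout: $TIMEOUT"

# Edit in place: a targeted substitution is often enough for automated
# installs, as long as the element appears exactly once.
sed "s|<session-timeout>$TIMEOUT</session-timeout>|<session-timeout>60</session-timeout>|" \
    "$WEBXML" > "$WEBXML.new" && mv "$WEBXML.new" "$WEBXML"
NEW=$(grep '<session-timeout>' "$WEBXML")
echo "$NEW"
rm -f "$WEBXML"
```

The sed route is fragile if the element can occur more than once, which is exactly why the post digs into more robust approaches.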
Continue reading “XML editing with Bash script”
Hi, I’m Steve, a member of the Inception team at Red Hat. The Inception team was pulled from different parts of IT to foster DevOps culture in Red Hat. Though we’ve only been a team for a little over a month, we’ve been trying to do some early projects to make everyone’s lives easier.
We spent quite a bit of time in our early meetings identifying pain points in the current processes. We talked with a few developers and ops folks, noted historical issues, and ran with some general brainstorming. We heard a lot about configuration management, lack of information, redundant and time-consuming tasks, and much else of what one expects when asking tech people what pains them.
Of course, Tim (another Incepticon) and I were itching to write some code so once the team identified a common issue for both developers and ops engineers we jumped in head first.
The rest of this post describes our journey from initially trying to implement a simple solution to improve the day-to-day lives of developers, through the technical limitations we experienced along the way, and finally arrives at the empathy for our developers we’ve gained from that experience. We’ll wrap up with a note on how Red Hat Software Collections (announced as GA in September) would’ve simplified our development process.
Continue reading “Feeling Developer Pain”
In an earlier post we looked into using the Performance Co-Pilot toolkit to explore performance characteristics of complex systems. While surprisingly rewarding, and often unexpectedly insightful, this kind of analysis can be rightly criticized for being “hit and miss”. When a system has many thousands of metric values it is not feasible to manually explore the entire metric search space in a short amount of time. Or the problem may be less obvious than the example shown – perhaps we are looking at a slow degradation over time.
There are other tools that we can use to help us quickly reduce the search space and find interesting nuggets. To illustrate, here’s a second example from our favorite ACME Co. production system.
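One such tool is pmdiff, which compares corresponding metric averages across two pmlogger archives and flags the ones that changed most, a quick way to shrink the search space. The archive names below are hypothetical, so the sketch only prints the command unless real archives are present.

```shell
# Hypothetical archive names; real archives are written by pmlogger,
# typically under /var/log/pcp/pmlogger/<hostname>/.
GOOD=acme.good   # baseline recording
BAD=acme.bad     # recording that shows the regression
RUNMSG="pmdiff $GOOD $BAD"

# pmdiff reports metrics whose average value changed most between the
# two archives; run it only if the archives actually exist here.
if command -v pmdiff >/dev/null 2>&1 && [ -e "$GOOD.meta" ]; then
    pmdiff "$GOOD" "$BAD"
else
    echo "would run: $RUNMSG"
fi
```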
Continue reading “Performance Regression Analysis with Performance Co-Pilot”
Investigating performance in a complex system is a fascinating undertaking. When that system spans multiple, closely-cooperating machines and has open-ended input sources (shared storage, an Internet-facing interface, etc.), the degree of difficulty of such investigations ratchets up quickly. There are often many confounding factors, with many things going on all at the same time.
The observable behaviour of the system as a whole can change frequently even while, at a micro level, things appear the same. Or vice versa: the system may appear healthy, with average and 95th percentile response times in excellent shape, yet a small subset of tasks is taking an unusually long time to complete, just today perhaps. Fascinating stuff!
Let’s first consider the characteristics we’d want the performance tools at our disposal to have for exploring performance in this environment.
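For a flavour of what live exploration looks like, sampling a metric is a one-liner with pmval, which pulls values from the local pmcd at a chosen interval. The sketch below degrades to printing the command when PCP is not installed; the metric name is a standard one, but treat the exact invocation as illustrative.

```shell
# A standard metric available on any host running pmcd; the exact
# invocation is illustrative.
METRIC=kernel.all.load
CMD="pmval -t 1 -s 3 $METRIC"
echo "$CMD"

# Sample the load average three times, one second apart, from the
# local pmcd, but only when PCP is actually installed and running.
if command -v pmval >/dev/null 2>&1 && pminfo "$METRIC" >/dev/null 2>&1; then
    $CMD
fi
```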
Continue reading “Exploratory Performance Analysis with Performance Co-Pilot”
A few short months ago, I was managing an operations team at another firm. There had been a sea change in executive leadership over the summer, and the DevOps transformation that I’d helped to kick off was quickly being unraveled by the sorts of executive shenanigans that can ensue when a C-level departs and leaves an opening. I was open-minded about a change of scenery and got the call of a lifetime from a Red Hat recruiter.
You see, I’ve been involved in the Linux community since around 1998. I helped grow the Triangle Linux Users Group in its early years, and served a term on the steering committee as Vice Chair. When the community was looking for an enterprise-class Linux distribution without the cost of a subscription model, I joined the cAosity project (now gone) and helped deliver CentOS to the Linux community. Open Source was in my DNA, and living in the Raleigh area the success of Red Hat was always right there for me to admire. “Someday I’d like to work there,” I often thought to myself.
This DevOps thing has gotten a lot of traction with me. I’ve been a volunteer co-organizer at Triangle DevOps, and have even given a few public talks on the subject, too.
Continue reading “Incepting DevOps at Red Hat”