Continuous integration is a hot topic these days, and the GNU Project Debugger is keeping up with the trend. Who better to serve as a role model for tracking and exterminating bugs than a debugger?
The GDB Buildbot started as a pet project back in 2014 but is now an integral part of the development process. It provides an infrastructure to test new commits pushed to the official repository, as well as a service (which we call try builds) for developers to submit their proposed changes. In this article, I share the story of our Buildbot instance, where we are right now in terms of functionality, and the plans (and challenges) for the future.
Continue reading “Continuous integration with GDB Buildbot”
With companies generating more and more revenue through their APIs, these APIs have become ever more critical. Quality and reliability are key goals for companies seeking large-scale use of their APIs, and those goals are usually supported through well-crafted DevOps processes. Figures from the tech giants make us dizzy: Amazon is deploying code to production every 11.7 seconds, Netflix deploys thousands of times per day, and Fidelity saved $2.3 million per year with their new release framework. So, if you have APIs, you might want to deploy your API from a CI/CD pipeline.
Deploying your API from a CI/CD pipeline is a key activity of the “Full API Lifecycle Management.” Sitting between the “Implement” and “Secure” phases, the “Deploy” activity encompasses every process needed to bring the API from source code to the production environment. To be more specific, it covers Continuous Integration and Continuous Delivery.
Continue reading “5 principles for deploying your API from a CI/CD pipeline”
Join us for the next online DevNation Live on Thursday, July 19th at 12pm EDT for Container pipeline master: Continuous integration + continuous delivery with Jenkins, presented by Siamak Sadeghianfar, Red Hat principal technical product marketing manager for Red Hat OpenShift.
In this session, we’ll take a detailed look into how you can build a super slick, automated continuous integration and continuous delivery (CI/CD) Jenkins pipeline that delivers your application payloads onto the enterprise Kubernetes platform, Red Hat OpenShift. You’ll see how zero-downtime deployment patterns can be part of your release process when you are using a container platform based on Kubernetes.
Automating your build, test, and deployment processes can improve reliability and reduce the need for rollbacks. However, we’ll show you how rollbacks can be handled too.
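The zero-downtime and rollback ideas above can be sketched in a few lines. This is a minimal, illustrative blue-green deployment sketch in Python, not the actual Jenkins or OpenShift API; the `router` dict and `health_check` callable are hypothetical stand-ins for a real traffic router and readiness probe.

```python
def deploy(router, new_version, health_check):
    """Stage new_version alongside the live one; flip traffic to it
    only after it passes a health check (blue-green style)."""
    if health_check(new_version):
        router["active"] = new_version  # atomic traffic switch
        return True
    # Health check failed: traffic never left the old version, so
    # the "rollback" is simply not flipping the router.
    return False
```

Because traffic only moves after the new version proves healthy, a failed release never needs a traditional rollback step: the old version was serving requests the whole time.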
Register now and join the live presentation at 12pm EDT, Thursday, July 19th.
Continue reading “July 19th DevNation Live: Container pipeline master: Continuous integration + continuous delivery with Jenkins”
Typical CI process (source: www.pepgotesting.com)
Continuous Integration (CI) is a phase in the software development cycle where code from different team members or different features is integrated together. This usually involves merging code (integration), building the application, and carrying out basic tests, all within an ephemeral environment.
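The stages described above can be sketched as a simple fail-fast runner. This is a minimal Python sketch under assumed tooling: the `git merge` and `make` commands in `STAGES` are placeholders for whatever integration, build, and test commands a real project uses.

```python
import subprocess

# Hypothetical stage commands for a typical CI run; a real pipeline
# substitutes its own merge/build/test tooling here.
STAGES = [
    ("integrate", ["git", "merge", "--no-ff", "feature-branch"]),
    ("build",     ["make", "all"]),
    ("test",      ["make", "check"]),
]

def run_pipeline(stages):
    """Run each stage in order, failing fast on the first error."""
    for name, cmd in stages:
        print(f"--- {name}: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            print(f"Stage '{name}' failed; aborting pipeline.")
            return False
    return True
```

CI servers like Jenkins are, at their core, elaborations of this loop: they trigger it on every push, run it in a fresh (ephemeral) environment, and report the pass/fail result back to the team.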
Continue reading “Continuous Integration: A “Typical” Process”
I’m thrilled to announce the availability of a mini-book about Eclipse Vert.x. This book focuses on the development of reactive microservices in Java and covers reactive systems and reactive programming.
Writing a book, even a mini-book, is a tough task. While writing code and writing a book are very different experiences, you can apply the same process and good practices. I would like to list a couple of tips I’ve used to make the writing a bit easier.
Continue reading “Continuously Building a Book”
It’s been a while since Red Hat released version 3.3 of OpenShift Container Platform, and this version is full of features.
Continue reading “Using Pipelines in OpenShift 3.3+ for CI/CD”
In the current world of DevOps, Continuous Delivery, Microservices, and PaaS, many organizations want to know how Software Governance practices and requirements fit. Some of this comes from a regulatory perspective, ensuring compliance (e.g., HIPAA/PHI, SOX, PCI) and auditing requirements are met. Another perspective focuses on existing technology standards, design practices, and application architecture. At the same time, developers and teams are being told to be more agile, adaptable, and self-directed. How do we achieve the latter while mitigating the risks associated with the former?
I would argue that a “Descriptive” approach to Software Governance is required. In my perfect world, Solution Architectures are monitored for exceptions as they progress through the delivery process. The technical underpinnings of systems, in terms of infrastructure and software, are “described” in code and configuration that can be easily audited against established policies. The runtime implementation of a particular solution is then transparent to all interested parties. In many ways, this is just an extension of Open Source practices to the delivered solutions and systems themselves.
Continue reading “Shouldn’t Software Governance Practices be more Descriptive than Prescriptive?”
The Journey to Delivery Efficiency
For as long as I can remember in my career in the Information Technology industry, there’s been talk about faster time-to-market, reduced waste, and ideas on how to exceed (or simply meet) customer expectations. You get the picture. This notion of how to do things faster, maintain quality, and give the customer what they want is proclaimed in the practices of Lean, Lean Six Sigma, Agile Development (including Scrum, Kanban, and Scrumban), Incremental Development, XP (Extreme Programming), Test Driven Development, and DevOps, to name a few (whew!). What strikes me, though, is that the faster we try to deliver solutions, the more likely I am to hear things like “let’s just get started” or “we don’t really need architecture”. Here’s an example I heard a few weeks ago in a meeting between a consultant and a project delivery team:
Continue reading “Faster, Better, Stronger, Wiser”
In Part 1, we talked a bit about this DevOps thing and why people won’t stop talking about it. In Part 2, we’ll talk about the areas where you can change your IT focus today to help benefit from DevOps.
A classic mistake is to focus primarily on the tools associated with successful DevOps shops. It’s not as if you can bring up your own Deployinator and suddenly become as high-functioning an IT shop as Etsy. The tooling is important, but won’t succeed if used for its own sake. The most relevant tooling is used to support a culture that can consume it effectively.
Continue reading “DevOps in Straight English – Part 2 of 2”
If you’re like me, you may be suffering a bit of buzzword fatigue, especially relating to how this word is used (or misused) within the IT community. But for those of us who have been a part of the community for a while, it holds deeper meaning than the oft-repeated platitude of “Software Developers and Sysadmins working together, riding unicorns over rainbows”. Okay, while I may have gotten slightly carried away, you get the point.
What is DevOps to the broader community that embraces it, and is helping even now to define it? What does that even mean for Red Hat’s IT efforts? We’re going to dive deeper into both questions in this installment.
Continue reading “DevOps in Straight English – Part 1 of 2 – Enter the Buzzword”