Join Steve Speicher, Head of Developer Tools Product Management, to ask your questions about what Red Hat and open source communities are doing around containers, Kubernetes, OpenShift, runtimes, serverless, pipelines, security, and more. This session will be moderated by Parag Dave of the Developer Product Management team.
Steve Speicher (00:05):
Hello, everyone. Welcome to Ask the Experts. I'm Steve Speicher. I lead the Developer Tools Product Management Team here at Red Hat, and I will be your expert for the day. So, I'll do my best; ask away with your questions. I've been working at Red Hat for five years, and many years in the developer productivity and tool space, so I look forward to your questions. There's a Q&A space, and I'll share some resources as we talk, so if we need to use them, that'd be great. And with that, I'll turn it over to my colleague Parag.
Parag Dave (00:43):
Thank you, Steve. I have the pleasure and the responsibility of being the moderator today. So, please fire away with your questions, and let's see if we can stump Steve today. Hi, I'm Parag Dave. I'm a member of the Product Management Team at Red Hat on the Developer Tools Team, and I'm looking forward to chatting with all of you today, seeing a lot of the things that are top of mind for you, and helping answer them.
Steve Speicher (01:12):
As mentioned, there's a Q&A there on the side, so feel free to add questions in. We have some questions we've been hearing quite often, even in the last couple of days in some sessions we've been part of. So, I'd be interested to hear any other questions you have, or Parag can pick them from the list.
Parag Dave (01:34):
Yes. Let's pick one, Steve, to get the ball rolling. So, what are the big changes you see for developers in the next 12 months?
Steve Speicher (01:44):
No doubt a lot of things are changing in our industry normally, but especially in 2020, there are a lot of different ways of working that we'll see. We see it even with this virtual event and how we do things: we've changed the way we get together and do group meetings or work together. When we look at solutions, we've seen great interest in Red Hat CodeReady Workspaces, which is based on Eclipse Che and allows for full development environments to be hosted remotely. You can then simply access them from the luxury of your browser, at home or wherever you are. That gives a developer access to those tools, but also in a controlled environment. So, I think that will change how developers work compared to the past.
Steve Speicher (02:36):
But not just from those aspects. I think there's a lot of change, as we've seen in the world, with the technology and what it means for actually delivering business value to customers. Everyone is coming forward looking at themselves as a software company, an innovation company, and at how they deliver goods and services, and so they're moving more and more towards adopting practices like DevOps. It seems like so long ago in a sense, but teams are still not really doing the heart of what that means, and tooling can help with that. Platforms like Kubernetes and OpenShift really help drive and enable that capability. And then there's tooling we see on top of that: how you can apply a GitOps methodology or improve CI/CD through those tools. So, we see a lot of those things impacting how teams deliver and develop software.
Steve Speicher (03:31):
And of course, everyone has talked over the last year about the technologies around serverless and functions. Again, it's about being able to have that compute when you need it: quickly scale up, scale down, as we've seen working with different platforms, right? It's important that when you have certain peak demands, you rise to that peak and then shrink back down as that resource is not needed anymore. So, we see these common patterns, and developers having to worry about more and more things. Another great thing is we'll see a reemergence of Java, I would say. Java has gotten a lot of interest with technologies like the Quarkus project, where Java can now be used in a more cloud-native way: you can use it in serverless contexts, where it leverages fewer resources and has quicker startup and shutdown times. So, a lot of great things there.
Parag Dave (04:28):
Yeah. Cool. That's good. We're getting a few questions coming in now. One of the questions, from Chet Hosey, is around CodeReady Workspaces and pipelines picking up the cluster-wide proxy. He's asking, "Any plans for CodeReady Workspaces and pipelines to pick up cluster-wide proxy and trust settings? Connecting to Git or working with Maven builds from the developer tools isn't guaranteed for customers in a proxy environment." So, that's what the question is about.
Steve Speicher (05:00):
Yeah. So, we've gone back and forth on the exact details, and it'd be good to follow up with some of the teams that are actively engaged in this, because once you get into working in these environments, it requires some close coordination. The cluster proxy settings are intended for certain usage within cluster communication, but certain services that run on top of it sometimes require their own proxy settings, because what happens in the cluster may be a little different from what you have as far as access to Maven repos, etc. So, that takes some careful coordination. We just had a release, CodeReady Workspaces 2.1, which we shipped on April 14, and which has improvements around these limited-connectivity and proxy settings to help in those scenarios.
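For context, OpenShift 4 exposes the cluster-wide proxy as a `Proxy` resource named `cluster`. A rough sketch of inspecting and setting it follows; the proxy hostnames and the trusted CA config map name are placeholders, not details from the session:

```shell
# Inspect the cluster-wide proxy configuration on OpenShift 4.
oc get proxy/cluster -o yaml

# Sketch of setting it; hostnames and the CA config map are placeholders.
oc patch proxy/cluster --type=merge -p '{
  "spec": {
    "httpProxy":  "http://proxy.example.com:3128",
    "httpsProxy": "http://proxy.example.com:3128",
    "noProxy":    ".cluster.local,.svc",
    "trustedCA":  {"name": "user-ca-bundle"}
  }
}'
```

As Steve notes, services running on top of the cluster (Maven builds or Git clones inside a workspace, for instance) may still need their own HTTP_PROXY/HTTPS_PROXY settings, since the cluster-wide object is aimed at cluster components.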
Parag Dave (05:55):
Here's a good question from Richard Gomet. He's asking, "There are so many ways to build containers. So, what are the key emerging technologies for simplifying application deployment?"
Steve Speicher (06:08):
Well, let's see. That is an interesting question. So, I think the key parts: building containers sort of started with the Docker image format and docker build, and there are many different ways that has been popularized, and different innovations around it. Some of them we see in how you can build without needing a Docker daemon. There are projects like Buildah, which lets you simply build up a container without a daemon, and Google has a similar effort in Kaniko. Then you see some really interesting innovation around Jib, the Java image builder, which has a nice way to leverage Java primitives; there's a nice Maven plug-in, and then you can build with it. And then there are abstractions around that with Tekton tasks, which are customized for certain versions of those.
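As a flavor of the daemonless approach Buildah takes, here is a minimal sketch; the image names and paths are hypothetical:

```shell
# Build from an existing Dockerfile, no Docker daemon required.
buildah bud -t quay.io/example/myapp:latest .

# Or assemble an image step by step, with no Dockerfile at all.
ctr=$(buildah from registry.access.redhat.com/ubi8/ubi-minimal)
buildah copy "$ctr" ./app /opt/app
buildah config --entrypoint '["/opt/app/run.sh"]' "$ctr"
buildah commit "$ctr" quay.io/example/myapp:latest
```

The step-by-step form is what makes Buildah scriptable inside pipelines and unprivileged containers, which is exactly where a Docker daemon is awkward to run.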
Steve Speicher (07:11):
There's also a developer preview feature we're working on that rethinks builds on top of the platform and abstracts away some of those things. So, yeah, there are a lot of interesting ways. On top of that, there are lots of various plug-ins for IDEs too: VS Code, Eclipse Che, etc., for when you want to learn more about the Dockerfile format and so on. And lastly, we've changed quite a bit how we've done source-to-image in OpenShift. There's now a nice standalone tool that can just emit a Dockerfile, and then you can kind of pipe that Dockerfile into any of the other tools that consume Dockerfiles. So, a lot of good things there.
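The standalone source-to-image flow he describes looks roughly like this; the repo URL and image tags are placeholders:

```shell
# Emit a Dockerfile from a source repo plus a builder image...
s2i build https://github.com/example/app \
    registry.access.redhat.com/ubi8/openjdk-11 \
    --as-dockerfile ./Dockerfile

# ...then feed it to any Dockerfile-consuming builder, such as Buildah.
buildah bud -t quay.io/example/app:latest .
```

The point of `--as-dockerfile` is exactly the decoupling mentioned above: S2I generates the build recipe, and any of the newer daemonless builders can execute it.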
Parag Dave (07:56):
Right, thanks. Anthony Antonet had a good question. "What's the difference between CodeReady Workspaces, CodeReady Containers, and CodeReady Studio?"
Steve Speicher (08:05):
Ah, a great question. So, CodeReady Containers. I wonder if I should flip through my slides here real quick. I didn't mean to talk too much about some of these things, but I did have some material in case these questions came up. So, let's talk real quick about CodeReady Containers. If you've ever used Minishift or the CDK, CodeReady Containers is the product that you run on your laptop or your desktop. It's a single-machine instance of OpenShift version 4. It doesn't rely on Vagrant or VirtualBox-style virtualization but uses local hypervisors.
Steve Speicher (08:56):
So, why do we do this? This is important because we wanted to minimize the dependencies, but we also wanted a way, in version 4, to use the same installer you use to install OpenShift to actually lay down the bits that build up this local instance. And we made it real simple: you do crc setup and crc start, and then you can get the status. Real easy to do. You can go to try.openshift.com and give it a go there. Download it.
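The flow boils down to a few commands; this is as of the CRC releases current at the time, so check `crc --help` on your version:

```shell
crc setup     # prepare the host: hypervisor, networking, etc.
crc start     # boot the single-node OpenShift 4 instance
crc status    # check whether the cluster is up
crc console   # open the OpenShift web console in a browser
```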
Steve Speicher (09:28):
So, CodeReady Workspaces. Why do these things have "CodeReady" at the beginning? "CodeReady" is a portfolio name: it's "CodeReady" and then the product that's part of it. "Containers" is the container development tool to run OpenShift. "Workspaces" is hosted Eclipse Che, our productized Eclipse Che. What this gives you is the ability to take a remote workspace, like I was saying earlier, host it in a cluster, and render the plug-ins and the experience in a browser. So, this gives you the power of your desktop IDE, but containerized, running natively in Kubernetes and OpenShift, taking advantage of that to leverage your infrastructure, in a setting that is similar to production.
Steve Speicher (10:27):
A lot of great qualities. "CodeReady Studio" is the name for our traditional Eclipse desktop tooling and the plug-ins associated with it for the various Red Hat technologies, middleware plug-ins, etc., which we provide in Workspaces as well as in Studio.
Parag Dave (10:49):
I know we had some technical difficulties during the previous roadmap session. So, Eric Murphy had a question: "Is it possible to review highlights of the CodeReady roadmap in brief?"
Steve Speicher (11:05):
Oh, that was already shrunk down to 30 minutes; it was a lot to cover. I guess the quick highlights are: we're moving forward quite aggressively in a number of areas and trying to track where we're seeing a lot of needs. I covered a couple of things under the CodeReady portfolio name, but there are a lot of things we work on across Red Hat and the open source communities. There's the tooling with the Knative project and the core service with OpenShift Serverless: not only can you get the core Knative component and the kn CLI, you also get VS Code plug-ins.
Steve Speicher (11:44):
You also get that experience within Eclipse Che as it evolves forward; on the roadmap is supporting integration into tools like odo, expanding the developer-focused CLI, as well as the OpenShift console, the web UI experience: how you can monitor, interact, and build some of those components. And OpenShift Pipelines is a mechanism by which we expand: we continue to support Jenkins as a tool for building your traditional CI/CD experiences, but bring things forward with a cloud-native experience with Tekton.
Steve Speicher (12:23):
So, we're building out that core capability. As that technology matures upstream and goes GA, we'll also be shipping it supported within OpenShift, building out the experience again in the web console, enabling it through the CLI with odo, and doing some experimentation across a number of tools to support different GitOps scenarios and configurations: meaning how we can build the right Git infrastructure, lay down the right manifests and templates, and put tooling around it to help automate it and handle the triggers. I could go on for a while, but maybe I'll stop there; there's the roadmap session, so I'd recommend watching the replay of that. There's also an OpenShift platform services one, which covers a number of things as well.
Parag Dave (13:12):
Yep. No, that makes sense. Watching the session on demand would be great, because it'll give you a chance to catch up on the entire gamut of the CodeReady portfolio that we covered there. Walid Shaari had a question: "What are your thoughts on local development versus remote?" Like CodeReady Workspaces and OpenShift. So, what are your thoughts about local versus remote?
Steve Speicher (13:36):
It's interesting you brought up that question, because we actually have a session planned for later, on the three ways of developing with OpenShift, and it kind of touches on that: why you'd do local, why you'd do remote, and a kind of hybrid model. It depends on a lot of things. I mean, the obvious answers are: how big is your laptop? How good is your internet connection? What regulatory or workplace restrictions do you have, and where does your code actually live and run? For a number of our customers, their consultants can't have the source code on their machines at all. So, when you come down to it, from a true developer point of view, again, it depends on a few things. I've traditionally gone for the most RAM, the most cores, the best solid-state drive, and set things up locally.
Steve Speicher (14:27):
But then again, you're usually highly optimized for one kind of application development pattern. So, if I switch projects a lot, then I'm going to be switching my environment a lot. There are a lot of things you can do in the cloud that really help with that, like caching of Maven repos or dependencies, or other caching services you could leverage; your data center could have a strong connection out to those, or have bigger caches. So, you can do optimizations there.
Steve Speicher (15:00):
And so, yeah, I don't always like the answer "it depends," but I feel like it strongly depends in this case on what you're looking to do. The way I like to say it is: if you're working on a project, like, 50% of your time, then maybe it makes sense to optimize locally, and then be set up to work remote when you need to switch context and say, "That thing I worked on six months ago, that my boss keeps reminding me about, needs some high-severity issues fixed before I can continue on my new fun project."
Steve Speicher (15:31):
Then you can swap over to a workspace in CodeReady Workspaces. You can have a new environment up and running in a minute or less, or certainly around a minute. You can get what you need done there, then just tear down that IDE development environment and move on. So.
Parag Dave (15:51):
Yep. That's good. And Marvin Streinfeld had a question along the same lines: "How usable are CodeReady Workspaces today?" Do we actively use CRW at Red Hat, and how is the acceptance by developers? It's a good question; it's an exciting space to be in.
Steve Speicher (16:02):
It is a good question. I was a big skeptic a long time ago, back when I worked around the Eclipse Orion project and knew about hosted IDEs. I gave it a try, and it was tough for me. So, I was one of the early skeptics, but I can really see, as I described, some cases where I think it makes a lot of sense. The short answer is: yes, there are various teams at Red Hat using it successfully. Like I mentioned, there was a new release: we just put out 2.1. 2.0 was pretty much a major revision update, and 2.1 helped smooth some of those things out. We're seeing a lot of adoption as well, and overall good feedback; it seems to be fairly successful as far as developer usage goes.
Steve Speicher (17:04):
Like any product, there's always room for improvement. So, we're looking at some core use cases where we can help streamline, improve the performance, and increase the number of plug-ins that are available. It's been a big win, too, just to have the VS Code plug-in ecosystem at your fingertips, because it uses the same kind of framework to build up the plug-ins and actually renders them the way VS Code does. You just have to go through a packaging step and ensure some APIs aren't used or are built in a certain way, and it's usually a pretty quick process to containerize a plug-in and make it available for CodeReady Workspaces.
Parag Dave (17:46):
Yep. Great answer. When we talk internally, one of the proof points for us was seeing how it speeds up certain areas of development for us at Red Hat, right? When we started using Eclipse Che internally, we saw that the amount of time it took to actually onboard and troubleshoot problems was drastically reduced, and hence we were able to spend more time on actual coding versus troubleshooting. One of the questions is—
Steve Speicher (18:14):
Ah, yeah, sure. No, go ahead.
Parag Dave (18:18):
So, one of the questions was from Chris Bolls: "Are there any plans for CodeReady Containers to support VMware? We have security concerns regarding containers on laptops, so we want to provide developers VMs for container development." So, I'm assuming this means running CodeReady Containers from within VMs.
Steve Speicher (18:36):
Yeah. So, that's an interesting question. We haven't gotten that request much. A lot of the requests recently deal with proxy settings and limited or no network connectivity, so kind of air-gapped environments, and also, "How can I just make it smaller?" Because the footprint for CodeReady Containers is fairly high compared to other tools out there that run bare-bones Kubernetes. So, the short answer on VMware is: right now, we don't have any plans for that, but I can take a note of it, follow up, and see what the interest is.
Parag Dave (19:34):
The question is, "Is odo tool relationship between—" Well, sorry, the question has changed. "Is odo replacing oc?"
Steve Speicher (19:48):
No, it is not. odo and oc serve two different usages. It was a hot debate when we came up with the concept of odo, between those teams and the ones who own and maintain oc. oc is really, in some ways, a superset of what the kubectl CLI tool does, and in some ways it masks off certain things that have limited support. But it gives you the full thing; I like to say oc is the full OpenShift experience. And we found that there are certain developers who really just want to work with higher-level constructs: "Here's my application, here's a component. I want to tie in services. I want to push it to the cluster, and I just want it to run, or I want to iterate on it. And I don't want to get tied up in deployments or deployment configs and pods or any of those concepts; I just want to push it."
Steve Speicher (20:42):
So, we felt it was important to separate the two. Simple things: when you ask odo for its usage, you just get things that are focused on odo and that developer, who is a very different person from someone who might be using oc to automate things through some automation tools, or using it for higher-power cases like working with config maps and checking pod status, etc.
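To make the contrast concrete, here is a sketch of the two styles; the component and resource names are made up, and the odo syntax follows the `create`/`push` model current at the time:

```shell
# odo: application-centric inner loop.
odo create nodejs myapp   # define a component from source in this directory
odo push                  # build it and deploy it to the cluster
odo watch                 # push again automatically on local file changes

# oc: the full OpenShift experience, closer to the raw resources.
oc get deployments,pods
oc describe configmap myapp-config
```

The split keeps odo's surface small for iterative development while oc stays the general-purpose administration and automation tool.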
Parag Dave (21:11):
Yep. Someone had a question about CodeReady Containers and its architecture. Is it a single-node mode of OpenShift? Is the architecture any different, or is it similar to OpenShift? I guess: what's the difference between CodeReady Containers and regular OpenShift?
Steve Speicher (21:26):
Yep. Well, when you think about it, it's exactly the same software. In the end it's the same: OpenShift runs with a minimum requirement of three masters and at least one worker in order to have something working. You can have more than that, obviously, but it's usually a multi-node setup. What CodeReady Containers does is, the engineering team basically runs the installer, runs an instance of OpenShift, and does some work on it. We can get into more details, and I'll hand-wave over some of them, but the nodes are labeled in such a way that everything fits: normally, worker pods don't land on masters and master work doesn't land on workers, so we have to do some work to set up the labeling and the taints and tolerations so that all the components will run on a single node.
Steve Speicher (22:27):
And then, once that happens, we kind of cut a custom bundle: we wait until a certain rotation occurs, cut the instance, and then we release it. That gives you the ability to do that crc start, which then starts pretty quickly, and you wait for all the Operator and normal pods to start up on the cluster itself. So, that's the main difference. And the CodeReady Containers team is working within the broader OpenShift team to improve that experience and look at how we can better support a single-node setup for these cases, because, as you know, the install and upgrade path requires kind of a bootstrap node to get the whole process going, and that's hard to do on a machine with very limited resources.
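To give a rough idea of the single-node setup being hand-waved over: the one node ends up carrying both role labels so that control-plane and application pods can all schedule on it. This is a conceptual sketch, not the actual CRC build procedure, and the node name is a placeholder:

```shell
# On a CRC cluster the single node shows both master and worker roles.
oc get nodes

# Conceptually, the setup amounts to giving one node both role labels
# and removing the master taint so regular workloads can land on it.
oc label node crc-node node-role.kubernetes.io/worker=''
oc adm taint nodes crc-node node-role.kubernetes.io/master-
```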
Parag Dave (23:15):
Great. Amir Singh has a question about the Jenkins Operator: "You have it on your roadmap for next year. So, is it going to be based on the good old Jenkins, or is the Operator going to be based on Jenkins X?"
Steve Speicher (23:29):
Okay. So, it's going to be based on Jenkins. Jenkins itself is the main component for executing pipelines and jobs as defined in Jenkins; there are the various workers or agents that are part of that, and the Jenkins master that runs it. Jenkins X is an experience around tools like Jenkins; in fact, Jenkins X is moving towards supporting Tekton as a main component. And if you look at our strategy with OpenShift Pipelines, it's similar in a sense: we're continuing to support Jenkins, and we're improving that support by providing a Jenkins Operator, because that's how we deliver these kinds of services on our platform. And then we continue to evolve the future of cloud-native CI/CD tooling with Tekton and add features there. So, I would say: with Jenkins, we'll continue to maintain what we have, and the Operator helps with the operational aspects of Jenkins, but don't expect us to invest in more features there; we're moving towards the Tekton-based solution.
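For a taste of the Tekton direction, here is a minimal Task and a run of it. The names are illustrative, and the API version shown, `v1beta1`, was the current one at the time:

```shell
# Define a trivial Tekton Task on the cluster.
oc apply -f - <<'EOF'
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-build
spec:
  steps:
    - name: build
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        echo "building..."
EOF

# Start it and stream the logs with the Tekton CLI.
tkn task start echo-build --showlog
```

Unlike a Jenkins master, there is no long-running CI server here: each step runs as a container on the cluster, which is the cloud-native shift described above.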
Parag Dave (24:46):
Chad Hosey has a question about the Java console: "Is it coming back? Are we bringing back that integration that used to be in 3.x in the OpenShift console going forward?"
Steve Speicher (24:57):
Oh, it sounds like we might have had that question yesterday too. So, no. We've been talking about ways of doing that. The console evolved quite a bit in 4.x; the OpenShift console was completely rewritten, in a sense. The previous one was Angular-based, and we moved to React and merged with the core Tectonic web console and some of the concepts from there, but we didn't redo all the different integrations we had. I actually got a ping from our monitoring PM about what we can do there. So, I don't have a timeframe on it, but it's something we definitely need to look at.
Parag Dave (25:40):
Conatsul Dunn has a question about the future. "We know that today there are a lot of technologies and a lot of cloud offerings out there. It's hard for engineers and developers to keep up with them all, and companies are looking not to lock into a single cloud provider. So, if you look into the crystal ball, what's the future you see for software developers and engineers? How do they handle this ever-changing technology landscape while also not locking into a single cloud?"
Steve Speicher (26:11):
Yeah, that's an interesting question I think about a lot, because at Red Hat we do a lot of focusing on the multi-cloud and hybrid cloud experience; that's where our customers really look for us to shine. One of the things we do, in building on a platform like OpenShift, is build in some basic capabilities that give us that portability. Some of what we've done in OpenShift predates, or was ahead of the curve of, equivalent features in Kubernetes, so we're doing work to enable more-standard tooling and standard features that make things more portable, or that allow the customer to opt in, because sometimes they may have a need. The question comes up around Kubernetes Deployments versus OpenShift DeploymentConfigs.
Steve Speicher (27:01):
And we added a feature in our web console to default to Deployments. Then you can pick DeploymentConfigs, or you can pick a Deployment, or, if you have Knative installed via OpenShift Serverless, you can have that. So, the one thing we try to do is get behind the clear open source projects that we see our developers getting behind around containerized applications and Kubernetes. We try to fill gaps when we can, help foster the community, and work with other partners on what we can do there. The tooling will keep changing, but I think we've done a good job of sticking with the tools where the developers are. In the Java IDE space, VS Code is emerging, and you can see some top plug-ins in VS Code land contributed by Red Hat. So, we'll continue to evolve how we participate in those things.
Parag Dave (28:07):
Cool. Great. We have about two minutes left. There's a question in the chat that I want to bring in. "What are the different security and vulnerability scanners which are available?"
Steve Speicher (28:22):
You might be a better one to answer this, Parag.
Parag Dave (28:24):
Thanks. So, one of the products that I lead is CodeReady Dependency Analytics, which is a vulnerability scanner for your application stack. You can run it directly in the IDE: from CodeReady Workspaces, from VS Code, from IntelliJ. The idea is that it takes your application manifest and runs a scan on it using vulnerability data that currently comes from the NVD. But we have an exciting partnership that we've formed with Snyk, so we'll now be providing analysis based on Snyk Intel, which is a curated database and one of the most comprehensive vulnerability databases out there.
Parag Dave (29:01):
So, developers can basically address the security vulnerabilities present in their stacks before the build process even runs. This way you can bring DevSecOps into your development life cycle right from the beginning and build it in yourself. You don't have to worry about finding out about issues down the road and then coming back and retroactively fixing them. You can create CI pipeline tasks for it, you can run it directly from the command line, and, like I said, from the IDE itself. It's an exciting thing; take a look at it. It's called CodeReady Dependency Analytics, and the Red Hat Developer site has more detail about it.
Parag Dave (29:33):
So, Steve, to close, we've got less than 30 seconds left. Anything you want to say in closing? I want to thank you very much for taking questions.
Steve Speicher (29:42):
No. Yeah. I want to say thanks a lot for the great questions and attending. It was very insightful to see what people were asking about. So, thanks a lot for attending.
Parag Dave (29:50):
Thank you. Take care, everybody.
Steve Speicher leads Product Management for Developer Tooling at Red Hat. He has helped lead the Developer Experience with OpenShift & Kubernetes since Kubernetes 0.13.
Parag is a member of Red Hat’s Developer Tools Product Management team. Parag leads the SaaS and AI solutions for Red Hat Developer Tools. Parag has a diverse IT background with over 20 years of experience in enterprise software.