Video

9 Steps to Awesome with Kubernetes

  • By Burr Sutter
  • Published: 2018-11-13

Everybody seems to be rocking with Kubernetes! Even your favorite repos at GitHub are running on top of it. Don't be the last developer/architect to board this bullet train. Come and learn a LOT in this session about Kubernetes - from getting started to staying productive. We will provide numerous practical tips & techniques that will take you from cloud newbie to cloud-native.

Installation & Getting Started
Building & Running Linux container Images
Your JVM goes boom - OOMKilled in Kubernetes
Peering inside your container
Streaming logs
IDEs and Debugging
Load-balancing and service discovery
Configuration management
Blue/green and Canary deployments
Istio Service Mesh for advanced microservices architecture

Transcript

Well, hello and good morning. It's an honor for me to be here this morning with all of you at Devoxx, because this is obviously the greatest conference for developers there is, globally. If you don't feel that way, I actually know this to be true, because I see all the different conferences around the globe, and this is one of the longest-standing, highest-quality developer conferences out there in the market. So kudos to you for being here today, and of course for buying your tickets before it all sold out. Now, we have a lot of ground to cover today. I tend to talk really fast; I'll try not to talk too fast, but we have easily four or five hours of things to talk about, and we're going to try to jam it all into three hours. Normally there's a break in the middle, but we have a lot to cover, so let's see how far along and how fast we move; we may or may not get the break. Is that okay? Okay, now

I'm a bit nervous: you are a large audience, and this could really go sideways on me. I have all kinds of crazy things to show you as it relates to Kubernetes, OpenShift, and technologies like that. Quick show of hands: how many people have put hands on Docker at this point, done some docker run, docker build? Fantastic, that's practically everybody, which is what I expected. How many people have put hands on Kubernetes at this point? All right, a fair number of you, but maybe only 20 or 25 percent. So this is kind of an introduction to Kubernetes, but I also have lots of little advanced tips and tricks to show you as well, depending on how far along we get. There's a lot in this presentation. We're going to spend a lot of time showing you things from the command line with kubectl (pronounced "cube C-T-L", "cube cuddle", or "cube control", depending on what name you use for it), but at the same time I want to show you a bunch of demonstrations of the different capabilities you can find in this Kubernetes world. So we're going to have a lot of fun. Now, the

first thing you should note is the slide deck at bit.ly/9stepsawesome; you see it at the bottom of the presentation. You have to have that link, because there will be a bunch of other links in there that you'll want later on, so make sure you grab it, and then also look at the GitHub link. I have all the notes for my presentation in that GitHub repo, as well as the sample application for this introduction, so you don't have to take a lot of notes, because I already wrote them all down for you. As a matter of fact, I refer to my own notes, because I can't remember all these things either. We're going to dig into that GitHub repo in great detail, and you're going to want access to it; like I said, there's a ton of other links in this presentation you'll want access to as well. I also teach this as a three-hour class, mostly for O'Reilly with Safari Live. Typically that's a virtual event; the last time I ran it was last week, when I was sitting in my hotel room in Turkey and broadcasting out live. We also run an Istio one on a regular basis, a three-hour Istio deep dive with the Safari Live team, so keep that in mind.

Now, there's going to be a raffle. We're going to have a little fun, maybe towards the end, maybe somewhere in the middle, depending on how sleepy I think you are, because for me it's really early in the morning; I don't know about you folks, but for me it's 3 a.m. or something of that nature, so I have totally not switched into the right time zone. Maybe this will help wake you up: we're going to give away a couple of Chromebooks. I actually have some Chromebooks here. You must be in the room to win; for those of you watching the livestream or the recording, you won't be a part of that. But just send out a tweet: follow @burrsutter, send out a tweet with a picture of the session, and of course mention @burrsutter and Devoxx, and then we have a little system that simply picks randomly from the tweets it sees out there, and we'll give away a couple of Chromebooks. Is that cool? You don't like Twitter? No? All right, then email me. I'm kidding, just go to Twitter. Okay, so normally I run this as

kind of a hands-on session, but we won't be able to do that today, because you don't have power strips for your laptops, and we don't even know whether we have enough network connectivity for everyone to be trying this sort of thing. Plus, the setup by itself is the hardest part; I have already done the setup here on my laptop. Getting a working Kubernetes environment can actually take a long time: we're in the neighborhood of, let's say, 30 minutes if you have really good hardware and you really know your hardware well, up to somewhere around three days. Even in the case of this little lab setup I have here, with a minikube or a minishift, it can still take an hour or two, and often when I'm putting students through this little class, they might take the whole three-hour class just to get set up. So just be aware that setting up can be very challenging. I do have it documented on that 9stepsawesome GitHub repo, and I have some links to tips for virtualization drivers. The biggest challenge for most people is, number one, do they have enough hardware, and, number two, have they ever set up a virtualization hypervisor on their laptop at all before. It might be turned off in your BIOS; you might never have used your Mac that way before, or never used Linux that way before. That's where it gets challenging. If you have a lot of experience with virtualization on your laptop, then you're probably fine; it won't be that hard.
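As a rough illustration of that local-setup step, a minikube launch from around the time of this talk looked something like the following command sketch; the driver choice and resource sizes here are assumptions you would adapt to your own hardware:

```shell
# Start a local single-node Kubernetes cluster inside a VirtualBox VM.
# Swap the driver (virtualbox, kvm2, hyperkit, ...) for whatever
# hypervisor your laptop actually has enabled.
minikube start --vm-driver=virtualbox --cpus=2 --memory=6144

# Point kubectl at the new cluster and sanity-check that the node is Ready.
kubectl config use-context minikube
kubectl get nodes
```

These commands assume a working hypervisor underneath; as the talk warns, getting that prerequisite right is usually the slow part.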

Okay, so this is one of the things I use in all my presentations. I've spoken at Devoxx now for, I don't know, the last twelve years or so, so I've probably used this before; if you've seen it, please forgive me, because I use it all the time, but it's my favorite thing. I like to think about your journey to awesomeness, your learning journey, as you go from one place to the next to the next, because we're always learning, right? We're always growing; hopefully we're always trying to consume and understand the next new thing. One of the key things in this new world is organizing around DevOps: really focusing on DevOps and how dev and ops build better software together. I imagine many of you who work for a large organization probably already have a dev team and an ops team, and then in the last year or two or three your ops team declared themselves the DevOps team. Has anyone seen that happen? You have a DevOps team now? Okay. Are there any devs on the DevOps team? Often there are not. So that's one challenge already. It's basically like when software developers all declared themselves to be architects, probably 10 or 15 years ago, so they could make more money; ops people declare themselves to be DevOps so they can make more money. And that's a problem if you think about it, because dev and ops are supposed to be together.

All right, we'll keep going. You have to have some form of self-service, on-demand, elastic infrastructure. This is where Kubernetes really plays, and all the cloud infrastructure really plays, because if you're still waiting for a virtual machine from your centralized IT department, and you're waiting three weeks, or two weeks, or one week, or three days, you're waiting vastly too long. Whether you spin up a new VM or, in the case of Kubernetes, new pods, you should be able to do it instantly and wait not at all. Of course, it should be quota-driven and API-managed, so you will only spin up the resources you're allowed to, but you should be able to turn on a new experiment instantaneously and try something instantly. That's the beauty of the cloud; that's the purpose of infrastructure that is software-defined. Okay, so we're going to do a

lot more of that. You've got to think in terms of automation. You've got to think in terms of: I'm no longer feeding CDs and DVDs into trays anymore, like we used to back when MSDN shipped a big old catalogue to us; no, no. And you're no longer going to SSH into a machine and yum install this or apt-get install that. You're now going to fully automate your environment, meaning you're going to use an Ansible playbook, a Puppet or Chef recipe, or even a bash shell script for that matter, and automate the building of your server from scratch, and then tear it all back down again. They call that the Phoenix server: the concept that you can burn it to the ground, to ashes, and rebuild it from a script. That concept is going to be fairly important, and you're going to see it as we get more into describing

everything. You also should think in terms of your CI/CD and deployment pipeline. We won't have much time on it in this session, just because it's another thing that's kind of advanced to set up, depending on the environment you have, but we should mention the fact that you want to fully automate your deployment pipeline as well. It should no longer be on the back of a napkin, or in a presentation slide that your architect or CIO or CTO has; it should be fully automated, maybe with a Jenkins 2 declarative Jenkinsfile. We are going to see some advanced deployment techniques in this presentation, though. We're going to show you some basics of blue/green deployment, a little bit of canary deployment, and, if we have enough time, I'll show you some Istio as well, where we can do really smart canary deployments, dark launches, and things of that nature. You can really take advantage of super interesting ways to deploy, and what this fancy new deployment model means is that you can test in production. I know that's kind of mind-blowing; you're probably thinking, oh my god, I don't want to test in production, that never happens around here. But that's the new idea: with blue/green deployment, canary deployment, and dark launches, you are basically trying to deploy your new software as fast as possible.
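As a minimal sketch of the blue/green idea just mentioned, a Kubernetes Service can be flipped between two parallel deployments by editing its label selector; the names and ports here are hypothetical, not from the talk:

```yaml
# "Blue" and "green" deployments run side by side; this Service
# routes all traffic to whichever color its selector names.
apiVersion: v1
kind: Service
metadata:
  name: my-app        # placeholder service name
spec:
  selector:
    app: my-app
    color: blue       # change to "green" to cut traffic over
  ports:
    - port: 80
      targetPort: 8080
```

The cutover itself can then be a single command, for example `kubectl patch service my-app -p '{"spec":{"selector":{"app":"my-app","color":"green"}}}'`, and rolling back is just flipping the selector again.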

And one of the things I always like saying in my presentations, which I recognize tends to bother some people, is this: for all of you folks who are software developers, you people who put hands on keyboard, knocking text into that text editor or IDE and getting it to compile, that actually adds no value. Your code offers no value to your organization until it runs in production. It doesn't matter how many months or weeks or days you spent crafting that awesome algorithm; until it's running in production and demonstrably adding value to the business function it's supposed to add value to, it has no value. Perhaps you've seen the study that came out a while back from Ronny Kohavi, where they quantified that, of the software that makes it to production, only one third actually increases the metrics it was intended to increase; another third basically adds no value, and another third adds negative value. It's a great study done at Microsoft, by a guy who came from Amazon and went to Microsoft; I'd encourage you to check it out, again, by Kohavi. So we tend to produce a lot of software, but we have to make sure it demonstrably adds value running in production. This concept of deploying quickly, deploying every week, deploying every day: you can start doing that with the new techniques you'll see here, and then you, too, can be a microservices unicorn. Do you even recognize this image? I stole it; it's the GitHub unicorn, from the 503 error page they've used before. But maybe we want to think about this centerpiece, because that's where we're going to focus. We're going to focus our attention on how we deal with some self-service things and automation things, not too much on the deployment pipeline, but definitely some advanced deployment techniques. This is where Kubernetes really shines, where OpenShift takes advantage of those things, and where Istio really shines as well, and then we can be a really awesome unicorn. Okay, all right.

So the fundamental focus you have to think about here is that we're all about agility now. There's often a question that arises in these Kubernetes sessions that I do; I've talked to a lot of people about Kubernetes, and the number-one question is: okay, I've got this application, this big old WebSphere application; it's still a monolithic application with a big old EAR, and I'm thinking of putting it in Docker and Kubernetes. The response I have is: well, is Kubernetes actually going to add value and agility to your situation? How often do you deploy? "We deploy like clockwork; we're on a mission; we deploy every three months." And how many applications do you have? "We have one: one big EAR." Okay, I don't know if Kubernetes is going to help you a whole lot if you're only deploying every three months and deploying one thing. Kubernetes really shines when you're going to deploy every week, every day, every day at noon during the business day, and you have 25 or 45 things to deploy. Then you're really living in this new cloud-native world; that's really where microservices have taken over, and you'll focus on technologies like Kubernetes to give you greater agility. That's the trump card. So whenever you're asking, well, that looks hard, that looks painful, I'm not sure why I'm going through that learning curve, fundamentally try to understand whether it's giving you greater agility or not.

Now I want to show you a demonstration real quick. We've got to keep this interesting, and it's more fun for me if I show you things live and probably mess something up along the way, because it keeps it exciting for me too. I want to show you this little thing we have right here. What I have is, I've deployed... so, by the way, OpenShift, this little orange-red symbol here, is Red Hat's distribution of Kubernetes, so when I say OpenShift or Kubernetes, Kubernetes or OpenShift, they all mean the same thing. And you'll notice I'm actually running both. I'm running a little minikube here: kubectl get pods. Okay, nope, I'm not in the right namespace; let's see the namespaces. Uh-huh, so I've got a bunch of pods running on this Kubernetes cluster. But I also have another Kubernetes cluster here: kubectl get pods, all namespaces. So at this point I have this cluster as well as the other one, both running on VirtualBox; let's see if VirtualBox will come up for me. Here we go. The minikube one running right here has 6 GB of RAM and 2 cores, as an example, and the one I've named istio has 10 GB of RAM and 4 cores. So I'm running 16 GB of RAM just for these two VMs together, and then about six cores; I only have a six-core machine, so that does tax the machine a little bit, but I have 32 GB of RAM on this machine, so that makes it very doable. I still have headroom with all these VMs.
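The cluster spelunking above boils down to a handful of kubectl commands; here is a sketch (the context and namespace names are whatever your own kubeconfig defines):

```shell
# Which clusters/contexts does kubectl know about, and which is current?
kubectl config get-contexts

# Pods in the current namespace only.
kubectl get pods

# List the namespaces, then look across all of them at once.
kubectl get namespaces
kubectl get pods --all-namespaces
```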

But I'm not only running that; I'm also running three publicly hosted cloud environments, because here's really the point, the whole reason you should be thinking about using Kubernetes: once you learn it, you can run your application anywhere. Any cloud: a private cloud, a laptop cloud, a bunch of Raspberry Pis, or the public cloud across massive virtual machines if you want. So I want to show you this little application I have running here locally; let me make this window a little bit bigger and maybe bump up the font to make it easier to see. What I have here is built on AMQP; hopefully you've heard of it. It's an interoperable message broker protocol, so basically you can move messages from one broker to another to another. But one of the things we've done is take that message protocol and build a router around it. In other words, this is not a broker; it's just a router. There's no message storage, no store-and-forward like you have in traditional message brokers; it's purely a forwarder. As long as it can make the next connection, the next handoff, it simply continues to route the message, and as long as you can speak AMQP, you're good to go: anything in the Java world that speaks AMQP, or Node.js, or Python, it doesn't really matter. And what this little application does is route messages around all these clouds. Okay, so we're

going to try that real quick. So I basically have here my Amazon environment, running a single virtual machine with an OpenShift instance in it. Okay, I have my Azure environment... I got a connection error, okay... that's the Azure console, but with OpenShift running inside it. And I have the Google console, GCP, with OpenShift running inside it. So here's the point: I have all these different environments, and quite honestly, on the infrastructure side of things they look very different and they behave very differently. If I'm an infrastructure person, I do have to learn a lot about those individual clouds: how do I set up my keys, how do I get this virtual machine spun up, is the disk being attached correctly? In the case of Azure, it attaches a regular hard drive instead of an SSD; you've got to know that, because it's slow. There are a lot of infrastructure things. But at the developer level, they all look alike. If I come back out here, they all look alike from the developer's perspective, and again, this is just a standard Kubernetes kind of environment. But what we're going to do now is have a little fun with this. Let me put in a message here; I'll just put my name in and hit send. It says "merhaba" (Turkish for "hello"), because I was last in Turkey. Notice where my message went; my message-flow screen also pushed up a message, you saw that flash up there, but watch what happens: it actually says it went to the burr cloud. The burr cloud is the one running on this laptop, but there are also Amazon, Azure, and Google there. So let's actually kill the local burr cloud. Let's say there's no more available capacity in my local cloud, and I'm basically going to turn off this message consumer; that's the consumer that responds to that message. Now let's do Stefan here. I'm going to hit request, and notice it went to Google, and notice it flashed up red there too, because red is Google. So it went to Google this time. There's no distinction from the user's standpoint of where that message was processed; it was just processed in a different cloud. That's the beauty of the router, but it's also the beauty of the flexibility and agility gained by using an architecture like this one. So if I come over here and go to Google, and that's this guy right here, I'm going to shut it down, turn it off, so it can no longer process messages. And let's just put in another name here; it helps if you get your fingers on the right keys. We go with Alex, and notice it went to AWS, right there. So that's kind of the point: you can fail over. Now, more importantly,

if you run out of capacity in any one cloud, you now have all the other clouds' CPU and memory available to you. So let me bring up my local processor again. Okay, this is the local one. We're just spinning pods up and taking pods down; remember, I mentioned earlier that you need that elasticity, that ability to self-serve. This is that concept: I'm not waiting for anyone to provision anything for me; I'm basically saying I want one now, I want two now, I want three of these now. So if I come back over here and type in my sister's name, Dax (someone put Dax in here, okay), notice Dax came back to burr, because burr is online now, receiving messages. But let me pump a little load into this thing. Okay, let's actually drive some load in. This is load on the local system, the local cloud, and you can kind of see what's happening right now. By default, we try to keep all the load local, and this local cloud, burr, is running at about a thousand to 1,400 transactions per second; you can see the number changing there. But some are going out to Amazon, eight transactions per second or so, and some are going out to Azure, based on the overall load. And if I bring my Google one back online (I have it set so that Google is a higher priority than the other two), we'll see Google actually grab some load and take advantage of those transactions also. And you can see, if you notice over here, this is also showing the state of each of the component processors running up there; it's a little Vert.x Java application. You can see Google is now taking a little bit more load. Okay. And then, if I

basically come back to my local system, let's go here and here, and watch what happens: I've basically taken the local one offline again. Maybe, again, we've run out of resources, and now we can burst out to all those public clouds, and if I take down one of those clouds, you'll see it spill over and spill over. What's really kind of cool is that if you have too much load in, let's say, Amazon, it'll also spill back over to the laptop, which is kind of far out if you think about it. In other words, I can drive too much load into Amazon, so that it then starts running transactions here, or vice versa. So there's this little concept of being able to run anything across a cloud environment, and you can kind of see what's happening here: basically, burr is driving load out to the other three, but again, I can reverse the flow if I want to, and you can also put other connections between these different clouds. This is just based on the Qpid Dispatch Router: if you're familiar with the Apache Qpid project, it's called the Dispatch Router, and that's what this technology is, an open-source Apache project. But once we have the infrastructure in place, once we have Kubernetes everywhere, we can do some really cool things with it. So what do you guys think about that? Is that cool? Okay, I just wanted to give you that perspective, so that you basically have a better feel for what it means to live in this new Kubernetes-style world.

Okay, so let's get back over here to the presentation and get marching along. Now, it looks like not all of you have joined the slide deck; I see 52 people have, but we should get more of you in there, because there are a few hundred of you in here right now. So make sure you join that slide deck, because there will be other links that you want. The whole concept here is that we have to reduce our deployment cycle from so many months, or so many weeks, down to basically one week, or two weeks, or three weeks. If you think about it, the whole concept of agile, and what we do with our scrum teams and everything else, is that we should deploy to production at the end of every sprint, not at the end of a batch of sprints, which is what we normally do, where we have, let's say, three-week sprints, we do six of those, and 18 weeks later we deploy. No: we should deploy at the end of a three-week sprint directly into production. That's kind of the goal. Okay, now here's our nine steps. You

can see it actually starts at zero, and I have a bunch of bonus items; the steps are now probably getting close to 15 or more, because I keep adding things to the presentation and trying to figure out what to take away. We're going to give you a quick introduction to what Kubernetes is. We're going to quickly talk about installation and getting started, only because we're not here to do the install; you won't get a chance to do the installation, but I'll just show you some tips and tricks for getting it running locally. Then we're going to talk about how you build images and run containers in this architecture. We're going to talk about the kubectl exec command (or oc exec command); we'll show you that one and why it matters, because you're going to need to know it when it comes to debugging your containerized application. One of the things I'll show you is how the Java virtual machine, by default, blows up in a cgroups architecture. In other words, if you use Docker or Kubernetes: have you ever used Docker in production and noticed that the JVM every now and then restarts, and you probably don't know why? I'll show you why, because it does blow up, die, and restart if you take the default settings.
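For context on that JVM failure mode: in the Java 8 era of this talk, the JVM sized its default heap from the host's memory rather than the container's cgroup limit, so the kernel would OOM-kill the container. A hedged sketch of the era's flags (the image and JAR names below are placeholders, not from the talk):

```shell
# Run with a hard 256 MiB container memory limit;
# "my-java-app" is a placeholder image name.
docker run -m 256m my-java-app

# On JDK 8u131+, these experimental flags made the JVM respect the
# cgroup memory limit instead of sizing -Xmx from the host's RAM:
java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForJVM -jar app.jar

# On modern JVMs (JDK 10+, backported to 8u191+), container awareness
# is on by default and tunable as a percentage of the container limit:
java -XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -jar app.jar
```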

And so we'll talk about configuration and logging; we'll talk about storage, discovery, and load balancing; and we'll talk about the liveness and readiness probes, which are super critical to what it means to be truly cloud-native, truly Kubernetes-native. And then we'll show you some blue/green deployments, rolling updates, things of that nature. Okay. Now, I had a lot of feedback from my earlier classes: I had a little section on debugging, and people said, forget debugging, it's just setting up the right port and setting up the debugger, it's not a big deal; assuming you can get the port open, which can be a big deal.
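As a minimal sketch of the liveness and readiness probes mentioned above (the paths and port here are hypothetical; your app exposes its own health endpoints):

```yaml
# Container-spec fragment: Kubernetes restarts the container when the
# liveness probe fails, and stops routing traffic to the pod while the
# readiness probe fails.
livenessProbe:
  httpGet:
    path: /health      # placeholder endpoint
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
readinessProbe:
  httpGet:
    path: /ready       # placeholder endpoint
    port: 8080
  periodSeconds: 5
```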

But the real big one is databases, so I now have a whole section on databases: how to run Postgres as a containerized application within Kubernetes. And again, all of this is documented, so if you go to the GitHub repo, you're going to see a document for each of steps one, two, three, four, five; you'll even see two number 9s, and there's a number 10 I've started working on, for that matter. Okay.
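A bare-bones sketch of the kind of containerized Postgres that database section covers; the names, password handling, and storage here are illustrative only (a real setup would use a Secret and a PersistentVolumeClaim):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-postgres               # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels: {app: my-postgres}
  template:
    metadata:
      labels: {app: my-postgres}
    spec:
      containers:
        - name: postgres
          image: postgres:10      # era-appropriate tag, illustrative
          env:
            - name: POSTGRES_PASSWORD
              value: changeme     # use a Secret in practice
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          emptyDir: {}   # swap for a PersistentVolumeClaim to survive restarts
```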

So this is the challenge we've tried to address over the last several years. As a matter of fact, when we learned the J2EE specification, we learned about the different roles associated with who builds what and who does what with the artifact, and we put our JAR in our WAR and our WAR in our EAR; you remember all that good stuff. I know it's ancient history at this point; everyone only uses WARs now. But here's really the key thing to understand: your application sits on an entire stack of configured infrastructure. It's not just the things in your pom.xml that matter to you (though those versions obviously matter a lot); it's also your custom configuration: your JDBC driver (which version of the JDBC driver am I supposed to use, and how is it configured?), the name of the data source, any JMS queues that are predefined, any users and passwords associated with that. What is the app server or application runtime: is it Spring Boot, WebSphere, WebLogic, JBoss, Tomcat, whatever it might be? And then you've got your JVM, which has to be set up correctly, because you know your app only works on this version of the JVM and doesn't quite work on that version; hopefully you at least know that. Then, if you deploy into a Linux environment, what version of Linux are you running, what version of the kernel, and what dependencies might you have there? That matters too, in some cases. And then, of course, you have your server hardware. So the good news is we've been trying to solve this problem for quite some time: we actually would bake out golden images of virtual machines, and if you live in the Amazon world, you bake out AMIs; that's what Netflix did, they have AMIs now to solve

this problem. Okay, but here's really where it got rough for us in the traditional data center, on-prem world. As developers and architects, we would basically identify our stack, we would write it all down, and, if we did our jobs well, we would email the QA department: here's how we think you should set up the WAR file that we just baked out of the development team; it came out of a Jenkins build; we have a nice WAR file, here's where you pick it up, and here's how you set it up. That's what we used to do: we used to email everybody those configuration parameters. Well, the problem is that the email always went stale: which version of the email goes with which version of the source code? That was always a problem too. And so we got really clever a few years back and started using wiki pages instead. How many people still use a wiki page that identifies the configuration for their application? Only a few of you. Fantastic; well, actually, a few of you not willing to raise your hands, too. But here's the issue: I'm running on a Windows desktop, or a Mac desktop, or an Ubuntu desktop, or something like that, but my production environment is really Red Hat Enterprise Linux, or some form of Solaris or AIX, whatever it might be. It has this version of the JVM, it has this WebSphere or WebLogic version, whatever it might be, and it's running on a certain version of the operating system. All of that matters: that certain version of the operating system, the JVM, all of that stack. And that's really where the challenge comes

that's really where the challenge comes

in and this is really the beauty of the

docker file the docker file codifies

your stack it basically is a

programmatic artifact you can check in

to your source code repository and helps

everyone understand oh this is what that

application needs it needs this version

of the OS with these dependencies from

the operating systems perspective it

needs this version of JVM right with

these configuration parameters for the

JVM it needs this app server layer

whatever the application runtime might

be and that could still be no js' or

Python or Ruby it doesn't really matter

it's just your application runtime

environment and of course whatever

custom config that you have like oh I

need this data source connectivity I

expect this kind of driver for my Oracle

database whatever it might be you can

then build that in your docker file so

that was really the beauty of docker and

that as we saw earlier with most people

raising their hands we've certainly took

advantage of that most people now are

living in that docker world at least

from a development standpoint even if

they're not using it in production just

so they can take advantage of the docker

file right this is what we mean it's no

longer wiki page it's no longer an email

this is

we need to run our app now it gets a
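
As a sketch of what that codified stack looks like, here is a Dockerfile along these lines — the base image, JVM options, and file names are invented for illustration, not taken from the talk:

```dockerfile
# Base layer: a specific OS plus a specific JVM version
FROM eclipse-temurin:11-jre
# JVM configuration parameters baked into the image
ENV JAVA_OPTS="-Xmx256m -Xms64m"
# Custom config, e.g. the datasource definition the app expects
COPY config/datasource.properties /deployments/config/
# The application itself, whatever the runtime might be
COPY target/my-app.jar /deployments/my-app.jar
EXPOSE 8080
CMD java $JAVA_OPTS -jar /deployments/my-app.jar
```

Checked into the source code repository, this one file tells everyone which OS, JVM, and configuration the application needs.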

little bit crazier though when we run a

bunch of these things as I mentioned

earlier right you don't really enter the

kubernetes ecosystem when you only have

one single application and you'd only

deploy every three months but we know

that we live in this new digitally

transforming world where we're the

heroes of that story we're the people

who create digital assets we're the

people who know how to put hands to

keyboard and create new code new

configuration new scripts new

infrastructure whatever it might be

we're the people who literally can take

what's in our minds and make it real in

the world okay and what we need is those

digital creators to work ever faster

than ever before that's what's happening

so the fact you can deploy every three

months was nice but we need you to deploy

every week now as a matter of fact the

business has regulations that they have

to respond to from a government entity

standpoint in a particular country or

they might have a certain set of

business rules and requirements based on

let's say the marketing campaign that

has to get run and somebody has to

change okay somebody has to change

quickly and then how do we get that

change into production so when we scale

this out across let's say six servers

and actually have yeah six nodes here

technically I have two running on this

laptop and three running in the public

cloud so I only have five as I think

about it running right now but how do I

scale out across those architectures how

do I avoid port conflicts how do I

manage all those hosts how do I deal

with the fact that something might have

died and so if you live with a pure

docker world okay when I say docker I

mean the docker command line tool and

actually if you notice in my laptop I'm

not running docker for Mac or docker for

Windows I turned it off because I'm

using kubernetes as my docker

environment if you will all I'm using

from docker is just a docker command

line tool you know docker build

docker run docker PS docker images kind of

thing but what this means is I can now

go into each of these machines and have

them managed for me I don't have to

SSH into each machine and go docker

run docker run docker run docker run

docker run docker run and actually if

something dies come back around and go

oh it died docker run that again docker

run this again

docker run that again especially if it's

Java okay because Java tends to fall

over if it's been docker runned because

of the nature of something I'll show you

in a second okay so how do we keep it up

and healthy how do we keep it alive

and well how do we basically make sure

that all those docker runs are happening

magically for me and that's really where

something like kubernetes comes into

play

so this is the symbol icon for

kubernetes it's pretty awesome stuff

kubernetes is actually the greek term

for helmsman I like to think it was the

pilot if you will ships pilot or the

governor it's basically the orchestrator

that's going to basically make all those

docker runs happen for you okay across

all the servers that you've applied it

to and it ensures that those docker

images and docker containers those

containers Linux containers are up and

healthy and the cool thing is kubernetes

doesn't require docker you could

use other container formats as well but

it supports multiple cloud environments

you saw five different clouds two of

which are on this machine running in

VirtualBox and three of which are

running on the public cloud but bare

metal environments also so you can

actually run this completely without

virtual machines and just use bare metal

literally plug in real physical Hardware

into your data center racks and light

them up as a kubernetes node also it's

inspired by Google's focus on containers

you know they have ten years of running

containers at scale and so this is where

that project originally came from it's

open source written in Go and it

manages applications not machines it was

meant to be for us as developers to take

advantage of though honestly it does

look a little bit like infrastructure

often when we're looking at it okay so

some key capabilities some self-healing

and we'll see some of that horizontal

scaling automatic restarting and that's

a very important aspect of this as well

in other words it looks for this health

and readiness of your application if it

is not healthy it gets restarted

automatically and that becomes very

important later on and then you're going

to also schedule across multiple hosts

it has built in load balancers and has

rolling upgrades and lots of cool stuff

the first term that you're going to hear

in this kubernetes world is this

thing called a pod and so the most

common question I get is why is the pod

called a pod and some people remember

that 70s science fiction horror film you

know the you know the Body Snatchers the

pod people yeah it's not that

no and you might say it's a pea pod you

know multiple peas in a pod right nice

peas like that but realistically it

comes from this icon you see here the

concept of the whale and a family of

whales is a pod okay because you can

actually put more than one container in

a pod and that becomes really important

when you get into an architecture like

Istio as an example which has a sidecar

container which lives alongside your pod

so the pod is more than one container

Linux container potentially typically

you only put one in it and then you have

a shared IP shared lifecycle shared

storage volume and all those things get

cycled together okay so if I have two

pods in there or sorry two containers in

my pod they all get cycled together and

that's important to know so therefore

you don't want to put your database and

your application in the same pod I get

that question a lot no the database

should be in its own pod and your

application in its own pod because you

can think of this pod as the machine

from the applications perspective in

other words my Java app my node.js

application my Python application sees

that pod as the machine itself okay it's

like them and when you restart the pod

it's like the whole computer got

restarted that's how the JVM responds to

it so just keep that in mind the shared

life cycle of multiple containers in a

pod is the key unit of work that we're

mostly dealing with we're trying to get

pods out there and running and you know

tearing pods down

rolling updating the pods things of that

nature and then you have these other two

constructs that you might have seen in

the kubernetes ecosystem the

replication controller is the concept of

I want to basically make sure that I

have two or three or four of these pods

running and you can kind of pick the

number you want so a replication

controller is looking at the desired

state and ensuring that we have N

number of those pods running okay and

then we have this newer concept and when

I say newer it's actually pretty old at

this point called the deployment so in

this session we're going to focus on the

deployment because the replication controller

is a bit older and the deployment is

that truly declared state meaning what

image am I going to use to deploy how

many of those do I want what are my

liveness probes and readiness probes my

constraints for memory and CPU your

deployment artifact is that one you

primarily interact with at this point

okay so the deployment YAML is

that desired state okay you also have

this concept of the service we'll play

around with the service because the

service is incredibly powerful if

there's anything that kind of really set

this architecture apart from previous

architectures that I've seen in this

kind of you know highly elastic style

way of doing computing the service being

separate from the pod is really a piece

of magic okay the service holds the real

IP if you will the DNS entry for the

thing in other words that's my customer

service that's my human resource service

that's my accounting service that's my

billing service think of it like that

it is not

ephemeral how often do you have

customers that you decide all of a

sudden oh we don't want to do customers

of this organization we're gonna only do

inventory and we're gonna drop customers

I mean it does happen but not too often

right how often do you move away from

inventory to move over to patients or

maybe you move from patients over to you

know we're gonna just do people you know

you don't tend to you don't tend to

change those major artifacts in

your domain-driven design right that

often so the service represents that it

represents something that tends to be

around a little bit longer if you will

like customer patient client inventory

HR but it then sets itself up to invoke

or basically pass through the invocation

into the actual pod the pods are very

ephemeral they can come and go all the

time they can get rescheduled from one

machine to another machine to another

machine and that's the point the service

is kind of what sticks around and then

you have the concept of labels and the

labels really make the magic happen

because that's how the service knows

what pods belong to it it's based on the

label something simple as that all right
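
As a sketch, a service that picks up its pods purely by label might look like this — the name and port here are illustrative, not from the talk:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: customer            # the stable name, your DNS entry for the thing
spec:
  selector:
    app: customer           # any pod carrying this label gets the traffic
  ports:
  - port: 8080
    targetPort: 8080
```

The service sticks around while pods carrying the `app: customer` label come and go underneath it.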

so your kubernetes cluster would look

something like this set of nodes I have

all these different VMs or actual bare

metal servers I might have like Tomcat

and Postgres and WildFly and spring boot

and you know who knows what running

across those different nodes and they

don't have to be like for like you know

I decided to use the icons here to represent

the fact that some nodes have more load

than others they have different

components than others and then you have

this master right the kubernetes

master is responsible for ensuring that

the workload lands on the right node

okay your application is basically going

to make it out there to one of those

things and your dev and ops people are

interacting with this API the API server

some people like to call it and this is

the endpoint if you will for your

kubectl command and

then once you give it the declared and

desired request I want this to have two

of them I want this image to be running

two times two replicas with this

liveness probe this readiness probe it

makes it happen okay you make the

request it eventually makes it happen
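
That declared and desired state lives in the deployment YAML; a minimal sketch might look like this — the names, image tag, probe paths, and limits are invented for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mystuff
spec:
  replicas: 2                     # I want two of them running
  selector:
    matchLabels:
      app: mystuff
  template:
    metadata:
      labels:
        app: mystuff
    spec:
      containers:
      - name: mystuff
        image: example/mystuff:v1  # the image I want deployed
        resources:
          limits:                  # my constraints for memory and CPU
            memory: "256Mi"
            cpu: "500m"
        livenessProbe:             # restart me if this starts failing
          httpGet:
            path: /health
            port: 8080
        readinessProbe:            # only route traffic once this passes
          httpGet:
            path: /ready
            port: 8080
```

You apply it once and kubernetes keeps reconciling toward those two healthy replicas.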

okay so the concept of labels as I

mentioned are very important across your

different nodes you might see here

that I have this WildFly icon here and

it has an app called cool so it's just a

key value pair that's all the label is

it's an app called cool over here it's

an app called cool as well you can see

it's basically in the environment called

dev this one's also an environment

called dev

but it also this is version 1 and this

is version 2 ok so literally I have a

different maven build a different piece

of code running is version 2

simultaneously alongside version 1
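
In label terms the two pod templates might carry metadata roughly like this, following the app/env/version keys on the slide — a sketch, not the talk's actual files:

```yaml
# fragment of the pod template in deployment one
metadata:
  labels:
    app: cool
    env: dev
    version: "1"
---
# fragment of the pod template in deployment two, running alongside it
metadata:
  labels:
    app: cool
    env: dev
    version: "2"
```

A service selecting only `app: cool` would balance across both versions, which is the starting point for canary traffic splits.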

that's a very powerful concept and this

is something you want to take advantage

of because that's how you get your

blue-green deployment that's how you do

your canary deployments you'll see a

little bit later okay so these labels

really matter and you can notice we kind

of mixed prod and dev here so pre prod

and prod obviously that's not overly

common there are definitely

organizations running these clusters at

great scale that run one big cluster

that is production and non production

but more often than not you have a

production cluster with a bunch of big

honkin servers and then you have a pre

production cluster okay and you just

have to move the image and the right

artifact over from one to the other

that's all that's not that big a deal

and that's if you look at those three

public clouds I showed you earlier

that's all I do right I just basically

say you go to that cloud basically

update it with the right set of

deployment artifacts now here you can

kind of see we have app called cool but

also this is a different environment and

a different version okay then what we

had earlier but if we look at it from

this perspective here's our dev

environment here's a production

environment here's our version one okay

make sense so the labels really matter

and you will want to make sure you

understand that aspect because that's

exactly how the magic happens how does

the deployment know what it's managing

through labels how does the service know

what it's load balancing through labels okay

right so here's your key commands if I

move too fast by the way you can

tell me because I tend to move fast

because there's always a lot of things

to show okay kubectl get

namespaces all the components you

create all the pods you launch need to

go in a namespace the easiest way to

think of it typically you might have in

a development environment one namespace

per developer or you might have a

namespace per department like oh that's

the HR department these are all the

application components for the HR team

right human resource team or these are

all the application components for the

customer service team maybe that's a

namespace the namespace matters though

because that's actually how you set up

your RBAC right your role-based

access control right that's where you

set up your security also it's where you

set up your quotas so in other words I'm

going to basically have a namespace for

developer or for human resources and

they're only allowed to use so much CPU

and only so much memory as an example so

that you know the namespace we're not

gonna really delve into that that's more

of an ops issue but as a developer just

recognize you either are going to create

a namespace for yourself and there's

going to have some default quota and

privileges associated with it or most

likely the ops team the dev ops team is

going to give you a namespace and that's

what you're going to work within with

there again with the quota and the

certain privileges associated with it
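
Such a quota is itself just another object applied to the namespace; here is a hedged sketch with invented numbers and names:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: hr              # applies to the HR team's namespace
spec:
  hard:
    limits.cpu: "4"          # the whole namespace gets at most 4 CPUs
    limits.memory: 8Gi       # and at most 8 GiB of memory
    pods: "20"               # and at most 20 pods
```

The ops team applies it once and every workload in that namespace is held to it.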

okay so basically you can use -n

namespace that's a fairly common thing

to do kubectl kube-control you

notice this command right here it has a

lot of different names we were confused

from the very get-go when kubernetes was

born several years ago we called it

kube-control kube-cuddle kube-C-T-L often I

used kube-C-T-L and other people did too

but I think the correct name is

kube-control though some people refer to this as

kube-cuddle that's kind of cute also

isn't it okay but just keep that in mind

you hear that phrase you might be

confused and you're gonna see that

little CTL thing show up again and again

for instance the Istio command-line tool

istioctl okay I'm wearing a t-shirt from

the podcast called PodCTL pod control

right so you might hear control CTL

cuddle you know in that phrase then you

can see commands like run logs expose

and scale all kinds of good stuff there

so let's actually hop over here real

quick show you some things that's my

minishift environment let me go to my

minikube environment okay so I can say

kubectl get

namespaces and you can see I have

several namespaces here I've default

demo cube public cube system and test

the ones I created were demo and test

we're gonna create another one in a

second but at the same time I can come

over here to look at my minishift

environment that minishift

environment and kubectl get namespaces

you can see I have a lot more namespaces

here I have a bunch of different ones

here because within a minishift

environment an OpenShift environment

actually sets up a few more things

for me like a web console and other

things of that nature but I have one

called tutorial there and I can say

kubectl get pods

in tutorial okay and you can see I have

three application components running

there this is actually a spring boot app

a spring boot application and a vert.x

application you can think of that is

like three application servers running

and here's one cool thing right out of

the gate

that's three JVMs all running on

port 8080 on this laptop right now so if

you've ever tried doing that before it's

actually very hard or really painful

and you actually give up on it right you

basically put one on 8080 and one on

8081 and one on 8082 and then if

you have this tomcat running over here

and messes you up when you try to run

JBoss which messes you up and try to run

spring boot which messes you up when you

run vertex and so in this case I can run

everything on 8080 because it's

being handled within the kubernetes

environment for me okay so what else do

we have here oh a couple key tips that

you'll see in the documentation that I

give you again on the github repo I

tried to document all of these things

right so we basically talked about how

to set up minikube to be your docker

environment so if we look here

this is actually a good tip to have okay
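
What `minikube docker-env` prints looks roughly like this — the VM's IP address and API version here are made up for illustration:

```shell
# Hypothetical output of `minikube docker-env`
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="$HOME/.minikube/certs"
export DOCKER_API_VERSION="1.35"
# You would normally apply it to your current shell with:
# eval $(minikube docker-env)
```

Once evaluated, plain `docker build` and `docker images` talk to the daemon inside the minikube VM instead of a local one.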

if I come over here minikube docker-env

look what it does here

it basically exports these four

environment variables one of which is

the host name and the port now this is

very important if you've done docker

before which most of you have this is

one that's going to bite you so just

mentally plug this in because I'm using

minikube and minishift I can expose port

2376

on that virtual machine and treat it

like a regular docker daemon keep in mind

a docker daemon doesn't have a login

requirement okay you would not have this

in a production setting or in a public

cloud setting you do not expose a docker

daemon on the public cloud because if you

do then people can come play with your

docker daemon right they can come over

and go oh what is what does that person

running oh they're running a lot of cool

stuff maybe I want to you know kill

those processes stop those processes

maybe I want to build my own image and

run my own image in your docker

environment so don't expose the docker daemon

unless you know what you're doing so

what that means is in order to move

artifacts especially images from one

cluster to another whether docker

daemons are visible or not you have to have

another tool for that and there's lots

of tools right you can just find one

worst case scenario you can

export the docker image to a tar file put

it on a flash drive and sneakernet it

over if you had to okay because you can

actually do that it's just a tar and

move it if you had to but there's one

called skopeo that's a fairly popular

one the Red Hat team worked on okay but

I can say docker images here and I can

interact so I'm actually messing around

now with the docker daemon running in my

minikube same thing if I go back to

minishift here if I go docker images

same thing applies okay I can actually

see I got a bunch of Istio stuff running

in my minishift

environment that's why I gave

it more memory but you can see I got a

bunch of stuff right in there lots of

docker images and there's the messaging

application I was showing you earlier

right that's the front end is this

guy over here and then the worker was

the message processor okay so there's a

bunch of stuff running there already and

you can see again another vert.x

application another vert.x application

all running there okay so kubectl

get pods in just the demo AMQ right

so there's the pods associated with that

okay so let's keep marching along here I

just want to show you that briefly to

kind of make sure you're still awake

and paying attention but that's what

you're gonna do use these command line

tools and interact with your cluster

that's the good news is you don't have

to learn a ton of things you just have

to learn this one tool if you will okay

and then you can interact with your cluster

now micro-services are fundamentally about

distributed computing as I mentioned

earlier if you have one big EAR and

you're just deploying that every three

months this is overkill

okay this is just like bringing a

shotgun to kill a fly is kind of

the way to think of it right and that's

not the exact phrase is it but the concept

is you want to be in a micro services

like architecture so you might use

something like Dropwizard which is

one that came out very

early with the concept of fat jars so

Dropwizard and Vert.x came out really early

on many years ago with the concept of

java -jar and running a fat jar spring

boot of course is very popular WildFly

swarm is now called Thorntail right

that's a MicroProfile implementation

there's this whole MicroProfile effort

there's actually a MicroProfile

session happening concurrently with this

one right now so you might be

you know missing that but the concept is

we've been thinking about how to break

up big applications into smaller

applications for quite some time and

this little chart kind of gives us a

little history lesson if you don't mind

we've been thinking about how to break

up a monolithic waterfall big old team

into smaller units for quite some time

so we thought about extreme

programming XP back in 99 we have this

concept of continuous integration how do

we move a little faster instead of

integration every now and then let's

integrate all the time okay ideally we

integrate every day because we check our

code in every day and the automated

build runs and runs our tests the agile

manifesto came out in 2001 again how do

we take big teams and make them smaller

teams the cloud was born in 2006 as an

example so this changed the

rules on us it's like wait a second I

don't have to wait for my IT

department for three weeks to get a

virtual machine I could spin up a

virtual machine with my credit card that

was a game-changer for us as software

developers we could then take the idea

in our head and immediately start trying

it out with a machine because think

about that for one second if you're a

software professional and we used to

tell you I'm sorry please file a ticket

two vice-presidents will sign off on it

and you'll get a virtual machine later

on we're telling you you can't have a

computer for so many weeks or so many

days you kind of need computers to do

your job right it's kind of like if we

told a homebuilder they couldn't have

hammers or nails or saws or things like

that

for so many days so this concept the

cloud was very powerful just to put that

in perspective Java EE 6 was born in 2009

the phrase DevOps was born in 2009 what

really got interesting was Netflix

and Dropwizard and Vert.x

okay so Netflix moves to the cloud and

starts thinking about cloud native

architecture Dropwizard and Vert.x

were born in 2011 they showed us

this concept of not building big old WARs

and EARs but little tiny fat jars and

running a little tiny application ok

Netflix of course then open sources what

they were doing with Hystrix Eureka and

Ribbon in 2012 and the micro services

starts showing up on the ThoughtWorks

radar in 2012

so we've been thinking about this

for many years over 6 years in some

cases

and then docker was born in 2013 it was

actually born based on a five-minute

demo a lightning talk at a Python

conference and from that five-minute

lightning talk the world was lit on fire

they showed a docker build they showed

a docker run and we lost our minds for

like oh my god that changes everything

we knew about Linux containers for many

years they've been out there for quite

some time but no one had ever made it

easy to use okay until we saw

docker build docker run spring boot born in

2013 okay microservices officially

defined by the ThoughtWorks team in

2014 that really made it popular once we

had a formal definition for what that

was and kubernetes born in 2014 so

kubernetes has actually been out

there for quite some time too

as a matter of fact in 2015 based on

this announcement in 2014

there's a presentation I have on YouTube

where you actually see us launch a

thousand containers live on stage in two

and a half minutes a thousand app

servers by the way in a thousand

application containers and then we

invite the audience to claim one we had

over a thousand people in the room and

they all claimed their own container by

uploading their own little flag a little

image that they painted on their phone

and actually we dropped that image on

every one of those individual machines

and when I say machine I mean pod okay

because from the developer standpoint

it's just like a computer so a thousand

in two and a half minutes is an example

so this is an incredibly powerful

technology but what else happened in

2012

Netflix reinvented TV as well can you

imagine that

seriously we went from cable you know

people who've watched cable all the time

to people who were cable cutters so they

actually did a lot of innovative things

in that same time window so maybe as of

2015 you were thinking of using spring

with Eureka spring with Hystrix

spring with Config Server as of

2016 because of kubernetes and

technologies like it we could eliminate

eureka we get service discovery for free

with the kubernetes architecture we get load

balancing for free with the kubernetes

architecture and we get configuration

as part of the kubernetes architecture

we see all those things in this

presentation and then of course we still

hadn't Zipkin and Zul and hysterics

still out there as of 2016 but as of

2017 we moved from Zipkin to OpenTracing

okay OpenTracing being a standardized

API around the tracing concept and

Jaeger from the Uber team became more

popular and so Jaeger is the current

winner if you will in the space Zuul was

still out there and Hystrix is still out

there

okay but as of 2018 we're

streamlining this further with Istio so

Istio allows me to eliminate let's say

Hystrix in some cases not all cases

but you know you actually get circuit

breaking at the network level now

with Istio plus you get additional

tracing and additional monitoring and

metrics and to kind of make that point

where this is not a session on Istio Istio

is rather advanced but I'll just show

you this right here

just so you get a feel for it this one

right here let's see here let me just

run my poller script so if you remember I

had the kube

let me type correctly kubectl get pods

-n tutorial I had these three pods it

helps if you put the l in kubectl

there we go so customer

preference and recommendation you can

see it's basically making that three

little micro services three different

JVMs all being connected there together

and then I kind of see I have my

monitoring here with the workload

dashboard in Istio so I can look at

the customer I can look at preference I

can look at recommendation here you can

see it's charting along okay I also have

the tracing built right in to let's look

at the most recent traces and you can

kind of see here there's the spans

associated with those traces you can see

how long each of these invocations take

you know five milliseconds you know

three milliseconds that kind of thing and you

can kind of see it marching along there

let's here we go okay

so that's one thing you get with Istio out

of the box you get some instantaneous

tracing you get some monitoring and

metric gathering and then of course you

can do some really fancy stuff with

playing with the route rules which I

might get to if we have time I'll show you

that too okay so that's really

the point of this slide is to say that

Istio further augments your architecture

and then you might start looking at

Istio to augment your kubernetes

architecture okay so let's get going

here one thing that's also important to

note is all this awesome Netflix stuff

and spring stuff was awesome for people

on the JVM and most people here are

probably on the JVM but is there anyone

who actually has workloads that are not

on the JVM right now like Node.js

Python a lot of you okay so a lot of

Python Node.js C# maybe some

C# workloads things of that nature so

the good news is this applies to any of

those workloads now you get load

balancing you get configuration you get

service discovery you get circuit

breaking even if you're Node.js even if

you're Python even if it's C#

even if you're Go it doesn't matter what

it is because it's now applied at the

infrastructure level you don't have to

have annotations in your code anymore

for service discovery you don't have

annotations in a jar file for client

side load balancing you don't have to

carry the jar file for circuit breaking

anymore on your pom.xml

because it's now at the infrastructure

level okay so all these concepts here

really matter the concepts of how do you

define an API and how do you manage the

API through your application logic and

maybe have a swagger document etc and it

doesn't really matter what you build

that in but these other aspects of

micro-services architecture are things

like how do I discover another component

in the architecture how do I discover

another service how do I invoke it is it

elastic in other words if I call it is it

gonna fail over if the one that I'm

talking to is not performing very

well can i scale out can I scale back

you know resiliency what happens if it's

too slow you know things of that nature

is there a pipeline driving my

application workload to production or

through at least the stages of dev you

know dev stage production kind of thing

what are my authentication authorization

mechanisms my logging and monitoring and

my tracing all these things should be

considered as you enter into this world

where we're no longer dealing with one

ear and deployment every three months

we're dealing with twenty five things

and we want to deploy every week right

when you live at that scale deploying

rapidly you gotta start thinking about

all these items okay so installation

can be the hardest part of learning

kubernetes you got to get your hands on

a kubernetes cluster and make it work

okay

and minikube is really awesome I use it

a lot and minishift I use every day

right so minikube is nothing

more than a little tiny kubernetes

cluster running in a virtual machine I'm

running it here on VirtualBox I showed

you this earlier

I'm just running on VirtualBox but I

could use hyper-v if I'm running on

Windows I could use KVM if I'm running

on Linux you know you just have to pick

the right virtualization solution and

launch your VM but I'm also running

minishift here okay and this is also running

Istio and a bunch of other things so I

have that running here as well so you

can basically run either of those pretty

easily assuming you have virtualization

support you can also use GKE up at

Google very easily Amazon of course has

EKS now Elastic Kubernetes Service

Azure has AKS Azure Kubernetes

Service

there's openshift.com there's another

thing called oc cluster up that requires

a docker daemon this is also very

popular there's tons of options to run a

kubernetes cluster okay at this point

you can find one somewhere and run one

now again this prep is part of the

harder part to think about

the way I would demo this normally and

we're not gonna do this here but you'd

kind of see I have a whole document that

walks you through how to set these

things up so what to download what you

know your prerequisites are like you can

see I do have docker installed because I

can do things like docker images here

and I am talking to that you know my

local docker Damon right so that is that

is mini shift docker env or mini cube

docker NV right depending on which one

you're on so just remember that trick

right there that's an important one but

again you won't find this in a

production cluster all right what else

we have here
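That docker-env trick looks roughly like this (a sketch, assuming Minikube is installed and running):

```shell
# Point the local docker CLI at the Docker daemon inside the Minikube VM
# (for Minishift the equivalent is `minishift docker-env`):
eval $(minikube docker-env)

# Now `docker images` lists the images cached inside the VM, so anything
# you build locally is immediately visible to the cluster:
docker images

# Undo it when you want your host daemon back:
eval $(minikube docker-env -u)
```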

Oh, one thing about namespaces. I'm on the Minishift environment here, so look right here: if I say oc project, it tells me I'm on the tutorial project. You'll also notice we're using oc and kubectl interchangeably, because you can: oc is just the OpenShift command-line tool, which is a superset of kubectl (kube control). If I say oc project istio-system, I can switch namespaces, and now if I say kubectl get pods, I'm looking at the pods there. So switching namespaces is something built into the oc command-line tool; OpenShift refers to a namespace as a project. In the case of Minikube it's a little more cumbersome, so there are two other tools I highly recommend, kubectx and kubens, that you can download; they're recommended in the document. I can say kubens to switch to the demo namespace and then kubectl get pods; if I say kubens test and then kubectl get pods, you can see I'm dealing with a few different pods here: a Go app, a Node app, a Python app, and another one, a Spring Boot app running against Postgres, as an example. So you can jump back and forth between your namespaces with kubens for Minikube. Just remember that important tip, because that one is painful unless you know the trick.
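The kubectx/kubens trick can be sketched like this (namespace names are the ones from the demo):

```shell
# kubectx switches clusters/contexts; kubens switches namespaces.
# Both are small scripts from github.com/ahmetb/kubectx
kubens demo        # make "demo" the namespace for the current context
kubectl get pods   # now lists the pods in "demo"

kubens test        # jump to the "test" namespace
kubectl get pods   # the Go, Node.js, Python, and Spring Boot pods

kubens -           # handy: jump back to the previous namespace
```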

Download everything, set up your environment, set up your MINIKUBE_HOME and MINISHIFT_HOME. The good news is, if there's a distinction in how Minishift or Minikube work, I do call that out in the document; otherwise they work the same, and kubectl and oc work the same. You can see how the document talks about setting up your memory and your CPUs. Again, you can use VirtualBox; a lot of people are now starting to like HyperKit on Mac, but I still use VirtualBox because it's available everywhere: I can take this script, if you will, and run it on Windows, on Linux, or on Mac with VirtualBox, and it's what I'm used to. Once you have it running, you can also look back at your config: there's minikube config view, and over here, minishift config view. There's also minishift dashboard, which we've already seen; that's the main dashboard for Minishift, and in the case of Minikube they also have a dashboard (it helps if you spell it correctly, though), and then you can see what that looks like. So you can look at your different namespaces; we saw earlier I had test, and we had our pods there, the Node.js, the Python, the Go, and you can see which images they use, things of that nature. So you do have graphical consoles to interact with whatever Kubernetes cluster you're dealing with. Also, if you're dealing with a Kubernetes cluster hosted by a third party like a cloud provider, they're going to have their own console, their own way to look into things, so just know that there's always going to be a unique console per Kubernetes cluster you encounter; the only thing that's really the same is the command-line tool.

Okay, what else do we have here? Let's make sure we cover our key ground: I showed you the consoles, I showed you get namespaces already, we talked about docker-env and, with that, docker images. Oh, this is actually an important one: how do you know if your Minikube or Minishift is happy? You can SSH into the virtual machine. Now I'm inside that VM, and I can go look at my free memory and my disk space; these things can be important, so df -h and free -h, because you might be running out of resources within that little cluster. You might have run way too many pods; say you run fifteen JVMs, that's probably going to eat up all your memory relatively fast. And how do you know you're running out of resources? When you come over here and do kubectl get pods and look at your pods, instead of Running you'll see Pending. Pending means: I'm trying to schedule it, but I can't find a server with enough resources available; this thing needs too much memory. That happens a lot for a developer on a laptop; I've seen it with a lot of students in my classes, so keep it in mind. So you might go check: do I have enough memory, do I have enough disk space, what's eating my CPU, right inside the VM. It looks like we made my window too small to display top properly here (not top on the Mac, top in the VM), but you can look and see what's using your CPU, things of that nature. Knowing how to get into the VM is important, and that's minishift ssh or minikube ssh. All right.
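Checking the VM's health can be sketched like this:

```shell
minikube ssh   # or: minishift ssh

# Inside the VM:
df -h     # disk space -- lots of image builds can fill this up
free -h   # memory -- fifteen JVMs will eat it relatively fast
top       # what's using the CPU
exit
```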

In the case of OpenShift, you do have to log into it: OpenShift is secure by default, Minishift is secure by default, and that's a little different from regular Kubernetes, which is why the document calls it out. But now you can just come out here and run something. If I say kubectl get namespaces, I want to be in the default one, so kubens default; then kubectl get all shows what's running here. It does have a single service, one that comes out of the box, so don't mess with it; I'll leave that there.

one command so what this basically the

run command which is not normal for you

to use but it's a little bit like your

docker run okay a little bit like your

docker run so normally you do docker

build docker run but in this case you're

basically running a built image this was

coming from the Google registry alright

you will see things like docker i/o for

the docker registry GCR i/o things like

that and then you basically say what

port you wanted to run on but look at

everything it created when I just did

that run command it missed they created

the deployment okay we're gonna learn

more about deployments it created a pod

and it created a replica set ok so just

that one command created these three

different artifacts which I could have

managed individually typically you just

deal with a deployment and it takes care

of the rest ok
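The talk doesn't show the exact command, but it looked roughly like this (the image name is the classic Minikube echoserver, an assumption here; also note that `kubectl run` created a Deployment in the kubectl of this era, while current versions create a bare pod instead):

```shell
kubectl run hello-minikube \
  --image=gcr.io/google_containers/echoserver:1.4 \
  --port=8080

# One command, three objects:
kubectl get deployments,replicasets,pods
```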

And now we have that thing running, so it's pretty straightforward to run something. But let's also do kubectl exec -it and actually go into the pod, into that container, with /bin/bash. Now I'm inside it. I think it's running on 8080; that's what we said, and there we go: it's a little nginx application running on 8080. I go inside and say curl localhost, because I can, because I'm inside that pod now; I'm inside the machine, therefore localhost matters. I can do other things like ps: what else is running in here? Oh, it doesn't have ps installed, so it depends on what tools you have available to you inside that Linux machine, inside that Linux pod.
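Shelling into the pod looks like this (the pod name is illustrative; get the real one from `kubectl get pods`):

```shell
kubectl exec -it hello-minikube-6c47c66d86-xyz12 -- /bin/bash

# Inside the container:
curl localhost:8080   # works because we're inside the pod
ps                    # may not exist -- minimal images often omit it
exit
```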

Let's get back out of it, though. What I want to do is expose the deployment via a NodePort service. Now if we say kubectl get services, I have this hello-minikube service, and notice this concept of NodePort. By default, when you're dealing with a Minikube architecture, there's no built-in externalized router; that's how we think of it, so instead you deal with this concept of a node port, a port that's open on the VM, open on that node. So if I say minikube ip, that's the IP address of the VM, something like 192.168.99.102, and then I curl it on the node port, 32657, and there we go: that's the same application running, I just curled it from an external standpoint. If I come over here and bring it up in my browser, just to show that it's running, there we go; that's what the application is doing. It even gives us the hostname, and notice the hostname there is actually the same as in kubectl get pods; see the hostname right here, in other words just a pod identifier. So you can basically ask: what is the ID of that pod, and what is its name from a hostname standpoint? You'll see that in the Java one also.
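The NodePort exposure can be sketched like this (IP and port values are the ones seen in the demo):

```shell
# Expose the deployment as a NodePort service:
kubectl expose deployment hello-minikube --type=NodePort --port=8080

kubectl get services   # note the mapping, e.g. 8080:32657/TCP
minikube ip            # the VM's address, e.g. 192.168.99.102

# Hit it from outside the cluster:
curl http://192.168.99.102:32657
```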

Okay, we have our services. There's also another way to get the URL with Minikube: you can just run the minikube service command and it will tell you what that URL is. But just know the real secret sauce is this concept of the node port; we're going to drill down on it a little more in a second, but basically this lets you know that your environment is working pretty well. All right, so I have set up and installed. You can delete the service, so I can just wipe that service out; now, if I want to get rid of this and actually free up resources, I'm going to delete it and watch what happens: watch kubectl get pods, and then I delete the deployment, kubectl delete deployment hello-minikube, and it actually tears down the pod. Because the deployment is the unit that says what state you'd like in this world (I want one replica of that pod running), it has the replica set associated with it, and once I delete the deployment, it marks that pod for termination and will eventually get around to cleaning it up, and therefore you get that resource back, the memory and CPU associated with it. So you can just watch that happen.
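The teardown can be sketched like this:

```shell
# In one terminal, watch the pods react:
watch kubectl get pods

# In another, remove the service and then the deployment:
kubectl delete service hello-minikube
kubectl delete deployment hello-minikube
# The pod goes Terminating, and its CPU and memory are reclaimed.
```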

And then, if you have to, you can minikube stop and minikube start; if you actually want to turn that virtual machine off, stop and start work fine, and you can also wipe it out entirely and recreate it. I'm not going to do that now because I don't have time to wait on it, but keep it in mind. There are two files you should be aware of: one called config.json here in the Minikube home, and another one, the .kube/config. These things can get a little trashed at times, meaning you've connected to this cluster and that cluster and ten different clusters, and you're like, wow, I've got way too many things listed in my config.json, so over time you might decide to clean those up a little. Let me see if I have a messier one I can show you real fast: kubectx over here, here's a good example; in my OpenShift environment you can see all the different places I've been connecting to, and it basically has a reference to the Amazon one, the Google one, the Azure one, things like that. That's what's in those files, basically your configuration and your contexts, and every now and then you might decide just to wipe those out and start from a clean slate. That's a good tip as well, because in the early days of Kubernetes they would get a little bungled up, and we would tell people to just go wipe those files and try again, and then magic happens.

Okay, so that is really all there is to getting set up, and setup can be very hard. All right, we've got to move along a little faster here, we've got a lot more ground to cover. You guys still with me? Is this worth your price of admission? It will be, we'll get there. So getting your Kubernetes cluster set up can be fairly hard, especially if you want to set it up across multiple servers, because for a real production environment you're going to set up somewhere around six to twelve servers: you need three for the quorum on your etcd server, and your master nodes, you probably need three of those, that's pretty typical, and then you have so many worker nodes, because I'm going to have to run my apps across so many workers. In the case of Minikube and Minishift, what you see happening here is I'm running everything in one big VM, so the master as well as the workers are all really one thing. You can do it that way too, but that would definitely not be recommended for production; typically you have three masters and then so many worker nodes based on your workload, so you're into the six range pretty fast, and beyond. You just have to make sure you set those up correctly.

But building an image is going to be a fairly common task for a developer, so here's the pattern I want you guys to get in your head right away. I'm going to build my application: my Node.js application with npm, my Python application with pip, my Ruby application with gem, my Java application with Maven, let's say, and I'm going to get a nice executable fat JAR or WAR or whatever you might have as your artifact. Then you have to do the next thing: I have my application, I have my WAR file, my Java application, and I've got to find a base image to run it on. There are a number of places to find base images; Docker Hub is the most popular and best known, and you can go to Docker Hub and just search for a base image you think meets your needs. But just be aware that you don't really know the provenance of that Docker image: you don't know who worked on it, you don't know what things they put inside it, you don't know if they fixed the CVEs in it, so you've got to keep that in mind. Certainly the Docker community tries to keep those things updated, but to some degree it might just be what Joe or Fred or Sam or Mary created on their own, and they could have stuck whatever they wanted in there, something that runs, and you won't even know. And this is a Linux machine, so you can do really cool stuff with it; just keep that in mind.

with it so just keep that in mind

there's also Quay io GC r io which is a

Google one and then Red Hat also

provides a certified set of container

images so you can start with like if you

want to run a java one or if you want to

run with Postgres you want to run what

node etc so figure out where your base

image is going to come from I highly

recommend for anybody going to

production with this kind of

architecture thinking about how to roll

your own base image that way you know

exactly what you're getting in your base

image you know exactly what the base

Linux operating system is what kernel

version what dependencies you need

installed there you can wipe out stuff

from the distribution you don't need we

don't need an FTP server

we don't need Firefox right you can make

sure those things are gone if that's

what was in your base image to begin

with and then you put the right version

of the JVM with the right dependencies

underneath it with the right node.js

runtime Python runtime etc so think

about building your own base image for

getting started purposes I'm just

showing you bases I'm mostly pull from

docker hub okay you didn't have to craft

your docker file where the from command

is the key element there to figure out

where you're gonna get that base image

from then you're going to build your

image and then you're gonna have two

other artifacts okay you're gonna have a

deployment yamo and a service channel

which is how you would launch that thing

into a kerbin at ease architecture so

two additional files now there's a lot

of effort in the space to remove the

need for those two files you're gonna

see lots of tools out there in the

ecosystem that have removed this remove

that and but in many cases they've

introduced yet another yamo file so in

this class I just try to show you the

basics for what you would have across

any kubernetes cluster you encounter

okay it doesn't matter where that

criminis cluster comes from or who what

vendors gave it to you the deployment

yamo and service animals should work for

you okay and then of course you have to

expose your URL that you've created out

to the world at large

Now this is actually an important step to understand, because every Kubernetes cluster has a different form of load balancer slash ingress, some externalized routing architecture, and they're all unique. So depending on what Kubernetes cluster you're talking to, this concept of exposing a URL is going to be unique per cluster. In the case of OpenShift, we ship out of the box with one called HAProxy (you've probably heard of HAProxy before); that's the externalized load balancer, and all you do is say I want an externalized URL outside the cluster, and you get one to use right away. But Minikube doesn't have that out of the box; you don't have an externalized URL, and that's why you saw the NodePort trick to access the actual service I declared. In the case of GKE or Amazon, you're going to get a different solution for each one of those. So just keep that in mind: step number six is unique per cluster.

You would do a docker build, and then you could do kubectl run like you saw me do earlier, but there are a lot of different tools, like the fabric8 Maven plugin. One cool thing about the fabric8 Maven plugin is that you don't have to have a Dockerfile or a deployment or service YAML; it will try to figure those things out for you. Now, you may or may not want that, you might want more control, but it is a nice quick-and-dirty way to just try something, just throw it at a Kubernetes cluster to see if your JAR file will work. If you give it a fat JAR or WAR, it just tries and sends it out there; it's a Maven plugin, and it will try to deploy it, but it will fail at times because it's picking a base image that may not be appropriate for your application.

There's also Jib, which came from the Google team not too long ago; it's another Maven plugin that generates a Docker image for you. You don't have to do a docker build with Jib; you basically do a Jib build, and it does the Docker build for you.

There's also the concept of Helm charts, which is very, very popular. Again, you've got a different set of YAML than the deployment YAML and the service YAML, but here's what a Helm chart is good at: if you have three or four different application components that get deployed together, three different deployments in the case of Kubernetes, maybe you want to orchestrate deployment one, then two, then three, and Helm helps you with that. The problem is that Helm relies on a service called Tiller, and Tiller has to operate as a privileged service within your cluster, meaning it has more privileges than it probably should. That's one reason we never recommend it in production: you now have something with elevated privileges, which means if someone gets into it, they have elevated privileges, and you don't want that. With Helm 3, though, Tiller will be an optional element, and you can basically just use Helm charts from a command-line perspective.

There's also a tool called Kompose that takes your Docker Compose YAML: the original Docker Compose concept would let you say, this app with this database with this network setting, composing those multiple Docker images into, say, a full stack. Well, you can also just consume that file and make Kubernetes YAML out of it. Kedge is kind of going away; it's a project a team I work with had started but we've kind of given up on at this point. It was a way to shorten the YAML down by a few notches, because you'll see that the YAML in Kubernetes can be a little verbose (though it has a purpose), and there was a way to shorten that down.

And then there are a lot of other things. The ones you'll also see coming out right now are Buildah, Podman, and Kaniko; Kaniko is from the Google team, the other two from the Red Hat team, and they're different ways to build images without Docker. That's a very common request, like, how do I eliminate Docker from my repertoire and toolset? We've been working on that too, so you wouldn't have to use any Docker at all; you'd just use something like Podman. Another thing about these: they don't require a Docker daemon, which is even more awesome, because Docker daemons by themselves require their own care and feeding. Like I mentioned earlier, when I SSH'd into the VM I showed you how to look at the disk space, because if you do a ton of builds you'll sometimes see the Docker daemon eat all the disk space inside your VM, and then you've got to go fix that manually sometimes. So, lots of stuff going on. Buildpacks is a new one, by the way, just proposed by the Heroku team and the Cloud Foundry team.

All right, so, your Dockerfile. For a Java project we use this fabric8 one a lot, but you can pick and choose all sorts of different ones. Whatever base image you pick, though, you then have to know how to get your application into that base image, and the fabric8 one has this concept of /deployments: you drop your fat JAR into the deployments directory, and that's all you're going to do. But I'll show you a different one, only because it's somewhat more interesting. Let me flip over here to my Minikube environment, and let's turn this poller off so we're not wasting CPU cycles on that; we'll come back to it in a second.

Let's go here, to my Spring Boot application. I'm going to bring out Visual Studio Code here; this is my new favorite tool, just so you're aware of it. It's a free tool you can download; it's not Visual Studio, it's Visual Studio Code, a lightweight editor, but the Java language support comes from Red Hat now, and I work with the team that does this; we've had nine million downloads of the Java support for VS Code. So here's a little Java application, a little Spring Boot application, pretty straightforward stuff. If we look at the pom.xml, nothing unusual here, because it doesn't really matter what the application is at this point; anything runs in a Kubernetes architecture, at least if it runs in a Linux container. It has spring-boot-starter-web, it has devtools, which is nice because you can make code changes and see the result, and then of course it has a nice little REST controller endpoint here. The one we'll focus on right now is this one at the root level, and it basically returns, it looks like, Aloha. So let's see if we can make that work: mvn spring-boot:run. Come on, come on; it takes a few seconds to start up, and I'm just running this on localhost, in this case the Mac's localhost. It does say Aloha Spring Boot, and notice it has this number, which is just a little counter that increments and helps me know the JVM is still alive and well and hasn't been recycled. There's the Aloha string right here, and "unknown", meaning it doesn't know what hostname it's coming from: right here it does a System.getenv on HOSTNAME, and it puts down unknown because it just doesn't know that right now.

And if I want to change the Aloha to something else, let's go Bonjour; actually, I was just in Turkey, so let's go with Marhaba, how about that. Save it, and you can see that with devtools it automatically reloads for me back there, and there's Marhaba. So let's say that's the Java application I want; I've tested it, and that's what my business wants. I can say mvn clean package and get the fat JAR out of that, a standard kind of thing. It's in the target directory, there it is, and I can double-check it real quick: java -jar on the demo JAR, let it run, and there we go. There it is: Marhaba, still fantastic. And now I can build.
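That local check can be sketched like this (the JAR name is illustrative; yours depends on the pom.xml):

```shell
mvn clean package                            # produces the executable fat JAR
java -jar target/myboot-1.0.0-SNAPSHOT.jar   # run it on plain localhost

# In another terminal:
curl localhost:8080                          # the Marhaba greeting
```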

Remember I said we can do docker images; well, we can do a docker build -t 9stepsawesome (helps if you spell things correctly), then myboot, and let's just call this version 1. I don't think I have another; let me double-check real quick with docker images | grep 9steps. I have some others out there, but not this one, so let's do this: I have version 1 now, and put the dot there, and there's my docker build. It happened real fast because I've already cached these layers for this application; again, this is working off the Docker daemon I have inside my Minikube environment. Let's just run that listing again: there is the myboot v1 with the tag, fourteen seconds ago, so that's the one I just created. But let's actually look at the Dockerfile, the one it picked up by default. I'm just going to use OpenJDK version 8 coming from Docker Hub (and again, I pre-pulled it); using that base image, I define an environment-variable kind of thing, I COPY from target over here, and then I say java -jar, just like I did at the command line earlier, and I'm running that Java microservice now, that Spring Boot app. It's very straightforward, and I'm exposing 8080; I'm not exposing anything for JMX or anything like that, just 8080, to keep it very simple.
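A minimal sketch of that Dockerfile (the JAR name and variable name are illustrative):

```dockerfile
# Base image pulled from Docker Hub
FROM openjdk:8-jdk

# Where the fat JAR lives (illustrative name)
ENV JAVA_APP_JAR=myboot-1.0.0-SNAPSHOT.jar

# Copy the Maven artifact into the image
COPY target/${JAVA_APP_JAR} /app/

# Only the HTTP port -- nothing for JMX or anything like that
EXPOSE 8080

CMD java -jar /app/${JAVA_APP_JAR}
```

Built with something like `docker build -t 9stepsawesome/myboot:v1 .` against the Minikube Docker daemon.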

Now I have that thing built, so you could do kubectl run and run it if you wanted to, but the right thing to do is actually have a deployment YAML. I have some deployment YAMLs here, several of them; there's the myboot deployment, this guy right here, so let's look at it real quick. This is what a deployment YAML looks like. To get this to run in Kubernetes, the first thing you do is give it a name: I'm going to call it myboot to keep everything the same, and it has this label, myboot. So the deployment has a label, I want one myboot running, and the label selector is also myboot. This is where it's a little bit verbose; it seems like I'm saying myboot a lot, and that's because the actual pods that get created have labels too, so this is the pod label here. And then we have the concept of the image that I'm going to be using and the port it has to expose. What really matters here is which image is going to be run and what port you want to expose.

namespace I'm in cube cuddle create

namespace let's create one real quick to

have a place to play let's call this my

dev ox just kind of make it different

cube in s my dev ox

just because you create it doesn't mean

you're in it so now I'm in it that's the

one thing this little bit tricky there

so now I mean it with cuban s so I can

say cube and let's just back up a

directory cube cuddle create chef Q

files my boot deployment dot yeah Mon

All right, and let's come over here: watch kubectl get all. Okay, looks like nothing is really running; let's see if this works as I hope it will. There we go: by the creation of the deployment, you can see I've got a deployment and I've got a pod all ready to go, and if my window were a little bigger you'd see I'd have a replica set too. So I've now created those three objects just from that one deployment YAML, and now it's running in Kubernetes. I do need to create a service, so kubectl create -f kubefiles/myboot-service.yml, and now we have a service; there you can see the service is in there, and the service has this NodePort exposed.
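A sketch of that service YAML (values reconstructed from the demo):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myboot
spec:
  type: NodePort    # opens a high port on the Minikube VM itself
  selector:
    app: myboot     # routes traffic to pods carrying this label
  ports:
  - port: 8080
    targetPort: 8080
```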

Now I can do some fun stuff with it. I can say curl; let's do minikube ip first, all right, so I curl that 192.168 address on the node port, and there it is, running in Kubernetes. So it's really a couple of simple steps even if you do it all manually, and this is the full manual way that should work across any cluster, no fancy tooling at all. I just took that JAR and got a Docker image out of it (that might be tricky if a Docker daemon is not available to you; you've got to get that image created somehow), and then you basically run your deployment, and you have it.

And now you can have a little fun with this. Let me come in here and say kubectl edit deployment myboot. I have kubectl set up to run Visual Studio Code as my editor; by default it's just a vi environment, like you get with git, so keep that in mind. I like that trick, by the way: KUBE_EDITOR, right there. You define an environment variable called KUBE_EDITOR, and in my case I mapped it to code with the -w flag, which tells it to wait on the window, so I can now use an editor of my choice to do some editing here. Let's say I want three replicas: I hit save, close it, and watch what happens. I now have three app servers running, they're all running, and here's what's cool about that: let's curl it, curl it, curl it. Notice what happened there with the zero: it's on a different computer. This is the hostname that Java sees, and that one maps to this one right here, that pod right there; you can see it starts at zero because it's the first time anybody has interacted with that JVM. If I come over here and run curl a couple more times, there's another zero; that right there is this one right here, the first time anyone touches that JVM. That's why I actually like having that little environment variable in there: I can watch the load balancing across the different JVMs, and you can see it's fairly random as far as the balancing goes; in this case I went back to the original one, which is that guy right there. So again, each of these JVMs is running on 8080, all thinking they uniquely own their machine; that's the beautiful part about the Linux container, the beautiful part about running in a Kubernetes at-scale architecture. I can declaratively say I want a lot more of these things to run, and that level of elasticity is pretty powerful all by itself.
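The editor trick and the scale-up can be sketched like this (assuming VS Code's `code` command is on the PATH):

```shell
# Use a GUI editor for `kubectl edit` instead of vi.
# The wait flag matters: kubectl applies the change only after
# the editor closes the file.
export KUBE_EDITOR="code -w"

kubectl edit deployment myboot   # bump spec.replicas from 1 to 3, save, close

# The same scale-up without an editor:
kubectl scale deployment myboot --replicas=3

kubectl get pods                 # three myboot pods, each its own JVM
```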

If I want to come back and say kubectl edit deployment myboot again, I can change something else: we could add a label, for example (you can see there's the label right there), or change the type of rolling update; you can see there are a bunch of other things that were added for me that I didn't have to worry about. But let's just make it one replica again: because I said one, it takes two of them down, actually removing those two, so I'm no longer using the CPU and memory associated with them, but I still have a working application. Let's go back to a curl; there's my curl, and that's the original one that was running, and I'm only dealing with one now. So that's kind of the point of the pattern I mentioned: you've got to build your image, you've got to have your deployment YAML and your service YAML, and you're ready to go. That's what it takes.

it takes there and then so building your

images can come in a lot of different

ways I use docker build primarily as a

way to build an image but you can use

something like fabricate or jib there's

an example of fabricating jib in the in

the github repo and I don't have an

example of helm charts though that's a

very popular option as well okay and I'm

still working on some examples of docker

and pod man I'm sorry bilder and pod man

but those were only run on Linux

distributions from Red Hat right now I'm

trying to get them to ported also the

Windows and Mac because that tends to be

where a developer lives right on a Mac

or a Windows machine

Kaneko comes from the Google team as I

mentioned earlier okay in the case that

the fabric game a plug-in you just do

something like see the setup command

right here and then fabricate deploy

we're just kind of cool all right

meaning you do basically no docker file

no yeah Milles just run and it'll try to

make the right assumptions that it can
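As a rough sketch of that setup — the coordinates are the real fabric8 ones, but the exact version and any extra configuration for your project are assumptions to check against the fabric8 docs — the plugin lives in your pom.xml roughly like this:

```xml
<!-- illustrative fragment: fabric8-maven-plugin in the <build><plugins> section -->
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-maven-plugin</artifactId>
  <!-- version omitted here; pick a current release for your project -->
</plugin>
```

After that, running `mvn fabric8:deploy` builds the image and generates and applies the Kubernetes YAML for you, with sensible defaults inferred from the project.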

All right, now there's one important element to know about Java running in Docker — this is an important tip you guys should walk out of here with — and that is: by default, the JVM does not respect the container constraints placed upon it. The JVM was built long before virtual machines became popular, long before Linux containers became popular or were even known to exist. The JVM assumed it was going to be installed on a nice four-core machine with 64 megabytes, you know, back in the late 90s, and that it was going to own the whole machine. That's what the JVM was built to do — it's called the Java Virtual Machine for a reason; it's its own VM. So that's a bit of a gotcha in Java; you just have to be aware of it, and be aware of how to work around it.

Okay, so by default, if you simply use a constraint — like a memory constraint here at docker run, or within Kubernetes — it will blow out its memory, because it assumes it has access to all the memory. So let's have a little fun with that — let's go poke at this guy.

All right, so we still have our one pod running right there. Actually, let's streamline this a little bit: kubectl get pods — there's our pod running. And if I come over and curl it, I have another endpoint on it called /resources — look how much memory and how many cores it thinks it has.

Let's go look at the code real quick. This /resources endpoint is simply doing Runtime.getRuntime().maxMemory() — that's how much heap it thinks it has — and availableProcessors(). So it thinks it has access to all the CPUs I've given the VM — I've only given it two, that's defined right there — and it reports 1.3 megabytes or — I'm sorry, that's 1.3 gigabytes — which is basically about 25 percent of the total memory; that's how much it has available for heap. That's the default calculation, the standard thing for Java: okay, how much memory do I think I have, and how many available processors?

Then we have another method called consume, and what consume does is basically build up an immutable string and try to use up to 80 percent of the memory it thinks it has — that's all. So it's pretty straightforward what's happening here. But let me do this.
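A minimal sketch of those two endpoints — the method names and the 80-percent figure follow the talk, but this is plain Java rather than the actual Spring Boot controller from the repo, and the capPercent parameter is added here so the sketch doesn't really OOM:

```java
import java.util.ArrayList;
import java.util.List;

public class MemoryDemo {
    // What the /resources endpoint reports: the heap ceiling the JVM
    // believes it has, and the CPUs it can see.
    static String resources() {
        Runtime rt = Runtime.getRuntime();
        return "Max memory: " + rt.maxMemory() / (1024 * 1024) + " MB, "
             + "Cores: " + rt.availableProcessors();
    }

    // What /consume does in spirit: grow string data toward a fraction of
    // max heap (the talk uses ~80%; pass a tiny fraction to stay safe).
    static long consume(double capPercent) {
        long target = (long) (Runtime.getRuntime().maxMemory() * capPercent);
        long used = 0;
        List<String> hold = new ArrayList<>();
        while (used < target) {
            hold.add(new String(new char[1024])); // ~2 KB of chars per chunk
            used += 2048;
        }
        return used;
    }

    public static void main(String[] args) {
        System.out.println(resources());
        System.out.println("Consumed ~" + consume(0.0001) + " bytes");
    }
}
```

Inside an unconstrained-looking container, resources() is exactly where the misleading 1.3 GB / two-cores numbers come from.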

Let's do this: while true, do curl — grab this one... no, not that one, this one. All right, let me make it easier for myself, I'll just copy it out of here — like I said, I do have all these things documented. Is it not that one? No... here we go — while true — where did I put that one? Wow. There we go. So let's just do this: I'm going to copy and paste from here.

Okay, so there's the Spring Boot app we're talking to right there, and what I want to do is just grab this little curl command and come down to this other bottom window here. All right, so there it is. And this /resources — let's double-check that: there's how much memory it has. And let's just call the consume method on it now.

Now — well, now that I think about it, this is actually okay, because I haven't actually applied the container constraint yet. I forgot a step there; let's do that real quick. Notice there are two different Dockerfiles here — I'm going to ignore that for now — but I'm going to come over here and look at my kube files again. All right, there's myboot-deployment-resources — let me show you what that looks like. What it is: it simply has a resource request and a resource limit, so it's very much like the deployment you saw earlier, but it basically says how much memory it wants up front — is there a node available in the Kubernetes cluster's scheduler to schedule that requested load? — and then what the overall limit is, and that limit is actually a hard limit.
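The requests/limits section of that deployment looks roughly like this — the 400-megabyte and one-core limits match what the demo uses later, while the request values and names here are illustrative:

```yaml
# illustrative container fragment of myboot-deployment-resources.yml
containers:
- name: myboot
  image: myboot:v1
  ports:
  - containerPort: 8080
  resources:
    requests:          # what the scheduler must find free on a node up front
      memory: "300Mi"
      cpu: "250m"
    limits:            # hard ceiling enforced via cgroups; exceed it -> OOMKilled
      memory: "400Mi"
      cpu: "1"
```

The request is a scheduling promise; the limit is the hard wall the demo is about to run into.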

So I can come over here and say kubectl get deployments, look at that one, and I can say kubectl replace — we're going to replace that deployment we had earlier with myboot-deployment-resources. Replace it, get pods, and you'll see that pod getting recycled. All right, we have a new one up and running — let's loop against it again. So there it is — and notice again it went to 0, because I changed the deployment, and of course that terminates the old pod and recreates the new pod under the new spec. So this pod now has those container constraints. Let's see here, and then see what happens.

Okay — so when I basically say use up to 80 percent of the memory it thinks it has available, if you watch carefully, it went to OOMKilled — in other words, there's a new pod that was born, and notice it started back at 0. Now, this is an important element to understand: Kubernetes by default is trying to keep the desired state you've requested running at all times. I said I want one replica of this image running somewhere in the cluster, and it basically restarted that pod for me.

And here's one thing that's kind of cool about this — I've shown this to a lot of different people at this point. I had one senior manager talk to me about it who said: this is awesome — my developers write really bad code that uses all the memory, and this means it restarts automatically. Yeah. And actually, when you see the liveness probe and readiness probe, you can be very clever about your readiness probe. In one case I have an example where I show how we basically ensure that all the in-memory session state is replicated to the new pod before the old pod is torn down; therefore you can have full in-memory state — like traditional JSF/JSP session state, like we used to use for our shopping carts back in the day — and it's all good, because you basically determine, with the readiness probe, when the rolling update can occur.
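A liveness/readiness pair on the container looks roughly like this — the /health path matches the endpoint mentioned in this talk, but the timings and exact wiring are illustrative, not taken from the talk's repo:

```yaml
# illustrative probe fragment for the pod spec template
livenessProbe:            # restart the container if this starts failing
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
readinessProbe:           # keep the pod out of the service's endpoints until ready
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 3
```

The readiness probe is the gate for rolling updates: a new pod receives no traffic, and the old one isn't torn down, until readiness reports success.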

But this also means: for all those programs you wrote and deployed in production where it eats its thread pool — you know, you forget the finally block and you don't put the thread back in the thread pool, or the connection back in the connection pool — and the application is still running, the JVM is still up, but it won't respond at all: this will fix that too. It'll restart it for you automatically.

So let's see here — we're running this guy. Let's actually hit it again and look at /resources: it still thinks it's got 1.3 gigs of RAM and two cores. So even though the constraints were applied, the JVM is completely oblivious to them by default, and if I hit consume again, it blows up the JVM and it goes OOMKilled — out-of-memory killed.

The real point of this is — let's go here: kubectl get pods, okay, there's this guy; kubectl exec -it, pod identifier, /bin/bash. And actually I'm going to look back at my notes to make sure we do this correctly — we're in step three, we're moving fast here — step three, there we go.

So notice I've basically exec'd into it, just like you see there, and I can do things like ps -ef, where you can kind of see there's the JVM process that's running. I can say java -version — notice I'm interacting with it from the inside. I can also curl localhost:8080, and now the counter is at 26 — you might think it got skipped, and it did, because I burned those numbers internally. But I can see my Java version there — I'm on 1.8.0_181 as far as the update number goes.

And I can look at other things, like which operating system this is running: it's a Debian-based one, based on the image I grabbed. I can look at free memory — that's also good to know. And here's actually an important point: free is also wrong. It's one thing to blame the JVM for being wrong, but this Linux utility is also wrong — it doesn't have 4.4 gigs of memory available to it; it was constrained to 400 megs. So don't use free as your way to say, wait a second, things aren't quite right here.

Instead, go here: this is where cgroups store their internal constraints. So realistically it's not Kubernetes that's doing this, and it's not Docker doing this either — it's actually cgroups, a Linux mechanism, saying what your actual, real memory constraint is. And when I set that up — let's look — it's this limit_in_bytes right here. You can kind of see that's how many bytes it has access to: basically the 400 megs I gave it inside the deployment YAML.

So if I come up here and cd into the kube files directory, I want to show you that YAML file one more time so we're all aware of it. If you look here, I basically said: that's my deployment, like I had earlier; the difference is I have these constraints — you're limited to one core and your limit is 400 megs of RAM — and that file you see right there is what shows up right here inside the actual container. So that's an important thing to understand: the JVM is not picking this up by default.

However, there is a way to make it pick it up, and that is if I come over here and use the experimental options — wait, did I get these two commands right? java — that pasted incorrectly — there we go. Notice now the heap calculation is about 112 megs of RAM. In other words, it's going back to that quarter calculation — about 25 percent or so, like it had earlier — but instead of using all the memory on the VM, it's using just the memory it was constrained to within the actual container constraint.
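On Java 8 (8u131 and later), the experimental flags in question can be baked in where the container starts the JVM — a hypothetical Dockerfile sketch, with the base image and jar name as placeholders; on Java 10+ the JVM is container-aware out of the box via -XX:+UseContainerSupport, so these flags are no longer needed there:

```dockerfile
# hypothetical image: base image and jar path are placeholders
FROM openjdk:8-jre
COPY target/myboot.jar /app/myboot.jar
# Tell the Java 8 JVM to size its heap from the cgroup memory limit
CMD ["java", "-XX:+UnlockExperimentalVMOptions", "-XX:+UseCGroupMemoryLimitForHeap", "-jar", "/app/myboot.jar"]
```

With those flags, the ~25% heap calculation runs against the cgroup limit (400 MB here) instead of the host's total memory — which is exactly the 112 MB figure in the demo.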

Now, you might be thinking — and I have heard this a lot — well, we're just not going to put that in my deployment YAML. That's what developers like: I'm going to skip that step, I won't worry about this JVM thing blowing up. Your operations team will put it back, okay? Because they're not going to give you unlimited room to run your application — they're not going to let you have unlimited memory and unlimited CPUs on that host. Because often, when you deploy a Kubernetes node, you're not deploying it to a little tiny Raspberry Pi — you're deploying it to a big honkin' machine running at a public cloud provider or on a big data-center server. In other words, that machine may have 128 gigs of RAM and 16 cores, and you should not eat all of it for your one little application. So they're going to apply these container constraints — if not in the deployment YAML, then on the namespace itself, in which case you're still going to have to abide by them. So just being aware of this is important, because by default it does blow up.

Okay, let's see here — and actually, let's see what happens here. Yeah — I don't know if you noticed, but I was actually inside that pod, and I just ran consume on it, which means it got killed and it threw me out. So — kubectl get pods — there it is. That guy was OOMKilled. You can also see the CrashLoopBackOff — that's another thing you'll see a lot in a Kubernetes architecture. In this case it's simply saying: back off for a second before trying to restart that — it's going to take a few seconds to restart. And notice the number of restarts; that's an important number for you to understand as well. Why is it restarting? You should be looking into that.

And if I come over here and get the pods — kubectl describe pod — the describe command is an important one for you. I can kind of come over here and look at it, and it'll tell me some information: there are the resource constraints that I applied through the deployment YAML; there's the last time it restarted, due to OOMKill — so you can get some information about why that pod is misbehaving — and there's the port number and things like that, and the image name. So: why did it die?

You can also describe your deployment — describe deployment myboot — and see some information about it as well. Again, the deployment provides the template — they call it the pod spec template — so there's the image name, the port number, and again those constraints; you can kind of see what they are there. But this helps you know: when you did kubectl create -f or apply -f, did it actually take effect on the actual object running in memory — on that deployment artifact?

All right, let's keep going. You guys still okay? Have we worn you out yet? All right, we've got a lot more to go.

Okay, so we kind of showed you a little about this already: you can basically "SSH" into your container and poke around. This is a very powerful capability if you're trying to figure out why it's not behaving correctly. As a matter of fact, I want to get into that actual machine — into that actual container — and as you saw me do earlier, I can basically just run java from the command line and see what's going on: what version of the JVM is it really running? Because I don't know where I pulled that image from, or how it's set up. I can look at the logs directly associated with that application server; I can maybe figure out why it can't find the database it's supposed to connect to — can I ping it? — things like that. So it gives you a nice tool for basically understanding why things are not running correctly, if you can get in there.

So it's the exec command — exec -it — and then you'll see me use something like /bin/bash. You can also run commands directly, like you can see here — basically looking at that limit_in_bytes from a container-constraint standpoint. So that's a very powerful way to go figure out what's going on inside the actual container.

There are a lot of other things you can do in there, but one thing I'd caution you to think about: don't do this in a production environment. It's easy for you to take down the pod once you're inside it — you might start a process that basically uses the rest of the available memory, and of course it gets killed again. As an example, you don't really want to start two JVMs inside that container; you want, hopefully, just the one you intended to be running. So there are things you don't really want to mess around with from a production-setting standpoint — but for development I use it all the time: why is that application not running?, as an example.

We kind of did that rather quickly, but let's go ahead and talk about logs next, because I think logging gets to be interesting.

Okay: kubectl get pods — if I come over here you can kind of see there's my pod that's running — kubectl logs, and let's look at the logs associated with it. So you can see the Spring Boot logging; you can see the endpoints on my REST controller: /health — you're going to see /health, /consume, /resources; these things have all been called. That is the log associated with it. So kubectl logs is pretty straightforward by itself, but there are some other tips you should be aware of.

One is this concept of -p. If you have a failing pod — for some reason the pod is just not coming up, and that's going to be a common thing: it can't connect to its database and therefore it keeps failing over; for some reason it can't run something in particular — -p says: look at the last failed pod and pull the logs from it. Last failed pod, meaning: I want to see what's failing and understand it better. There are also a couple of other great tools that I like using a lot.

Now, one thing to understand: you notice I do System.out.println up there. Anything that goes to standard out is available to the logs, so always log to standard out. If you log to a file, then you have to come up with another way to get that file exported to make it visible. If you log to standard out, that means anything running in your company's cluster that does log aggregation — for instance, a lot of people use EFK: Elasticsearch, Fluentd, Kibana — will grab all those logs from standard out and aggregate them for you in a central location. Or ELK, with Logstash instead of Fluentd — but Fluentd is the default in the Kubernetes ecosystem.

So you can kind of see I can say kubectl logs, but I can also use this thing called kubetail, which is nicely tailing the logs. There are also stern and kail, and I like stern a lot — it's one of my favorites right now. So with stern I can say stern myboot, and I'm now basically monitoring myboot. And if I come over here and run — I guess I can — let's bring up this consume again. All right.

this consume again alright

ok we killed it and notice that said my

boot killed ok we killed it but it

should be coming back to life

get pods alright see it says om killed

but it should come back online come on

come on restart that pod and so it

should be restarting ok so it's trying

to get events this is another good tip

get events basically will show you what

is happening from the kubernetes cluster

standpoint you can kind of see it's

trying to restart that container and

let's see alright

trying to try to notice also the 0 of 1

so basically this is an important thing

to understand also is I have 0 available

and ready that's an important statistic

to look at if you're dealing with an sto

environment where there's two containers

in that pod you'll see two of - alright

or one of - and until your two of - you

know you not ready to go so it looks

like it brought my pod back up finally

ok my pod came back up and notice Stern

starts logging automatically because

Stern is just looking for anything that

matches the my boot phrase which is kind

of Awesome you can also do something

like this with kale okay so I'm gonna

show you kale and and you can basically

say cube in s let me see what names my

salmon

so kick kale my dev ox is it as a dash

in I sometimes forget what it is there

we go okay

and let's try consumed there we go I

killed it again

cute cuddle get pods all right so again

I killed it one more time

but notice what kale kale allows you to

kind of look at all the logs across the

entire namespace so if you're dealing

with a lot of micro services where I

might have four or five or six things

calling each other I can use stern or

kale as a way to say let me look at the

logs across everything because I'm not

sure where the problem is so I really

like those two tools so those are good

tips for you guys to be aware of Stern

And you can see it's trying to bring my little pod back online there, and I'm curious to see how long it will take. One thing you'll notice, too, especially on a system that's kind of overloaded like mine is right now: it takes a little bit longer for those subsequent restarts if you are really abusing it like I am right now and beating it up — so you can see it's taking a little bit longer on that CrashLoopBackOff. It should have access to the image, because we didn't change anything about that, but you will sometimes see that it can't find the container image; in this case it did — right, the container image is already present on the machine. And then — let's see here, oh, about that window — there we go. You can see the back-off: basically it tries, and then if it can't get it going, it kind of backs off for a second or two and gives it another go. So it's still trying to get that pod restarted; we'll come back to it in a second and see if it's happened.

All right, so the logs are very powerful. Just be aware that you can use stern and kail — great little tools for that — and there's also one called kubetail, as well as just plain kubectl logs. But remember this -p: the last failed pod. Also, with kail you can do a look-back — look back over the last hour, show me all the logs over the last hour — which is awesome; you can kind of do things like that.

Okay — environment variables and ConfigMaps. I told you we've got a lot of things to cover here. All right, so the one key thing to understand, as I mentioned: you don't have to have a configuration-type solution per se, because there's one built into Kubernetes. It may or may not meet your needs, but it is nice if you think of twelve-factor apps — the concept that the app config is separate from the application code. Therefore, if you move the app from development to production, things like your database connection string will change, you know your JMS broker identifier will change, the user ID and password, or whatever it might be. Those kinds of things can be externalized into, let's say, properties files, or into some other externalized reference point that you can then update and also store in version control — that's kind of the idea. So you have two things within Kubernetes out of the box: one is just environment variables that you can manipulate, and the other is this thing called a ConfigMap.

Okay, so let me show you that one. I'm just going to open this up — and again, all of this is on GitHub; everything I'm showing I've documented, and I've tried to make it as clear as possible so you can follow along and do this as homework. Is anyone going to try this as homework at all, do you think? Only one of you? I've not done my job if only a few of you think this is worth the homework — you've got to be thinking: okay, this is kind of fun stuff, I want to try it on my own.

All right, so let's go over here to that little Spring Boot app I had earlier. Notice that it has this thing called configure, and I have this concept of getting properties from the environment: I want the DB connection, the message broker, this thing called greeting, this thing called love — and I just use the configuration parameters to pull them in. Meanwhile, it looks like my pod is running again — that's good.
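The configure endpoint's lookups amount to reading environment variables with a default fallback — a plain-Java sketch; the variable names here are assumptions following the talk's greeting/love/DB-connection properties, and the real code is a Spring controller rather than this standalone class:

```java
public class ConfigDemo {
    // Mirror of the talk's lookups: use the env var if set, else "default" —
    // which is why the demo prints "default default default" before anything is set.
    static String lookup(String name) {
        String v = System.getenv(name);
        return (v == null || v.isEmpty()) ? "default" : v;
    }

    public static void main(String[] args) {
        System.out.println("GREETING=" + lookup("GREETING"));
        System.out.println("LOVE=" + lookup("LOVE"));
        System.out.println("DBCONN=" + lookup("DBCONN"));
    }
}
```

Because the values come from the process environment, changing them from outside requires restarting the JVM — which is exactly why Kubernetes recycles the pods when the deployment's env vars change.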

kubectl get services — what was that NodePort? Okay, good, there's the NodePort. And notice it's basically saying 192.168... oh, I've already forgotten — what was the IP address? minikube ip — so I use these tools a lot — okay, .99.102 for this one, and then the NodePort, 31972, yep. And then the endpoint was called configure, right? So right now it basically says default, default, default, because I haven't set anything at all — there are no environment variables set for any of those things at this point in time. And I can set them using another command you can see right here.

Actually, let's do this real quick: let's scale things up — it might be more fun if we scale things up. Okay, I'm just kind of picking and choosing there, but watch what happens when I say I want two of those: you can kind of see there's a new one being born right there. You can actually scale these things up, and then of course you can take them down — no big deal. We showed you configure; now let's actually set the environment.

And watch what happens: I'm going to set the environment variable, and notice the two pods are killed and new ones are being born. So that's an important thing to understand when it comes to changing your environment: it assumes you're going to recycle the pod — it assumes you're going to restart that JVM. And that's an important element to understand, because you might not want to do this in a production environment all that often, right, because you take everything down. But if you notice: love equals aloha now.

Okay, so if I want to come up here and basically set something else, like greeting — I want to set that to howdy — you'll notice that again it's going to create new pods. I had two replicas, so two are coming up, and then it's going to have to tear down the old ones. And so if we come over here and look at the curl command one more time: there's howdy for greeting, and love is aloha.

So if I come on over and look at the deployment — kubectl edit deployment myboot — remember it's declarative as well: there are the environment variables set inside it. So I can basically determine that I want to set those environment variables right up front in the deployment YAML as I deploy it, or I can add them on the fly, imperatively, as you saw me do here. Again, this would not be what you want in a production environment — there you would simply have these things externalized into, let's say, their own file, and therefore you just edit the one file for production versus staging versus dev versus something else, if that's what you wanted. And that's really where a ConfigMap comes into play.

And if you want to unset these variables, you can kind of come in here and see that minus sign. So I can come over here — let's close this guy, because we're waiting for him to close — okay, I can copy — there we go: unset love, and again you see it tearing the pods down; unset greeting, and again you'll see it tear them down. Okay, so you're just going to have to be cognizant of that.

But there's this concept of a ConfigMap. And you notice it says: create cm my-config, from an env file with those properties. So let's go look at the config — let's bring up the editor. All right: in one properties file there I have greeting and love set up as just little key-value pairs, and I have this other properties file with maybe a database connection and a message broker. Basically, your ConfigMap will load the configuration from those property files. Now, there are a lot of different ways to mess with your ConfigMap — you can kind of go crazy with it, have a lot of fun with it — but you can see it's a pretty straightforward command: create a ConfigMap from that properties file.
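The object that command produces looks roughly like this — the jambo/amor values match what describe shows later in the demo, while the key casing and trimmed metadata are assumptions:

```yaml
# roughly what `kubectl get cm my-config -o yaml` would show (metadata trimmed)
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:                 # each key-value pair from the properties file becomes an entry
  greeting: jambo
  love: amor
```

It's just a named bag of key-value pairs living in the cluster; nothing consumes it until a deployment explicitly references it.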

Okay, kubectl get cm — now I have a ConfigMap out there. And notice, when I'm using kubectl and get: I can say get pods, get cm, get deployment, get services — basically all of these are just objects that I'm interacting with.

So, get cm — a couple of things you should be aware of. I can also say I want to see the YAML version of that object — notice I did the -o yaml there — and I can look inside it; I can also export it as JSON. All right, and here's a further tip for today: if you actually use --export, you can now have an artifact that you can basically put in your version control. So in other words, you can mess around with this thing and decide, okay, I've finally got it where I wanted it to be, and then export the YAML from it as well. So that might be a way to kind of get started if you're still new and trying to figure out what the YAML should be — that might be a nice thing for you.

So if we look at our get cm — let's see what describe shows: describe cm my-config. All right, there we go: we have our greeting, which is jambo — if you're Swahili, down in Kenya or on the east coast of Africa — and love is amor. Okay.

But let's go look at our little application: minikube ip, and kubectl get service — that was our NodePort — and I say curl, right, the 192.168 address, colon, 31972, /configure. And you notice it did not apply yet. All right, that's an important thing to understand, because just because I created the ConfigMap does not mean my deployment is aware of the ConfigMap. So I want to kind of show you that, because it can be very confusing — it has gotten me multiple times, right? It's like: uh, okay, I made the ConfigMap — how come that configuration is not showing up? And so you actually have to have a different deployment YAML.

Okay, let's go over here. So we looked at the resources one; let's look at the configuration one. Again, notice we have the constraints we had earlier, we have the port number and the image we had earlier, but now we can basically say: get your environment from the ConfigMap. So this means you can change the ConfigMap separately from the deployment.
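The wiring that makes the deployment aware of the ConfigMap is roughly an envFrom block on the container — a minimal sketch, with the names assumed to follow the demo:

```yaml
# illustrative container fragment from the configuration deployment YAML
containers:
- name: myboot
  image: myboot:v1
  envFrom:
  - configMapRef:
      name: my-config   # every key in the ConfigMap becomes an env var in the pod
```

Because the reference is by name, you can edit or replace my-config without touching the deployment — the pods just need to be recycled to pick up the new values.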

so I want to come over here now and say

a cube cuddle watch what happens also my

replicas here is one that's important to

note because right now I have replica

set it to but I'm gonna overlay cube

cuddle replace now chef cute files my

boot deployment and this one was called

configuration I'm gonna replace that one

it's gonna kill the two old pods based

on the previous deployment and build new

pods and in this case it's only gonna

give me one okay

because it's only give me one because I

said one replica and let's see here I'm

gonna go back to curl so I did my curl

command notice I didn't change this port

number the node port cuz that's tied to

the service even though the pods I've

been recycling that's tied to the

service so if we look here at the

service by the way cube cut' all get

services this is an important thing to

understand all right you can see there

it is there's the cluster IP associated

with it there's a note port we've been

looking at that but let's do this

described service my boot look right

here look at the endpoints okay you can

depending on the number of replicas you

have I'm actually let's pin up another

replica cube cuddle edit deployment my

boot alright let's kind of let's go at

three here close that down ok cube

cuddle get pods

there should be three of them coming up

all right describe and let's just look

at one of these pods real quick didn't

attend oh there we go all right look

here all right see the IP address of the

pod okay let me come back over here and

describe my service one more time notice

there's three IP addresses they're all

on 8080 and here is eleven for one of

those pods and that's this guy here in

the middle. so the concept of endpoints

means the service is automatically

picking up pods that meet its label

selector and including them behind that

load balancer. so that's kind of where

the magic happens, this concept of

the endpoints, and there's basically one for

every pod that meets the label selector

that shows up, okay? so in this case the

label selector is app my boot. if I

come over here and say kubectl

get pods

--show-labels, there's my boot. all right, the fact

that they have a matching label is what

it means to be part of that label

selector, which is part of that service.
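
The label-selector wiring described here can be sketched in YAML; the names (my-boot, app: my-boot, port 8080) follow the demo but are assumptions, so adjust to your own manifests:

```shell
# a Service selects pods purely by label: any pod carrying
# app: my-boot becomes an endpoint behind this load balancer
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-boot
spec:
  type: NodePort
  selector:
    app: my-boot          # pods with this label become endpoints
  ports:
  - port: 8080
    targetPort: 8080
EOF

# the endpoint list updates automatically as matching pods come and go
kubectl get endpoints my-boot
```

The NodePort stays stable across pod restarts because it belongs to the Service, not the pods.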

okay. so now we got those guys up and

running, we have all of them, and we can

look at... let's go back and look at

the curl command again, okay? you

kind of see that we have, and if you

notice, that's a different

pod from the one I did here, okay? and you

can see it still has that same

configuration. so everybody now has the

same configuration, and that's just one

nice way to kind of deal with

configuration, the concept of

ConfigMaps, okay? how are we doing on time? all right,
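
A minimal sketch of the ConfigMap pattern just described, assuming a map named my-config consumed as environment variables (the names and the greeting value are illustrative, not the demo's exact files):

```shell
# create the config separately from the deployment...
kubectl create configmap my-config --from-literal=GREETING=jambo

# ...and the deployment consumes it; changing the ConfigMap and
# recycling the pods changes behavior without touching the image
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-boot
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-boot
  template:
    metadata:
      labels:
        app: my-boot
    spec:
      containers:
      - name: my-boot
        image: 9stepsawesome/myboot:v1
        envFrom:
        - configMapRef:
            name: my-config
EOF
```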

we only have about one hour left.

you guys still with me? we're getting to

the good stuff now, okay? has

this been fun so far? okay, but let me show

you kind of the real fun stuff, and

we're gonna get to more of the hard stuff

at this point. so I mentioned earlier we

got things like Istio, and we got, you know,

Kubernetes running everywhere,

but you gotta still understand things

like service discovery

and load balancing, and then the liveness

probes and readiness probes, to really

understand how Kubernetes is operating. so

by default, service discovery is just

based on the name, a DNS entry. so I can

basically call it customer, call it

producer in this case, call it HR, call it

string and that's all I have to do so if

I'm using rest template from spring I

simply just refer to it by name so you

don't have to have a service discovery

mechanism like Eureka or zookeeper or

something else and do a lookup you don't

have to okay you can because you might

have services to live completely outside

your kubernetes cluster that's when you

might have to have some form of zoo

keeper or Eureka or some other service

discovery mechanism and for most of us

we work in large organizations there is

some form of service discovery solution

that you guys have already worked hard

on and you still want to use I

appreciate that but my default you get

this for free within the cluster okay so

the concept that you can refer to my

name is an important element but because

you can refer to it by name you can kind

of do some really fun stuff so if I look

over here at this Java code let's let's

close someone whose things down so don't

get too confused here yeah let's close

that down okay

and you can kind of see it has this

thing called calling another and it's

gonna call my node the service my node

in a space called your space and so if

you do go cross namespace you do have to

say the name of the namespace otherwise

you can just call it by the service name
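
The in-cluster DNS naming just described follows this pattern; the service and namespace names match the demo, but the port is an assumption:

```shell
# from any pod in the same namespace, the bare service name resolves:
curl http://my-node:8000

# cross-namespace, append the namespace:
curl http://my-node.yourspace:8000

# and the fully qualified form also works:
curl http://my-node.yourspace.svc.cluster.local:8000
```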

so I'm going to deploy a Node.js service

and have this Java application call that

Node.js service, okay? again, I have all the

instructions listed here too. let's go

back over here, and for this

environment that's kind of set up here

for service discovery, you can kind

of see we're gonna create a new

namespace right here. I'm just gonna come

down to this window here, create a new

namespace, it helps if you're in the

right directory, there we go,

all right. and if I come over here and look

at nodejs, I have this little Node.js

application, if I come over here and

npm start, curl localhost on 8000, it

writes hello Node.js, and you can kind of see

the one two three four, okay, no big deal

there.

I can basically test it locally, that's

just the way, so now I can do a

docker build on it. so I'm going to do a

docker build, remember the pattern,

right? you gotta get your base image,

you gotta do your docker build. I can say

docker images now, grep 9stepsawesome, okay,

you kind of see this is the my node v1 I

just created right here, just, where

is it, my node latest, oh, what did I use?

okay, why is this 14 hours

ago? I'm not sure,

sometimes that time does show up kind of

oddly, okay. so I have that, and actually, by

the way, because the docker daemon is

exposed, I can say docker run -it,

let's double check to make sure that I

have what I think I have, okay, just in

case something got messed up here:

9stepsawesome my node v1. all right,

we're now going to run it there, and

remember my minikube IP address, again

this is running inside the minikube

VM, okay. and so I can say curl, I can

talk to this guy on

80... nope, 8080, all right, so 8080... oh,

it's on 8000, it would help if I

remembered that, okay, 8000, here we go,

curl it on 8000, there we go,

so there it is, there's my little

Node.js application, and now I have to do

the same pattern as we saw with Java, I

need to have the deployment YAML

and service YAML. so you kind of see

here's my deployment YAML, and

we'll add it to yourspace. kubectl

get ns, though, I'm still in my devoxx

namespace. all right, there is yourspace, but I'm

not actually using it yet, so let's go

here. all right, we're gonna do our

deployment and we're gonna do our

service, okay. kubectl get pods -n

yourspace, all right, so there's my node

running, and if I do get services, okay,

there's the NodePort for it,

curl 192.168.99.102 on 31411,

all right, there's the hello Node.js,

okay, now it's running as a pod, okay? but

because of that I can now talk to it,

okay? because now we have Node.js running,

in theory I could go back to my Java

application now and talk to that Node.js

application. and if you look at the code,

it's again very straightforward: my node,

yourspace. and so we put my node, that's

kubectl get services, my node, okay, and

it's in yourspace, that's really the

trick to it. so curl... kubectl get

services in my devoxx namespace, okay, so curl

192.168.99.102 on the my boot port, so I'm

actually calling my boot's callingAnother,

that's the method, I think, if I got

that right,

callingAnother. all right, so I'm actually

talking to the Spring Boot application,

but it's actually returning from Node.js.

let's look at stern, I should do this

in yourspace...

I don't know if I have any logging, I

don't have any logging inside that

Node.js application, but basically

I'm calling through the Java application

into the Node.js application. so stern my

boot, let's see if that's... there we

go. so you can kind of see the

Spring Boot application is calling into

Node.js. all right, so that's one thing nice

about service discovery, it's very

straightforward, you don't worry about it,

it's kind of a built-in thing. again, if you

do go outside the cluster though, you can

actually set up a service without

endpoints, you can set up a service that

basically is a proxy, if you will, for

some external service outside the

cluster, so you can do that sort of thing

too. but by default, if it's a service

within the cluster, it's automatically

available through DNS, okay? lots of fun

stuff here that kind of walks you through

that, but let's keep moving, we got more

cool things to show you, okay.

the liveness and readiness probes, so this

is where really the magic of

Kubernetes starts to happen, and I

know it's kind of hard to read right

here, but let's kind of look at this a little

bit more. you have this concept of a

liveness probe and readiness probe that you

put in your deployment YAML file, and in a

word, you basically define, as the developer

or architect who's deploying this thing,

what are the checks that make sure that

I am good or not good. so they are two

separate checks, and ready comes after

live. so live goes first: if you're not

live, meaning you don't return a valid,

like, 200 to live, or a valid answer to live,

Kubernetes assumes you're dead, that

you're just a walking zombie, it shoots

you in the head and starts you someplace

else. so you always want to be alive, okay?

and if you notice, what I did here is I

mapped the liveness probe to 8080, my

Spring Boot, my fat jar application, my

Vert.x application, my MicroProfile

application, path of slash, the root. in other words, I

not only want the virtual machine, right,

the container, up and running, the machine,

I want my JVM up and running. I'm not

alive until my JVM is up, that's how I

like to think of it. so I try to

basically map liveness probes to

something in the JVM that the JVM can

respond to. you can also do things like,

let's say this is an old C++ application

that doesn't speak HTTP, and as a matter

of fact it's old-school C++ or

old-school COBOL that happens to run on

Linux, and what it assumes is that a file

shows up in the file system, it reads

that file, does this big processing on it,

and puts out a new file. you can have

your liveness probe look for the

file, in other words your test can

be: look for a file showing up in this

directory, of this format, and return a

good "we're up". so your liveness probe can be

almost anything. you will see examples

throughout the Kubernetes documentation

that just use files, does the file exist

or not exist, and that's live. but if

you're not alive, that's the first test,

it shoots you and starts you again, okay?
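
As a sketch of that file-based check: an exec-style liveness probe is just a command whose exit code decides alive or dead. The file name here is made up for illustration:

```shell
# the legacy app would drop this file when it is healthy
touch /tmp/output-ready

# the probe command: exit 0 means alive, anything else means
# Kubernetes shoots the container and restarts it
if [ -f /tmp/output-ready ]; then
  echo "alive"
else
  echo "dead" >&2
  exit 1
fi
```

In a pod spec this command would go under livenessProbe.exec.command instead of an httpGet.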

so just keep that in mind. so you always

want to return a good answer to live as

quickly as possible. you can see that it

only waits so long for you to return it:

it polls you every 5 seconds, it waits 10

seconds on the initial, you know,

deployment, polls you every 5 seconds, and,

you know, waits about two seconds for you

to respond. if you don't respond, as far

as it's concerned you're dead, okay.
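
Wired into the deployment YAML, probes with timings like those just mentioned look roughly like this; the paths and numbers mirror the talk but are assumptions, so treat this as a container-spec fragment, not a complete manifest:

```shell
# fragment of a Deployment's container spec; merge it into
# spec.template.spec.containers[0] of your own deployment YAML
cat > probes-fragment.yaml <<'EOF'
livenessProbe:
  httpGet:
    path: /              # the root: alive once the JVM answers
    port: 8080
  initialDelaySeconds: 10   # wait 10s after startup
  periodSeconds: 5          # then poll every 5s
  timeoutSeconds: 2         # ~2s to answer, or you're dead
readinessProbe:
  httpGet:
    path: /health        # ready: framework up, caches warm
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
EOF
```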

the readiness probe is a little bit

different, in that it also is being

polled on a regular basis. it assumes

you're alive,

but ready means you're ready to receive

load, you're ready to receive traffic

through the service, okay?

and this is a very important one to get

right, because I haven't showed you this,

but if you're watching

carefully, if I am actually interacting

with all those pods as they're dying and

coming back to life, my users are getting

error messages, my users are getting

503s in many cases, because those pods

are being recycled so much, my servers

are going down and coming back to life.

what I want is a situation where I can

do updates to my pods with zero downtime

for my users, they see no errors, and

the readiness probe is the magic.

if you are ready, meaning I return a 200

for ready, that means my JVM is up,

spring boot framework is up or vertex

framework or micro profile framework it

doesn't really matter you know that

everything is running properly because

it's calling your method your Java code

your node.js code your Python code and

more importantly you might have

connected to your database you might

have warmed up your caches you might

have calculated you know whatever you

got to calculate and you've you know

you're ready to receive load so this is

a very powerful concept and so I've used

this trick before for doing things like

failing over session state from one pod

to another because what happens is when

you're doing the rolling updating

kubernetes and when you're bringing up

the new pod it won't tear down the old

pod until it knows your new pod is ready

okay once your new pot is ready it tears

down the old pod and things fell over

but before that it waits to you're ready

so it's important that you understand

these guys so let me let me just let me

try to show you right now how does looks

let's see cube cuddle describe

deployment my boot let's go back and

pick up my boot again okay

I notice right now there's the config

map we showed you earlier there's life

and you know they request and the CPU

and all those things okay but there's no

live and readiness probe we've not

applied it yet so let's do this let's do

to turn no not that one wrong window not

watch while do I not have it there

there we go let's do my little loop

again so there it's looping and CSS whoo

jumbo now because we changed the

configuration okay so they were looping

against it let's watch our pods
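
That curl loop is roughly the following; the service name my-boot follows the demo, but the exact loop is an assumption:

```shell
# hammer the service and print the HTTP status code each time,
# so any downtime during pod recycling shows up as non-200s
IP=$(minikube ip)
PORT=$(kubectl get svc my-boot -o jsonpath='{.spec.ports[0].nodePort}')
while true; do
  curl -s -o /dev/null -w '%{http_code}\n' "http://$IP:$PORT/"
  sleep 0.5
done
```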

how many pods do I have running? I got three

pods running. now let's just mess

around with it a little bit, I'm just making

this up on the fly, by the way, if you're

wondering what's going on. let's go here,

I'm gonna mess around with it,

notice I had three replicas, I'm going to

take it down to one, and watch what

happens there: two of them are getting

killed, okay? I'm still looping, and the good

news is my loop didn't see any errors in

this case, because it basically was bound

to that first one automatically, so

that's good, okay,

because, depending on when you

mess with these things, when you don't

have the liveness and readiness probes, you

might see errors. so let's actually have

a little fun with this, okay? let's go

back to hello spring boot, okay, let's

look at this code, and it does have that

jumbo greeting, let's see, from spring boot.

I'm just gonna make a name change here,

put my name in, just gonna make a

difference,

okay. docker images, grep 9stepsawesome,

all right, my boot v1. docker build -t

9stepsawesome

my boot v2... wait, before we do

that, we're gonna first mvn clean

package, it does help when you compile

your code and build the fat jar. so one

thing I always recommend to people is

you actually test it real quick, you know,

before you throw it in production, right?

you should always test your code. so

let's test this code real quick, boot

demo, okay, curl localhost:8080, all right,

there it says "from Burr", right? okay,

that's good, that's what I wanted, and

let's go docker build now, docker build -

t 9stepsawesome my boot v2, I'm

gonna have the second docker image out

there now

in 9stepsawesome, okay. kubectl get

pods, all right, so there's the one pod,

it's still based, obviously, on the first

image, not based on the second image, but

we can update that. so kubectl edit

deployment my boot. listen, this works for

me, there are other ways to do this, but

I'm just going to try to hack it, to have

a little fun with it here: my boot v2,

I just changed the designation there.

so notice what happens, I'm getting error

messages, because it's tearing down the

old one as it brings up the new one, okay?

and notice also it says one running, one

for one, and there's still a lot of

error messages, that's because I don't

have the readiness probe. I was able

to roll that image to production rather

rapidly, but I did have outages, right?

my users would have seen errors, and we

don't necessarily want that, right? you

want to make it so they

don't. so I can roll back to v1, okay?

there's other commands, by the way, for

doing this sort of thing, I'm just kind

of doing it by hacking on the deployment.
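
The "other commands" hinted at here are the rollout family; the deployment and image names below follow the demo but are assumptions:

```shell
# roll the image forward without editing the YAML by hand
kubectl set image deployment/my-boot my-boot=9stepsawesome/myboot:v2

# watch the readiness-gated rollout complete
kubectl rollout status deployment/my-boot

# and roll back to the previous revision in one command
kubectl rollout undo deployment/my-boot
```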

so we'll go back to v1 again. you notice

it's tearing down the old one, bringing

up the new one. again, it thinks the new

one is ready when in fact it wasn't

ready, now it's ready, it takes a second

or two for that JVM to come to life,

that's kind of the idea. so here's what

we're gonna do next: we're going to

basically replace our deployment with

one that has the proper probes in it,

okay? kubectl replace -f kubefiles,

replace that, okay. again, you're gonna see

our pods redeploy, but notice now it's

taking a little time before it goes one

for one, it's lagging a little,

because it's actually running the

readiness probe, that's in that Java

virtual machine, that is actually my code.

now we should be good to go. all right, so

we're back on version one, it says

"from spring boot", and you can see it's

running right there. and as a matter of

fact, let's do this: kubectl

edit deployment

my boot, all right, let's actually

kind of crank this up a little bit, let's

actually have three replicas, just to

kind of see how that changes. all right,

you know, it should spawn three new

pods here, three new pods. again,

it says it's running, but zero for one;

the one for one is what matters, that's when

it passes its readiness check, its readiness

probe. now it's one for one, and then you

see it load balancing, now one, two, three,

and this one right here is this one

right here,

okay, and there's this new one right here,

that's this guy right here, okay? so

those are the two new pods brought to life.

and let's go back now and change our

image name again, so now we're gonna do a

rolling update. notice that it tore one down

immediately,

it's bringing up these new guys, it does

say pending, I might be out of resources,

so that could be part of the challenge for

me, that's often what pending means,

like, you might be out of resources

to make that available to run. let's see if

we get it to roll over anyway, if not I

can just tear down a couple pods, and

let's do that,

just to be on the safe side, let's go to

replicas two, because, you know, you have

to have enough headroom to do a

rolling update properly. let's see if we

can get it to spin over, there might just not

be enough. kubectl get pods, because I have

a lot of other things running here, in

yourspace, yeah, so let's delete the

Node.js, we could delete the namespace, yourspace,

free up some resources, okay,

maybe that'll give me a little bit more

room. there we go, because I got a little

more room, I removed this one set of pods and I

have a little bit more room, there may be

enough to get that guy up. again, we're

looking at the running 0 for 1, and will

it come up? there we go, and it

basically rolled over, and again, it

rolled over with no error message. so

that is another element: besides the

service and pod separation, I would

say this is another element that really

makes Kubernetes shine,

the fact that you have control, as the

developer or architect, to determine when

you are ready is a massive, massive win.

and we had lots of ways we did this with

old app server technology, you know, we

had our own solutions for this sort of

clustering in the old-school app

servers, but the fact that this works on

anything, Node.js, Python, Java, whatever,

is kind of awesome, okay. and you guys

actually got a chance to see me have to

debug a problem there, the pending, that

tells me I've run out of resources

someplace. let me try to clean up, because

I actually have a bunch of other stuff

running here,

kubectl get pods, all namespaces, and

there we go, I've got a bunch of things

running, all right, so that little VM I

started is running out of resources,

okay. let's kind of look at that code,

it's calling slash health, okay? slash

health basically is returning, you know,

200, "I'm okay", as long as it returns 200,

and you can put whatever you want in

here. like, in this case, you can see

there's a little logic, it's commented out,

that says, oh, you know, if you're

less than 5,

you know, return an unavailable, you

know, maybe randomly return some errors,

just to see if your system is behaving

correctly. and so, if it's not healthy,

it's out of the load balancing pool, if

it is healthy, it's in the load balancing

pool. pretty straightforward, okay? makes

sense? all right, fun stuff there. let's

see, let's go look, I just want to go look

at that file again real quick, and that

is... all I've done, if you notice, I've

kind of gone from the base deployment,

let's just walk through those again

real quick: the base deployment had

basically nothing more than my name, the

image name, and the port number, and then

we kind of went up to one with resource

constraints, right there. then we went to

one that was configuration-based, right,

a ConfigMap, as another example, and then

we went to another one that had the liveness

and readiness probes. so we're kind of just

building up our deployment YAML with the

capabilities we want to see, and you kind

of see this maps to slash health, this

maps to the root, and as long as those

return 200s, you're good to go, okay?

at any point they stop returning

200s, then you've got a node that you're

dealing with. now

there's a thing called canary we might

get to here in a second. what else? I

think that's mostly it, so we're

done with this little example. all right, let's

get back in here,

okay, we're showing all our nine steps, I

think we're doing okay on time. all right,

rolling updates, blue/green deployment. you

just actually saw a rolling update,

that's kind of what I simulated right

there. the concept of having liveness and

readiness probes in place means you can

change the image identifier, and there's

lots of commands to change the image

identifier. I just did the edit on the

deployment YAML, but there's also

other commands to do a rollout

or rollback, and you'll see it go through

that rolling update process. again, you

have to have more than enough available

memory, because you saw it actually got

stuck on me there, because it was trying

to use more memory than I had available,

but it will try to keep the state that

you require intact during a rolling

update. but you can also do some really

fun stuff, like a blue/green

deployment or a canary deployment. let's

go look at that file real quick. again,

everything's documented out here on

GitHub, hopefully you guys have checked

that out. I also have little polling

tools and things like that, like there's

a little poll my node, poll my boot, okay? right,

so instead of remembering the watch

command, I might get a poller, and actually

it looks for myspace, and I actually put

that in mydevoxx, let's go here, oh,

my boot, and it's basically looking for

myspace, but I called it mydevoxx, and

so that's going to be a problem... there we

go, all right. so there's some other

little tools and stuff like that I've

added to this that you can kind of go

use. and if you're dealing with Minishift,

right, there's a slight difference

between Minishift versus minikube, not

much, right, minishift ip versus minikube

ip, and that's why I had the two

different pollers there. but let's look

at this one called deployment techniques,

which is step eight, okay? all right, let's

go. I deleted yourspace earlier, let's

clean things up a little bit, get pods,

okay. let's actually kubectl edit

deployment, let's kind of trim some fat

here so I have enough room to play

around here, my boot, and replicas one,

okay, all right, we should see one of

those pods going away now, all right, good,

got that down to one,

and kubectl... well, actually, let's do

this,

okay, we're not gonna worry about that

right now. so let's go back to yourspace,

which is what I deleted, this is

what I've documented here, so let's go

ahead and bring Node.js back

online,

how about that: kubectl create

namespace yourspace. so you can create

namespaces via the YAML, or you just

create them from an imperative command

standpoint. namespaces, again, that is if

your systems administrator gives you

that privilege, okay. so, yourspace, and go

back to Node.js, which is running over

here, okay. I think I already have my

docker image, though, that

shouldn't have been touched,

the docker image should still be out here, my

node v1, all right, that's the v1, okay,

so kubectl create -f, sorry, I keep those

here, my node

deployment, okay, let's get that up, and

let's go ahead and just switch over to

yourspace. all right, kubectl get

pods,

all right... oh, am I not in yourspace?

oh, did I put that in the wrong place?

let's see, kubectl get ns... mydevoxx, yep, I

put it in the wrong place,

oh well, no worries. so I added it to this

other namespace, that shouldn't be a

problem, we'll just make

that work, okay. so we have our

different pods, let's look at our

deployments. so I have the my boot and

my node living in the same place now,

looks like we're right there. kubectl

create -f kubefiles, the my node

service, all right, my node, and kubectl

get services, we have it right here,

this is my node right here, that's the

one. so curl 192.168.99

dot 102,

and then the NodePort... oops, it

helps if you get the nine in there, there

we go. so here's the Node.js, our

Node.js application. so let's have a

little fun with this, okay? we have that

set in place,

okay, we're gonna build a version two of

it, bring this up. so by default it says

hello, we'll come over here, make it say

Bonjour,

all right. then let's double check that I

did that correctly before we deploy it,

and it's 8000, and now it says Bonjour,

fantastic. so I'll do a docker build -t

fantastic so I'll do a docker build - t

nine steps awesome my node B - we're

gonna make a second docker image right

and there's my node v1 and my node B -

five seconds go alright so that is the

my node V 2 so we have that out there

now let's look at the deployment file

for that I just want to double check

something okay you can kind of see

there's the mind node deployment and

there's the my node deployment new

notice the old one is v1 the new one is

v2 you see the difference there and

that's important because we're going to

do is have a little fun with this thing

so we have our two images now I'm gonna

Mesa cailli make my new deployment let's

do this watch cube CT I'll get pods okay

there's the node Jessup and running bus

alongside the spring but--one create -

Jeff come on cube files my ode

deployment new-new alright second

deployment now coming online notice

there's a new pod now called my node

knew if I basically come over here and

do this where's my little polar pull my

node

I want to fix this real quick because

it's gonna have the wrong namespace in

it I have my demux okay let's just run

this polar my node can't see there hello

hello notices it only says hello okay

and that's because while we have two

pods we have one service we have we have

one service all right that's the my node

service and it's basically got you know

on that note port ever do cube cut I'll

describe my service service my node

remember earlier we talked about the

endpoints right that's pointing to the

one pod that it basically maps to

because if I come over here sake you've

got will get pods show labels notice the

labels are slightly different I have my

node right here and my node new so that

second deployment gave me a different

label therefore it's not part of the

load balancer at this point in time but

I've deployed both okay I've deployed

one versus the other

and let's see if we can kind of show you

this real quick. so this is kind of a setup

of a blue/green, we'll just walk through

these slides real fast. I have both blue

and green, my node and my node new. the

concept is real simple:

I have checked something into my

source code repository, I've done my

build, you saw me do the quick build

there, if you will, I did my docker build,

but it moves from development to QA to

staging, and it lands on the available

slot. so blue and green are actually

arbitrary colors, I've seen

people use different colors here, you can

pick whatever colors you want,

it just simply means we

have both things running simultaneously,

side by side, but our users, based on the

load balancer, are only seeing the old

one, they're not seeing the new at all. so

I get to roll that to production, and

then I can decide, okay, now flip the

router over, and if anything fails, flip

it right back. so that concept of being

able to go back and forth is huge. it

doesn't mean you have to have double the

resources, right, but you do have to have at

least two chunks of memory and CPU

available to run these two workloads,

these are very, very lightweight Node.js

workloads in this case. but it does mean

you're protected should anything

fail, okay? you can go back to the

previous one.

so we have a lot of content on blue/green

deployment. let's go ahead

and run this guy here, okay? so I got

those two guys out there and we're

curling, we can go in there and you can

go check them out, but let's go ahead and

patch our service. what I'm gonna do is

update...

remember our describe service, the

selector says app my node, right here,

we're gonna just patch that. let me

just patch that, and now it's Bonjour,

okay? if I go look at my describe again:

app my node new, okay. if I come back here

and patch it again, that's my green, let's

say I didn't really want French, and go

back: hello. so your marketing department

might say, you rolled that out too fast, I

didn't want that Bonjour thing in

production. you say, fine, it's gone. it takes

about that level of effort.
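
The flip itself is a one-line patch of the Service's label selector; the names follow the demo (my-node, app labels my-node and my-node-new) but are assumptions:

```shell
# point the service at the green deployment's pods
kubectl patch service my-node \
  -p '{"spec":{"selector":{"app":"my-node-new"}}}'

# marketing didn't want Bonjour? flip it right back
kubectl patch service my-node \
  -p '{"spec":{"selector":{"app":"my-node"}}}'
```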

so the blue/green deployment is something

I want everyone to be practicing, because

in Kubernetes, at least, it is incredibly

simple, okay? it is super easy. if you had

BIG-IP F5 routers you could do this

sort of thing, if you have some really

fancy skills with HAProxy, NGINX,

and all those things, you could do it

also, but in this case it's just kind of

built into the architecture: you've got

to just flip your label

selector to match whatever new set of

labels you want, okay? to kind of

make that point a little bit more, though,

the concept of the label selector, let's see

if I can do this real quick, because I

actually set this up last night,

actually set my set up last night

thinking it'd be fun to show people is

it test yeah cube CTL get pods I have a

go a note and a Python all running here

okay and I have a service a single

service running on that port right there

so 109 102 2 and 3205 8 8 all right

there we go

so notice that's going between node and

python and python and python node seems

to like those two a lot and go its load

balancing across all three of those

right now

and so it doesn't care what the pod is

implemented as

this is routing traffic accordingly and

the trick to it is cube coddled get

services there's the service cube cuddle

describe service my service, look at the

selector, it basically says the selector

is app my pods, and, you know, in Kubernetes

people use app equals a lot, but

it's an arbitrary string, you could call

it burr equals sutter, you could call it,

you know, devoxx equals cool, it doesn't

matter what the string is, as long as you

know what it is, and it says app my pods.

and I say kubectl get pods

--show-labels, all right, and you can see

app my pods. so as long as the pod carries

the label, it shows up inside the load

balancer. that is the magic of the

service, even if they are completely

different implementations, Go code, Node.js

these are part of this project yeah get

a project that I've kind of set up for

you guys here so if we look here right

you can kind of see there's a go example

a micro profile example nodejs Python

spring boot spring movement fabricate

spring put a jib and there's a vertex

one also right because it doesn't matter

what the payload is all these same

properties apply that is kind of

awesomeness of kubernetes used to be

that only Javad people could play in

this category and had all these awesome

tools now everybody can play in this

category ok alright so that's kind of

your near built-in sorry your blue-green

deployment there is the concept of the

built-in canary and I'm trying to decide

I don't want to run we're going to skip

running it for now because it is kind of

involved and it works kind of oddly and

it's related to the readiness probe that

I showed you earlier so in this case you

can see it's all documented you can go

into the health method and actually add

something really bad like a system

exit ok something you should never add

but let's say a really bad programmer

was hacking away and really pissed off

as a matter of fact they quit today but

they checked that code in and it's

rolling through the automated deployment

pipeline and because you guys are all

awesome with blue-green deployments and

fully automated practicing you know

ci/cd it literally will go into production

as they walk out the door let's say you

have that level of automation right in

production it still won't actually

deploy because the health and readiness

probes are going to say you know what

I'm going to try to rolling update you

and if it can't pass the

health probe here with that system.exit

it basically won't deploy you you'll see

it continually trying to deploy but it

won't deploy okay so some people refer

to that as a canary deployment

within the context of kubernetes so

kubernetes is looking at the

liveness and readiness probes to try

to get your pod to start and if at any

point it won't start it just won't turn

the load balancer over to you so

people tend to think of that as a basic

canary deployment we'll show you a much

more interesting canary deployment in

istio though right so that's kind of the basics

of what you get out of the box with

kubernetes by default okay and then you
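
the readiness probe being described sits in the deployment's pod spec — a sketch, assuming an http health endpoint at /health on port 8080 (both are illustrative):

```yaml
# hypothetical readiness probe: the pod only joins the service's
# load balancer once this check passes, so a rollout whose /health
# endpoint dies (e.g. that system exit) never takes over traffic
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
```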

kind of just you know have a lot of fun

with it you'll see crash

loopbackoff things like that

and just gonna hack on it and have fun

there's also the ability to have many

deployments in the case of blue

green I had a deployment for blue a

deployment for green you can also deploy

a canary deployment if you will that's

a little bit different from that and

again you're just patching the

deployment so that it

applies the appropriate labels pretty

straightforward stuff okay all that is

certainly part of this world but

istio makes this vastly more

interesting vastly easier and much

more powerful let's kind of double

check all this okay all right we showed

you that blue green walked you through

that again it's all documented you guys

have homework now the new step nine is

databases the old step nine was

debugging debugging is relatively

straightforward assuming you get the right

port open you just gotta open up the

right port for the java debugger and then

you're fine okay and the fabric8

maven plug-in actually has

mvn fabric8:deploy for deployment

and mvn fabric8:debug for setting

up that same pod restarting it with the

debug port open it doesn't always work

consistently for me though and I'll tell

you that and oddly enough this is a Red

Hat tool right so I know the guy who

works on it I haven't had enough time to

sit down and pound on it figure out why it

doesn't always work for me consistently

or spend time with the engineer on that

topic but there's the thing that

came back from all my classes where

people were very curious about databases

okay how do I run a database as a pod

how do I run a database in this nice

ephemeral you know this cool cloud

native way under kubernetes and the good

news is criminais was always built from

the get

to think about this use case so unlike

12 factor you can still be cloud native

and stateful you don't have to be

stateless it just had to have the right

infrastructure doubled managed state

okay

and so database runs perfectly fine

inside kubernetes as long as you know

how to treat it properly and so there's

actually a team out there called crunchy

DB that does a lot in the space the

basically makes Postgres run as a quick

kubernetes native thing you basically

have these four steps you should think

about you notice the bladder two or the

ones you've already familiar with right

you have to have a deployment you have

to have a service no big deal there the

thing that's unique now is you have to

have a persistent volume and a

persistent volume claim those are the two

additional things so if you want to talk

to the file system inside your

application or in this case a database

talking to the file system you have to

have a PVC or persistent volume claim

meaning you're telling kubernetes I need

persistent storage and a volume that

lives and breathes forevermore and is

not ephemeral because by default if you

just start writing to the local file

system inside your container you're

writing to the local file system in that

container and you saw how we were

killing those pods so often you would

have lost all that data in the container

local file system one you're gonna run

out of space and two it gets recycled all

the time that is the point of the pod

the pod comes and goes all the time if

you want something external to the pod

separate from the pod PVC pv alright

so the PV is typically set up by your

systems administrator they're gonna say

okay I'm running ceph or gluster or

NFS or whatever file system that they

want inside there inside their data

center inside their cluster and then the

PV basically identifies what those

volumes are and the PVC is I want one

now and therefore if my pod comes up it

lays claim to it if it's available and

then once it's available you can start

writing to it okay so if we look at the
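
the pv/pvc pair just described might look like this — a minimal minikube-style sketch where the names, size, and host path are assumptions, not the exact manifests from the repo:

```yaml
# hypothetical PV: the admin-provided piece of storage
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  storageClassName: my-storage
  hostPath:
    path: /data/my-postgres-data
---
# hypothetical PVC: the pod's claim on that storage
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: my-storage   # must match the PV's storage class
  resources:
    requests:
      storage: 1Gi
```

the claim binds to the volume through the matching storage class, which is the mapping shown in the demo a moment later.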

database example

does walk you through that just briefly

I already have it up and running

alright you have this concept of the

persistent volume and persistent volume

claim let's see where did I leave that

guy here I have these things in demo I

think and then the duck you get all okay

yep so I basically what I have here is I

have Postgres running as a

as well as the little application

talking to that Postgres okay so you see

there's a Postgres pod and a my spring

boot pod that actually speaks through

you know speaks into it that actually

talks does no big deal there now notice

the pv and the PVC don't show up here

they kind of just don't really have a

namespace associated with them so get pv

all right there's the Postgres pv that

was created notice that it's been

claimed by Proust Krispies BC and cue

cuddle get PVC all right there it is

okay and post Chris PV that's basically

how it's mounted so if we go and look at

the code for this all right well let's

bring up the the the java code is pretty

straightforward nothing unusual here you

know pretty standard stuff and actually

I picked this up from another blog but

look at the resources that's part that's

interesting I have a local properties

where it has to talk to a local host

Postgres but the regular application

properties talks to a Postgres process

see they're from certain local host

versus Postgres and the only real

distinction is that there is a service

and you meant we might get services

called Postgres so that service

discovery you mentioned earlier still

works even if it's not and what you
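
the two property files being contrasted might look roughly like this — illustrative spring boot datasource settings, not the exact ones from the repo:

```properties
# local.properties — hypothetical: running on your laptop
spring.datasource.url=jdbc:postgresql://localhost:5432/mydb

# application.properties — hypothetical: inside the cluster the
# hostname is simply the kubernetes service name "postgres"
spring.datasource.url=jdbc:postgresql://postgres:5432/mydb
spring.datasource.username=postgres
spring.datasource.password=postgres
```

the only thing that changes between laptop and cluster is the hostname, because the service name resolves via kubernetes dns.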

think of as a normal pod well we showed

you so far it doesn't have to be a java

nodejs app it could be a database it

doesn't matter okay so postgres is the

name of the service and I'm basically

saying connect to that in my application

properties file and you can kind of

see I have a user id and password this is

all a pretty hacked up kind

of thing but it's pretty straightforward

here's my docker file nothing weird

there you know we've seen that

before and then what we want to do is

deploy that application we've already

deployed it okay so let's actually bring

this up and show you what that

deployment looks like so my boot

where'd it go here's the deployment and

you kind of see what's set up there okay

my boot deployment but let's look at the

Postgres deployment

right here and see this one actually has

a bunch of things in it it basically

defines the database the user and

password again you could have those

externalized in a config map or

something of that nature but basically

look here it basically maps this postgres

pvc to the pvc the persistent volume

claim so that concept is basically

saying kubernetes I need access

to disk and kubernetes is like okay let me

see if I can find available disk uh-huh you

got it we give it to you otherwise your

pod won't start ok your pod will fail if

it's trying to start and it says I need

disk and kubernetes just doesn't give

it you don't get it so it's just

like what you saw earlier with other

resources except in this case it is a

volume ok so that's what I did is I did
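
the mapping being pointed at — the deployment referencing the claim — is just a volume plus a volumeMount, something like this (names and mount path are illustrative):

```yaml
# hypothetical fragment of the postgres deployment's pod spec
containers:
- name: postgres
  image: postgres:10.5
  volumeMounts:
  - name: postgres-data
    mountPath: /var/lib/postgresql/data  # where postgres writes its files
volumes:
- name: postgres-data
  persistentVolumeClaim:
    claimName: postgres-pvc              # the claim described above
```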

the postgres deployment so we did the

docker pull we looked at the pvs

we ran the postgres pv we ran the postgres

pvc and notice you can see

when you first build the persistent

volume there's no claim to it as soon as

you add the pvc there's now a claim to

it

ok you kind of see the magic is

specifically the storage class that's

how it knows what to map so if I go over

here to the pv storage class my-storage

pvc storage class my-storage so whenever

you're creating a pvc you simply just

have to have the right storage class

identifier again typically the systems

administration person will actually

figure this out and then you can kind of

see how it maps this took me a little

while to figure out by the way I was

like I can't quite figure out how to map

it properly but you can kind of see with

the pv in the case of minikube I'm

mapping it at /data/my-postgres-data

and actually I'm curious

about something here there's something I

forgot to check last night when I was

playing around with this some more my

postgres data

oh alright let me in there am I in

the wrong directory again ok data

my-postgres-data

all right there's my postgres right if

you're familiar with postgres you're like

yeah that looks like a postgres those are

the files it drops out there on the file

system so literally my

postgres is up and running

it is now talking to that local

disk and that piece of disk is

guaranteed to be available based on my

systems administrator setting up the

right volumes okay whether that's ceph or gluster

all kinds of different options in the

storage space your cloud provider if you

deploy on a kubernetes provided by

Amazon or Google etc etc they're also

going to tell you what the storage

volumes ought to be if you look at you

know something from the Google team they

will show you exactly they'll tell you

here's this command here's how you mount

the volume and then you just have to

have the right pvc to access it okay so

it's pretty straightforward once you get

the rhythm of it no big deal there

okay let's do this let me show you a

couple of things here kubectl get

pods

there's my postgres pod let me show

you it is a real live running postgres

kubectl port-forward and it's five four

three two we're gonna map five four

three two out so port-forward is a

very powerful tool one you should

be aware of it basically means that the

port within the pod is now visible at

localhost so if I do that if I've done

this correctly and I don't know if I

have or not we'll find out

I should be able to get to it using pgadmin
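
the two tricks used in this part of the demo — port-forward and exec'ing into psql — look like this against your own cluster; the pod name and user are placeholders:

```shell
# forward the postgres port inside the pod to localhost:5432
kubectl port-forward postgres-<pod-id> 5432:5432

# in another terminal: exec into the pod and open psql
kubectl exec -it postgres-<pod-id> -- psql -U postgres
```

port-forward makes the pod reachable from laptop tools like pgadmin without exposing a service; exec gets you the database's own cli.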

let's see databases my database schemas

so this is the database that's living

underneath that java application and

using hibernate right it's using

hibernate create-or-update it's just the

one I've lost track of where I left that

one let's leave here exit go here I use

that code trick a lot you notice okay so

application properties

hibernate postgresql dialect ddl update

okay so that's how we got the schema

there to begin with

and you kind of see there's this schema

right here and if I come over to say

tools query tool I don't know if I have

any records let's see

select splat from questions go away there

alright so I do have one little question

in there ok so there's a whole java

application talking to this postgres

database with the postgres database

being you know mapped to disk that is now

available in kubernetes so stateful

applications not a problem you just have

to make sure that the application

is you know set up accordingly in this

case postgres is something that people

are running in production now on

kubernetes the team and the

team that we work with a lot is called

crunchy data I don't know if I have their

icon here anybody yeah these guys

they're kind of the enterprise-y

postgresql folks that basically

work to optimize it on kubernetes

in this case though I'm just using a

standard postgres that I downloaded from

docker hub so no big deal there if I

look at the deployment for that let me

figure out where my deployments went so if I

look at the deployment for postgres

where's the image name yeah so the image

name is just postgres 10.5 okay so

10.5 was pulled from docker hub so it's

nothing unusual about it but it has

been kind of configured now to work this

way okay let me make sure let's double

check a couple things here yeah connect to

it no problem you also have access to

psql inside it that's an important thing

to know kubectl exec with the pod

identifier and psql okay but

you're gonna have to have the right psql

command it's right here there's two

environment variables those environment

variables you saw earlier in the

deployment definition there we go so you

know there's my databases associated with

postgres so you can actually get inside

that now why would you do this sorry

how's that quit is it quit no backslash q there it

is now why would you do this sort of

thing I don't know about you guys but

when I develop an application

often most applications talk to a

relational database and I know

not everyone's going to mongo yet

right or redis most people still use a

relational database with you know select

splat that kind of thing what happens is

the database that one developer has is

sometimes unique for that developer

right they have their postgres

on their laptop you have your postgres

on your laptop they have another postgres

on their laptop the schemas might

be different but certainly the data

inside is different right the

anonymized data you pulled out of

production and dropped off for

developers to use you know if you've run

a bunch of tests and things like that

now the data is slightly different in

this case I can actually have a team

cluster a server where everybody can

talk to that same postgres if I want or

you know if I'm running my own local

cluster or my own namespace I can at least

use the same exact image associated with that

postgres or mysql or whatever it might

be whatever database at this point it

could even be sql

server despite that sql server runs on

linux and then I can pull it in and I

can interact with it right in other

words I don't have to set up postgres on

my machine anymore or mysql on my

machine or sql server or whatever

on my machine I can basically have it

running in the cluster so there's that

kind of win from a developer standpoint

to basically have that concept of a

database to interact with again I think

I documented all this pretty well okay

if you want to go check that out so

it does have a little endpoint available

to it get pods right so my boot right

there

so kubectl get services and this is

all part of the application that you

have to play with right so curl 192.168

dot something I need to walk my dog

she's about 20

pounds has really short legs so it's a

question and answer simple little

application right

that can help with things so that walks

you through all of that okay oh there's

one thing at the very end here I should

mention the world of kubernetes is

moving to this thing called operators so

this document walks you through

everything again fully manually it

should work on any kind of kubernetes

environment but there's a new concept

called an operator an operator is a

different kind of deployment a different

kind of controller so you're gonna hear

the term controller a lot in this world

and what it does is the operator has

intelligence in it and the operator can

be based on typically three different

things one it's all go code all written

in go and therefore that's probably not

something you want to try as a developer you

know unless you're really interested in

learning go and it uses the kubernetes

api directly but it's typically written

in go code or it might be an ansible

playbook or a helm chart those are

another two common ways to build an

operator but the operator has

intelligence inside it to not only

deploy everything correctly like make

sure the PV and PVC and all that are

right and ready to go but if you do a

rolling update of Postgres itself go

from 10.5 to 10.6 the operator can

ensure that you have zero downtime in

that or if you want to clone a database

a production database that's another

thing that the operator would handle so

the operator if you think about it is

really taking the place of a human

operations person it's trying to apply

intelligence at scale across the

cluster so if you have 14 different

Postgres databases and they all need to

move from 10.5 to 10.6 the operator can

help you do that right that's kind of

the crunchy data operator

and you're gonna see a lot more about

operators for instance the istio

installation I have running right now

was deployed via an operator and that's

the first time I've deployed istio via an

operator like in the last week so

operators are very new and you're

seeing them applied to everything now so

like if you're interested in kafka

running kafka on kubernetes

there's a project out there that gives

you an operator right called strimzi

gives you the operator to basically

deploy kafka at scale and it knows how

to do things like ensure we have

zookeeper running ensure we have kafka

running we have to have three of each

right production ready and again if

you have to do an update it has update

logic built in too that's kind of neat so

same techniques you saw everything done

manually but the operator kind of does

it in a more automated fashion so just be

aware that operators are kind of coming
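
to make the operator idea concrete, here's roughly what a strimzi custom resource looks like — a hedged, trimmed sketch; the exact apiVersion and required fields (listeners, storage) depend on the strimzi release:

```yaml
# hypothetical Kafka custom resource: you declare the cluster you
# want and the strimzi operator creates and maintains the kafka
# and zookeeper pods to match, including upgrade logic
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
  zookeeper:
    replicas: 3
```

the operator watches resources of this custom kind and reconciles the cluster toward them, the same way the built-in controllers reconcile deployments.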

okay all right so we still have a few

minutes left you guys are surprising me

really let me see let's check the

raffle here and see if anyone actually

is responding to our raffle oh hello

look at that okay so this is not the

contest yet let's see who would be

the winner here okay fantastic so that

looks like it meets the criteria we'll

come back to that in a second so we do

have a few of you now in the raffle

alright so a number of you there

fantastic we're gonna give away the two

Chromebooks in a second but we have a

few minutes left okay

a couple of things to show you in the

bonus category and I'm moving rather

fast I realize that but we're trying to

show you as much as possible let's walk

through a little bit of istio

there's a deeper dive session on istio

I think later today

someone's doing knative later today I

forget who's doing istio maybe they're doing

istio tomorrow I don't have knative

running right now but we could show you

a little bit of that knative is the kind

of thing that comes next after istio but

istio is also a Greek nautical term

it means sail so you hear all these Greek

terms within the kubernetes ecosystem

and istio is one of them

there's a great online learning

experience that my team has been working

on so if you don't want to install

anything but you want to put your hands

on istio you can go to learn.openshift

.com the service mesh scenario and just

try the things I'm going to show you

right now we also have a full github

tutorial the istio-tutorial repo where much

like you saw with all the nine steps code

examples this is even better because

it's not just me by myself it's a whole

team of people working on the tutorial

and it's basically a huge book to walk

you through all the different use

cases that we can think of and we

continue to add to it all the time we've

been working on that for a solid year so

do check those out the concept of the

service mesh though is very powerful the

service mesh is a piece of infrastructure

that sits on top of kubernetes you guys

have now seen most of the awesomeness

of kubernetes right with the deployment

and the service the fact that pods are

separate from the service the deployment

makes a declarative understanding of

what should be run and makes everything

run you got rolling updates you do

blue-green deployments you can do all

this awesome stuff right kubernetes by

itself is awesome but what if I add istio

to it and so istio allows me some

really clever routing logic as one

example and it also gives me some other

things out of the box like distributed

tracing and other telemetry information

I showed you this earlier okay so here's

my grafana I'm running over here

running against my application and I had

jaeger running over here I can look at

the tracing you know so there's things

that come out of the box that are

kind of neat already so now let me

flip over here and let's start

the poller again okay so I'm going to just

run you through a quick demo of istio

that way you get a feel for some of the

capabilities but again if you want to

try it it is a bit of a bear this is

fairly advanced just keep that in mind

if you're not comfortable with

kubernetes yet you got to get

comfortable with kubernetes and then you can

add istio to it because istio is

advanced all right but it gives you some

advanced capabilities as I mentioned

earlier you can run all these exercises

and I'll show you some quick examples

without having to install anything

though

and so you can watch it here okay so

let's do this yeah yeah let's do this

all right I got my little simple

application customer preference

recommendation let's kind of show

you what that is again real quick the

dashboard I'm gonna just bring up the

console here and this is running there's

istio-system you can see istio-system is

running this is just istio itself ok the

key pieces to it are pilot which is where

you identify your rules and set up your

route rules and things like that there's

also two components which

make up the telemetry thing the jaeger

collector elasticsearch has to work for

that one grafana is obviously part

of the telemetry capture as well but

there are two others statsd and

telemetry so these two components right

here are collecting data okay and this

other guy here pilot is more like

pushing data into the service mesh now

let's show you how this is set up get

pods okay let me get into the right

namespace oc project tutorial alright

now I do kubectl get pods notice

here it says two of two okay that's one

thing that's already distinct about this

before I have shown you lots of pods

already running but they were always one

of one and that's because there's a

sidecar container running in these guys

so if I describe the pod

let's look here there are some

additional goodies inside this guy here

let's see if we can find it look at this

so there's istio metapod name where'd

that come from you know there's a statsd

address zipkin these are not things I

put in my code okay these are not in my

deployment yamls like what

you'd expect to see this is new stuff

envoy is the binary the istio proxy so

when you basically set up for istio you

get this additional sidecar bolt-on

added as another container alongside

your container and what it does is it

manipulates ip

tables to intercept all inbound and

outbound network traffic and you

might be thinking why would you want to

do that well there's some really crazy

stuff you can do one is you can log

everything and capture all the telemetry

data get all the tracing data and you

get all that for free and that's what I

was showing you with those jaeger and

grafana

user interfaces okay

user interfaces okay right so you get

this stuff now see there we got it we're

getting our tracing based on my polar

you know we got some interesting stuff

there we should be getting all non 500s

right we're 100% uptime that's good

right I can see how long different

traces take and there's other tools

there's another tool that I don't have

running right now that you can interact

with call Kali that we've been working

on so the service graphing exactly you

can kind of see where all the

transactions are going and how they're

performing so you get that stuff kind of

just out of the box by having the the

sidecar and checked it properly but then

you can do some fun stuff with it okay

so let's do this and I think I call this

an example all right so I have

recommendation v1 preference v1

customer latest and let's go here let's

look at the application for

recommendation here's my code and let's

just make this real obvious let's call

this v2 devoxx make it big and obvious

all right I'm also gonna change the

logger where's the logging data here we

go

devoxx alright just make those changes

this is a little vertx application by

the way you kind of see it has a

declarative router basically set up

like nodejs

if you're familiar with the node.js

programming model then you'll really

like vertx it runs on the JVM it's

a reactive framework or reactive

toolkit for the JVM really awesome thing

and it's lightning fast that's what I like

about it

so mvn clean package let's

compile that guy so ultra lightweight

ultra fast so there we go compile

compile compile ok we should get a fat

jar there we go java -jar target

recommendation right our server is

already up localhost 8080 there you go v2

devoxx so that's going to be my new code

change

I want to push this into my production

environment now so docker images I

showed you this earlier let's double

check it real quick example okay so I

don't have a v2 so it's docker build -t

9stepsawesome recommendation v2 we're gonna do

our docker image build there what did I

do wrong there what did I type

docker built you guys didn't catch that

typo

come on now there's only a few hundred

of you here someone should have caught

that typo

all right so docker run -it -p 8080:8080

example I just want to double

check it this way too you notice I do that

right I try to run it normal

localhost and I also run it from the

docker daemon recommendation just to be

sure that I did make the change that I

think I made all right the curl does

not accept and it did recommendation v2

wait did I misspell it oh is that what I

called it

oh man that's from the other thing so

it's 9stepsawesome yeah it did do that all

right we can make that work let's

go fix this real quick this is why you

check see exactly why because now I've

messed things up we'll have to go fix it

all right

but good let's see here the minishift I

switched gears right I switched to mini

shift and that kind of threw me off a

little bit now we're gonna curl this one

okay and actually we don't have a

service for it yet but this should work okay

8080 alright so there it is alright so

there it is running as a docker

container and so I now need to run this

as a new deployment okay kubectl get

deployments kind of like I showed you

earlier with the blue versus green I

want to set this up as another

deployable entity okay and let me make

sure I do that right and this is again

all documented so I'm going to just go

pick it up from the documentation so I get

it exactly right so again we try to

document everything so let's go here

and this is setting up for

simple routing create the v2

recommendation

and you notice I have this deployment

yaml out here that I'm gonna interact with

but I did mess up the name of

the image so we're gonna fix that real

quick let's

do this you can see there's all these

different aspects of what istio lets you

do like rate limiting policy enforcement

you know things of that nature

but what I want is not the istio

files let's go into recommendation let's

go into kubernetes and this is deployment

v2 okay and where's the image name right

here 9stepsawesome all right so yeah we got

that right now so that's the deployment

I'm going to use to get a second

deployment out there so right now I have

the deployment of customer

preference v1 recommendation and

recommendation v1 I need to get a

deployment of recommendation v2

recommendation v2 okay and I'm gonna

manually do the istioctl kube-inject let's

do this one all right no such file I'm

in the wrong directory all right

recommendation here we go all right

there and we don't need these anymore

okay hacking away here like we're going

crazy

kubectl get pods all right we've got

a recommendation v2 coming online notice

this is one of two because only one of

those containers is ready it has to have

both containers ready before it's part

of the good thing here come on come on

two of two two of two there we go and notice

already it has the

proper label so it's all part of our

load balancer so you kind of get that

for free right we showed you that

already if you deploy a pod behind a

service it's automatically part of

the load balancer but let's have a

little fun with it now I want

everything to be version one

okay so even though I've deployed it I

don't want any of my users to see it so

it's still out there so I've now

basically deployed this canary

deployment but no user can see it it's

still running so this is a little like

your blue-green deployment but now you

can get a little more sophisticated in

the case of blue green or canary the

old-school way I had 50/50 load

balancing because I have two pods if I

have four pods I get 25 25 25 25

that's just what you get out of the box

for free in the case of istio you can

actually say 1% goes to the new pod okay

in this case we'll say 25% goes to the

new pod you can pick your percentage

approximately as it's random

allocation you can pick your percentage

regardless of your pod count so that's

one thing that's over and above and

unique about what istio provides out of the

marketing team says nope we don't want

that go back to version 1 only I can run

a script and I'm back to version 1 so

it's still running the pod out there and

it's kind of cool but you can kind of

basically see how much you want to

change it also I can do this thing here

like basically map it to Safari okay so

let's see here can't poll I want to get

my let's do this

okay so mini shift IP I want to see what

that is all right so I should better do

this girl and oh wait I already have

this thing running or I mean I've

already loaded in the browser let's go

here and look real quick all right yeah

okay so this is version 1 here on this

Firefox and in verse my safari hat we've

showed you this little demo earlier but

let's doing this one now

here's my safari and it's v2 okay so

this is an important point and we won't

have a lot more time left

but the concept is I can now deploy a

canary deployment and only a certain

subset of users can see it you can use

the http headers to determine who sees

what pod and that's another component of

istio as well so there's a lot more

capability that we don't have time to

show you istio by itself can take a three

hour presentation like the one you

just saw but this concept meaning I can

basically say only logged-in users see

the new version only beta testers see

the new version only users in the US

or Canada see the new version because

you can pick whatever http header

you want to set and decide exactly how

to route traffic based on that so that's
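
the header-based routing just described looks something like this — a sketch matching on the user-agent the way the safari demo does, with made-up host and subset names:

```yaml
# hypothetical VirtualService: safari users see v2, everyone else v1
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - match:
    - headers:
        user-agent:
          regex: ".*Safari.*"
    route:
    - destination:
        host: recommendation
        subset: version-v2
  - route:
    - destination:
        host: recommendation
        subset: version-v1
```

swap the header match for a cookie, a geo header, or a beta-tester flag and you get the logged-in-users / beta-testers / per-country routing mentioned above.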

again something that we didn't have in

base kubernetes that we now have with

something like istio in addition to

rate limiting circuit breaking right

all kinds of other things that you see

within the context of istio okay we

are going to run out of time though so just

be aware that you have those links in

this document right istio tutorial

istio intro that walks you through the

istio capability there's a presentation

on knative later I highly encourage you to

check it out because it kind of goes

even to the next level right it

brings serverless capabilities to this

kubernetes cluster of yours which is just

kind of awesome there's also the eclipse

che project that is basically a kubernetes

native ide running in kubernetes

and therefore you don't install anything

on your laptop all you have to do is

have a browser because the whole ide

runs inside the browser runs inside a

kubernetes pod and as you interact with

it including going into the terminal

you're interacting with the linux

machine on the kubernetes cluster so

it's a pretty powerful thing and that's

why I have these Chromebooks to give

away okay so here's what we're gonna do

I have two chromebooks to give away

we're gonna basically see what this

query returns what kind of results show up

someone basically say limit to two

results for devoxx and burr and these

are our two winners let's look here

alright let's see yep okay I've got

a nice image there you don't have to say

nice things by the way you could

say horrible things too all right

fantastic

all right those are our two winners so

those are the two ids we are out of time

but I'm available for questions after

the session I'll be here the rest of the

afternoon I got to go help get some

stuff set up at the Red Hat booth

if you have other things you want to see

from me feel free to let me know

I'm always interested in your feedback

please do use the application to rate

this session and say this guy talks too

fast or I didn't understand anything he

said or I can't believe I wasted three

hours this morning but at a minimum

hopefully I got you pumped up and ready

for devoxx that's cool thank you
