An image is a lightweight, stand-alone, executable package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and config files.
A container is a runtime instance of an image—what the image becomes in memory when actually executed. It runs completely isolated from the host environment by default, only accessing host files and ports if configured to do so.
Containers run apps natively on the host machine’s kernel. They have better performance characteristics than virtual machines, which get only virtualized access to host resources through a hypervisor. Containers get native access, each one running in a discrete process, taking no more memory than any other executable.
Source: https://docs.docker.com/engine/getstarted/step_four/#step-3-learn-about-the-build-process
Build the app
We are ready to build the app. Make sure you are still at the top level of your new directory. Here’s what ls should show:

$ ls
Dockerfile  app.py  requirements.txt
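For reference, the Dockerfile in this directory looks roughly like the sketch below. It is reconstructed from details used later in this walkthrough (the python:2.7-slim base image and the EXPOSE 80 / -p 80 mapping), so your copy may differ slightly:

```dockerfile
# Use an official Python runtime as a parent image
FROM python:2.7-slim

# Set the working directory and copy the current directory into the image
WORKDIR /app
ADD . /app

# Install the packages listed in requirements.txt
RUN pip install -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Run app.py when the container launches
CMD ["python", "app.py"]
```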
Now run the build command. This creates a Docker image, which we’re going to tag using -t so it has a friendly name.

docker build -t friendlyhello .
Where is your built image? It’s in your machine’s local Docker image registry:

$ docker images
REPOSITORY            TAG                 IMAGE ID
friendlyhello         latest              326387cea398

Tip: You can use the commands docker images or the newer docker image ls to list images. They give you the same output.
Run the app
Run the app, mapping your machine’s port 4000 to the container’s published port 80 using -p:

docker run -p 4000:80 friendlyhello

You should see a message that Python is serving your app at http://0.0.0.0:80. But that message is coming from inside the container, which doesn’t know you mapped port 80 of that container to 4000, making the correct URL http://localhost:4000.
Go to that URL in a web browser to see the display content served up on a web page, including “Hello World” text, the container ID, and the Redis error message.
Note: If you are using Docker Toolbox on Windows 7, use the Docker Machine IP instead of localhost. For example, http://192.168.99.100:4000/. To find the IP address, use the command docker-machine ip.
You can also use the curl command in a shell to view the same content.

$ curl http://localhost:4000
<h3>Hello World!</h3><b>Hostname:</b> 8fc990912a14<br/><b>Visits:</b> <i>cannot connect to Redis, counter disabled</i>
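That HTML comes from the tutorial’s app.py, which uses Flask and Redis. As a rough, stdlib-only illustration (not the tutorial’s actual code), a server producing a similar response could look like this:

```python
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_page():
    # Mirrors the markup shown in the curl output above; inside a
    # container, gethostname() returns the container ID.
    return ("<h3>Hello World!</h3>"
            f"<b>Hostname:</b> {socket.gethostname()}<br/>"
            "<b>Visits:</b> <i>cannot connect to Redis, counter disabled</i>")

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = render_page().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# Binding 0.0.0.0:80 inside the container is why the app reports
# http://0.0.0.0:80 even though the host reaches it on port 4000:
# HTTPServer(("0.0.0.0", 80), Handler).serve_forever()
```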
This port remapping of 4000:80 demonstrates the difference between what you EXPOSE within the Dockerfile and what you publish using docker run -p. In later steps, we’ll just map port 80 on the host to port 80 in the container and use http://localhost.
Hit CTRL+C in your terminal to quit.

On Windows, explicitly stop the container
On Windows systems, CTRL+C does not stop the container. So, first type CTRL+C to get the prompt back (or open another shell), then type docker container ls to list the running containers, followed by docker container stop <Container NAME or ID> to stop the container. Otherwise, you’ll get an error response from the daemon when you try to re-run the container in the next step.
Now let’s run the app in the background, in detached mode:
docker run -d -p 4000:80 friendlyhello
You get the long container ID for your app and then are kicked back to your terminal. Your container is running in the background. You can also see the abbreviated container ID with docker container ls (and both work interchangeably when running commands):

$ docker container ls
CONTAINER ID        IMAGE               COMMAND             CREATED
1fa4ab2cf395        friendlyhello       "python app.py"     28 seconds ago
You’ll see that the CONTAINER ID matches what’s shown on http://localhost:4000.
Now use docker container stop to end the process, using the CONTAINER ID, like so:

docker container stop 1fa4ab2cf395
Share your image
To demonstrate the portability of what we just created, let’s upload our built image and run it somewhere else. After all, you’ll need to learn how to push to registries when you want to deploy containers to production.
A registry is a collection of repositories, and a repository is a collection of images, sort of like a GitHub repository, except the code is already built. An account on a registry can create many repositories. The docker CLI uses Docker’s public registry by default.

Note: We’ll be using Docker’s public registry here just because it’s free and pre-configured, but there are many public ones to choose from, and you can even set up your own private registry using Docker Trusted Registry.
Log in with your Docker ID
If you don’t have a Docker account, sign up for one at cloud.docker.com. Make note of your username.
Log in to the Docker public registry on your local machine.
$ docker login
Tag the image
The notation for associating a local image with a repository on a registry is username/repository:tag. The tag is optional but recommended, since it is the mechanism registries use to give Docker images a version. Give the repository and tag meaningful names for the context, such as get-started:part2. This will put the image in the get-started repository and tag it as part2.
Now, put it all together to tag the image. Run docker tag image with your username, repository, and tag names so that the image will upload to your desired destination. The syntax of the command is:

docker tag image username/repository:tag
For example:
docker tag friendlyhello john/get-started:part2
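To make the username/repository:tag notation concrete, here is a small illustrative helper (not part of the tutorial, and ignoring registry hostnames and digests) that splits a reference the way the docs describe, defaulting the tag to latest when omitted:

```python
def parse_image_ref(ref):
    """Split a username/repository:tag reference into its parts.

    Illustrative sketch only: real Docker reference parsing also
    handles registry hostnames, ports, and digests.
    """
    name, _, tag = ref.partition(":")
    username, _, repository = name.rpartition("/")
    # Docker assumes :latest when no tag is given.
    return username or None, repository, tag or "latest"

print(parse_image_ref("john/get-started:part2"))  # ('john', 'get-started', 'part2')
print(parse_image_ref("friendlyhello"))           # (None, 'friendlyhello', 'latest')
```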
Run docker images to see your newly tagged image. (You can also use docker image ls.)

$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
friendlyhello latest d9e555c53008 3 minutes ago 195MB
john/get-started part2 d9e555c53008 3 minutes ago 195MB
python 2.7-slim 1c7128a655f6 5 days ago 183MB
...
Publish the image
Upload your tagged image to the repository:
docker push username/repository:tag
Once complete, the results of this upload are publicly available. If you log in to Docker Hub, you will see the new image there, with its pull command.
Pull and run the image from the remote repository
From now on, you can use docker run and run your app on any machine with this command:

docker run -p 4000:80 username/repository:tag
If the image isn’t available locally on the machine, Docker will pull it from the repository.
$ docker run -p 4000:80 john/get-started:part2
Unable to find image 'john/get-started:part2' locally
part2: Pulling from john/get-started
10a267c67f42: Already exists
f68a39a6a5e4: Already exists
9beaffc0cf19: Already exists
3c1fe835fb6b: Already exists
4c9f1fa8fcb8: Already exists
ee7d8f576a14: Already exists
fbccdcced46e: Already exists
Digest: sha256:0601c866aab2adcc6498200efd0f754037e909e5fd42069adeff72d1e2439068
Status: Downloaded newer image for john/get-started:part2
* Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
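The “Already exists” lines show Docker skipping layers it already has locally. It can do this because layers and manifests are content-addressed by SHA-256 digest, like the Digest: sha256:... line above. A tiny illustration of that idea (the real digest is computed over the image manifest, not an arbitrary string):

```python
import hashlib

def digest(data: bytes) -> str:
    # Content addressing: identical bytes always hash to the same ID,
    # so an unchanged layer never needs to be downloaded again.
    return "sha256:" + hashlib.sha256(data).hexdigest()

print(digest(b"layer contents"))
```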
Note: If you don’t specify the :tag portion of these commands, the tag of :latest will be assumed, both when you build and when you run images. Docker will use the last version of the image that ran without a tag specified (not necessarily the most recent image).
No matter where docker run executes, it pulls your image, along with Python and all the dependencies from requirements.txt, and runs your code. It all travels together in a neat little package, and the host machine doesn’t have to install anything but Docker to run it.

Get Started, Part 3: Services
Estimated reading time: 8 minutes
Prerequisites
- Get Docker Compose. On Docker for Mac and Docker for Windows it’s pre-installed, so you’re good to go. On Linux systems you will need to install it directly. On pre-Windows 10 systems without Hyper-V, use Docker Toolbox.
- Read the orientation in Part 1.
- Learn how to create containers in Part 2.
- Make sure you have published the friendlyhello image you created by pushing it to a registry. We’ll use that shared image here.
- Be sure your image works as a deployed container. Run this command, slotting in your info for username, repo, and tag: docker run -p 80:80 username/repo:tag, then visit http://localhost/.
Introduction
In part 3, we scale our application and enable load-balancing. To do this, we must go one level up in the hierarchy of a distributed application: the service.
- Stack
- Services (you are here)
- Container (covered in part 2)
About services
In a distributed application, different pieces of the app are called “services.” For example, if you imagine a video sharing site, it probably includes a service for storing application data in a database, a service for video transcoding in the background after a user uploads something, a service for the front-end, and so on.
Services are really just “containers in production.” A service only runs one image, but it codifies the way that image runs—what ports it should use, how many replicas of the container should run so the service has the capacity it needs, and so on. Scaling a service changes the number of container instances running that piece of software, assigning more computing resources to the service in the process.
Luckily it’s very easy to define, run, and scale services with the Docker platform: just write a docker-compose.yml file.

Your first docker-compose.yml file
A docker-compose.yml file is a YAML file that defines how Docker containers should behave in production.
Save this file as docker-compose.yml wherever you want. Be sure you have pushed the image you created in Part 2 to a registry, and update this .yml by replacing username/repo:tag with your image details.
version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: username/repo:tag
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:
This docker-compose.yml file tells Docker to do the following:
- Pull the image we uploaded in step 2 from the registry.
- Run 5 instances of that image as a service called web, limiting each one to use, at most, 10% of the CPU (across all cores), and 50MB of RAM.
- Immediately restart containers if one fails.
- Map port 80 on the host to web’s port 80.
- Instruct web’s containers to share port 80 via a load-balanced network called webnet. (Internally, the containers themselves will publish to web’s port 80 at an ephemeral port.)
- Define the webnet network with the default settings (which is a load-balanced overlay network).
Run your new load-balanced app
Before we can use the docker stack deploy command we’ll first run:

docker swarm init

Note: We’ll get into the meaning of that command in part 4. If you don’t run docker swarm init you’ll get an error that “this node is not a swarm manager.”
Now let’s run it. You have to give your app a name. Here, it is set to getstartedlab:
docker stack deploy -c docker-compose.yml getstartedlab
Our single service stack is running 5 container instances of our deployed image on one host. Let’s investigate.
Get the service ID for the one service in our application:
docker service ls
You’ll see output for the web service, prepended with your app name. If you named it the same as shown in this example, the name will be getstartedlab_web. The service ID is listed as well, along with the number of replicas, image name, and exposed ports.
A single container running in a service is called a task. Tasks are given unique IDs that numerically increment, up to the number of replicas you defined in docker-compose.yml. List the tasks for your service:
docker service ps getstartedlab_web
Tasks also show up if you just list all the containers on your system, though that will not be filtered by service:
docker container ls -q
You can run curl -4 http://localhost several times in a row, or go to that URL in your browser and hit refresh a few times.

Either way, you’ll see the container ID change, demonstrating the load balancing; with each request, one of the 5 tasks is chosen, in round-robin fashion, to respond. The container IDs will match your output from the previous command (docker container ls -q).
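The round-robin selection can be pictured with a short illustrative sketch (the container IDs here are made up; swarm’s real ingress routing is more involved):

```python
from itertools import cycle

# Hypothetical IDs standing in for the 5 running web tasks.
tasks = ["1fa4ab2cf395", "8fc990912a14", "a2d13f05c9e1",
         "c77a1e2dd1b0", "e90b0f3a7c44"]

chooser = cycle(tasks)  # each request is handed to the next task in turn

def handle_request():
    return next(chooser)

# Seven requests: the rotation wraps around after all 5 tasks have served once.
responses = [handle_request() for _ in range(7)]
print(responses)
```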
Running Windows 10?
Windows 10 PowerShell should already have curl available, but if not you can grab a Linux terminal emulator like Git BASH, or download wget for Windows, which is very similar.
Slow response times?
Depending on your environment’s networking configuration, it may take up to 30 seconds for the containers to respond to HTTP requests. This is not indicative of Docker or swarm performance, but rather an unmet Redis dependency that we will address later in the tutorial. For now, the visitor counter isn’t working for the same reason; we haven’t yet added a service to persist data.
Scale the app
You can scale the app by changing the replicas value in docker-compose.yml, saving the change, and re-running the docker stack deploy command:
docker stack deploy -c docker-compose.yml getstartedlab
Docker will do an in-place update; there is no need to tear the stack down first or kill any containers.
Now, re-run docker container ls -q to see the deployed instances reconfigured. If you scaled up the replicas, more tasks, and hence more containers, are started.
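For example, assuming the docker-compose.yml shown earlier, scaling from 5 to 10 instances is a one-value edit before re-running the deploy command:

```yaml
    deploy:
      replicas: 10   # was 5; re-run docker stack deploy to apply
```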
Take down the app and the swarm
- Take the app down with docker stack rm:

docker stack rm getstartedlab

- Take down the swarm:

docker swarm leave --force
It’s as easy as that to stand up and scale your app with Docker. You’ve taken a huge step towards learning how to run containers in production. Up next, you will learn how to run this app as a bona fide swarm on a cluster of Docker machines.
Note: Compose files like this are used to define applications with Docker, and can be uploaded to cloud providers using Docker Cloud, or on any hardware or cloud provider you choose with Docker Enterprise Edition.
Recap and cheat sheet (optional)
To recap, while typing docker run is simple enough, the true implementation of a container in production is running it as a service. Services codify a container’s behavior in a Compose file, and this file can be used to scale, limit, and redeploy our app. Changes to the service can be applied in place, as it runs, using the same command that launched the service: docker stack deploy.
Some commands to explore at this stage:
docker stack ls # List stacks or apps
docker stack deploy -c <composefile> <appname> # Run the specified Compose file
docker service ls # List running services associated with an app
docker service ps <service> # List tasks associated with an app
docker inspect <task or container> # Inspect task or container
docker container ls -q # List container IDs
docker stack rm <appname> # Tear down an application
docker swarm leave --force # Take down a single node swarm from the manager