Run docker images locally with minikube

Building docker images locally and running them on minikube

I’d like to share 2 tricks with you for locally testing docker images.

This post is docker-focused.

Trick 1:

docker-compose

Lean on docker-compose for your local building and tagging of images.

When you think docker-compose you’re probably thinking that you can run your images locally as containers and test them locally.

However, docker-compose can also be very useful for building and tagging images locally:

Example:

Create a file called: Dockerfile

Add the following contents to the file:

FROM nginx:latest
EXPOSE 80

That’s it, we’ll test using this simple nginx image.
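
If you want to prove later that it’s really your image being served, you could optionally copy a custom page into it. A minimal tweak, assuming you create an index.html next to the Dockerfile:

FROM nginx:latest
COPY index.html /usr/share/nginx/html/index.html
EXPOSE 80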

Create a file called: docker-compose.yaml

Add the following contents to the file:

version: "3.9"
services:
  nginx:
    image: localtest:v0.0.1
    build: .
    ports:
      - "80:80"

Run with docker-compose:

$ docker-compose up -d

You can check that your container is running:

$ docker ps

Now check your images:

$ docker images

You should now see your image built and tagged and available locally:

REPOSITORY   TAG       IMAGE ID       CREATED   SIZE
localtest    v0.0.1    a1dcd6663272   xxx       133MB
nginx        latest    6084105296a9   xxx       133MB

Now you can view this in your browser:

Go to: http://localhost:80

Trick 2:

minikube

Run this locally built image on minikube.

Let’s get your local environment ready to run the image on minikube.

Make sure your minikube is running:

$ minikube status

Run this command:

$ eval $(minikube docker-env)
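
Note that this points your docker CLI at minikube’s own Docker daemon, so an image built earlier against your host daemon won’t be visible inside minikube. A quick check, and a rebuild inside minikube’s daemon if the image is missing:

$ docker images | grep localtest   # should list localtest:v0.0.1
$ docker-compose build             # re-run the build so the image exists in minikube's daemon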

Run the container:

$ kubectl run localtest --image=localtest:v0.0.1 --image-pull-policy=Never

View pods:

$ kubectl get pods

You should see your pod creating and running:

NAME        READY   STATUS              RESTARTS   AGE
localtest   0/1     ContainerCreating   0          4s

After a few seconds:

NAME        READY   STATUS    RESTARTS   AGE
localtest   1/1     Running   0          27s

If you don’t see that, check that you ran “eval $(minikube docker-env)” in the same shell before building and deploying.

Can you create a deployment.yaml file and run it? Sure! Just set imagePullPolicy to Never:

Create a file called: deployment.yaml

Add the following contents to the file:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: localtest
  name: localtest
spec:
  replicas: 1
  selector:
    matchLabels:
      app: localtest
  template:
    metadata:
      labels:
        app: localtest
    spec:
      containers:
      - image: localtest:v0.0.1
        name: localtest
        imagePullPolicy: Never
        ports:
        - containerPort: 80

Create the deployment on minikube (remember to check you’re connected to your minikube cluster):

$ kubectl apply -f deployment.yaml
$ kubectl get deployment,pod
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/localtest   1/1     1            1           63s

NAME                             READY   STATUS    RESTARTS   AGE
pod/localtest                    1/1     Running   0          14m
pod/localtest-55888c9fc7-j8mkx   1/1     Running   0          63s

Your pod will have a different name from “localtest-55888c9fc7-j8mkx”; copy your pod’s name and use it in place of this value below.

You can test your newly deployed container:

$ kubectl port-forward localtest-55888c9fc7-j8mkx 8080:80

Except this time we’ve port-forwarded the pod’s port 80 to local port 8080.

Go to: http://localhost:8080

(This was a bonus tip: you can test pods with port-forward without creating a service.)
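
If you do want a service, a quick way to create one locally is kubectl expose plus minikube service. A hedged sketch, assuming the deployment above:

$ kubectl expose deployment localtest --type=NodePort --port=80
$ minikube service localtest --url   # prints a URL you can open in your browser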

References:

https://minikube.sigs.k8s.io/docs/commands/docker-env/

https://kubernetes.io/docs/concepts/containers/images/#updating-images

https://medium.com/bb-tutorials-and-thoughts/how-to-use-own-local-doker-images-with-minikube-2c1ed0b0968

Kubernetes and Kong with a Kong Dashboard, locally

I quickly threw this together just to see if I could get it working on my local machine using docker for mac and kubernetes.

It’s pretty rough, but just putting it here in case anyone needs the same info I pulled together.

This is for local testing with NodePort, not for production or cloud use.
I also used postgres.

Kong kubernetes setup documentation here:

https://docs.konghq.com/install/kubernetes/

Steps to set up kong locally using kubernetes and docker for mac

Enable kubernetes with docker for mac

  • Click on docker preferences
  • Click on the Kubernetes tab
  • Select the enable kubernetes checkbox and click on the kubernetes radio button

Note: Make sure kubernetes has access to the internet; if it does not start up, check your internet connection. If you run on a VPN that has strict security firewalls, that might be preventing kubernetes from installing.

Update type to NodePort

In order for kong to run locally you need to update the service type from LoadBalancer to NodePort.

Also make sure the kong version you are using is supported by the kong dashboard image. At the time of writing, only kong versions under 0.14 are supported, so I updated the kong tag to 0.13 in the yaml scripts.
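
The NodePort change looks something like this. A hedged excerpt; the exact service name, ports and labels depend on the yaml files you are using:

apiVersion: v1
kind: Service
metadata:
  name: kong-proxy
spec:
  type: NodePort        # changed from LoadBalancer for local use
  ports:
  - name: kong-proxy
    port: 8000
    targetPort: 8000
    protocol: TCP
  selector:
    app: kong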

Yaml files

Grab the yaml files from here:

https://github.com/CariZa/kubernetes-kong-with-dashboard

Commands:

kubectl create -f postgres.yaml    

kubectl create -f kong_postgres.yaml

kubectl create -f kong_migration_postgres.yaml

Check the service ip for kong-admin:

kubectl get svc

Copy the ip of the kong-admin service and paste it into kong_dashboard.yml as an “args” value, eg:

When you run “$ kubectl get service” you might get this response:

    NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
    ...
    kong-admin         NodePort    10.101.71.20     <none>        8001:30916/TCP   46m
    ...

What you want to take is the CLUSTER-IP and the first part of the PORT(S):

10.101.71.20:8001

You will add it in the kong_dashboard.yml file, in the “args” list around line 34:

args: ["start", "--kong-url", "http://10.101.71.20:8001"]

Then create the kong-dashboard:

kubectl create -f kong_dashboard.yml

To check if your dashboard runs correctly, check the logs.

First get the full pod name for kong-dashboard:

kubectl get pods

It will be something like “kong-dashboard-86dfddcfdf-qgnhl”.

Then check the logs:

kubectl logs [pod-name]

eg

kubectl logs kong-dashboard-86dfddcfdf-qgnhl

You should see

    Connecting to Kong on http://10.101.71.20:8001 ...
    Connected to Kong on http://10.101.71.20:8001.
    Kong version is 0.13.1
    Starting Kong Dashboard on port 8080
    Kong Dashboard has started on port 8080

If you only see

    Connecting to Kong on http://10.101.71.20:8001

It might still be starting up or your internal kong-admin url could be incorrect. Remember the url is the kubernetes internal url.
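
Rather than hard-coding the cluster IP, you could also point the dashboard at the service’s in-cluster DNS name, which stays stable even if the service is recreated. Assuming kong-admin lives in the default namespace:

args: ["start", "--kong-url", "http://kong-admin.default.svc.cluster.local:8001"]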

Test the dashboard works

You should be able to access your kong-dashboard using the service port:

kubectl get service

Grab the port from the kong-dashboard service; it will be the second port (the NodePort):

NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
...
kong-dashboard     NodePort    10.97.55.180     <none>        8080:30719/TCP   1h
...

In this case the port value is 30719

So the url will be:

http://localhost:30719

Note

This is for local testing with NodePort, not for production or cloud use.

Screenshots

This is what I see on my side at the time of publication.

I added a test api entry, pointed at a service I was running on kubernetes.

These are the settings I added for the test.

I got the url by checking the service value:

kubectl get service

I get the values:

hello-kubernetes   NodePort    10.106.125.184   <none>        8080:30281/TCP   22h

I used “10.106.125.184” and “8080” as the “upstream_url”.

And I could then access the internal route using the kong-proxy ip and the path I set, “/test”.

Eg:

http://localhost:32246/test

localhost:32246 -> Kong Proxy url
/test -> The path I told the kong_dashboard to use and redirect to the internal url “10.106.125.184:8080”
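
A quick way to verify the route from your machine, assuming your kong-proxy NodePort is 32246 as above:

curl -i http://localhost:32246/test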

Docker tutorial for user-defined networks

How to get docker containers to communicate without using --link

A quick docker tutorial to help you transition from using --link to user-defined networks.

If you have been trying to get docker containers to communicate with each other and you are investigating the --link option, you may have come across this warning message:

Warning: The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link. One feature that user-defined networks do not support that you can do with --link is sharing environment variables between containers. However, you can use other mechanisms such as volumes to share environment variables between containers in a more controlled way.

Source: https://docs.docker.com/network/links/#communication-across-links

Here is a quick way to get docker containers to communicate using docker networks.

Why should containers communicate?

The long term plan is to microservice your monolithic projects. Make them smaller, more testable, more reusable and more maintainable.

Once you have your collection of microservices, you might want to test that they can use other microservices, or communicate amongst each other.

In my case, I often spin up new tools to play around with in isolation. And I need those tools to communicate with other tools. This is where the docker network comes in handy.

Networking Like a Docker Boss

A better way to get containers to communicate is to create a user-defined network.

The user being you, and the network will by default be a bridge network.

The command to create a docker network:

$ docker network create [yournetworknamehere]

So you add in the name you would like to give your network, eg “mynet”.

$ docker network create mynet

Check your network is created by typing in:

$ docker network ls

You should see something like this:

NETWORK ID     NAME     DRIVER   SCOPE
851fb69ba4ca   bridge   bridge   local
fc3d1eddc10f   host     host     local
f7151c7835b8   mynet    bridge   local
9ba12ad3dcea   none     null     local

You will see your new network “mynet” has been added and by default it is a bridge network.

That’s all we need to get containers communicating with each other.

Inspect the docker network

Check what is currently on the network by running:

$ docker network inspect mynet

And have a look at the section that says:

"Containers": {

}

If you just created your network, you should see an empty section called “Containers” near the bottom of the inspect response. This indicates that currently there are no containers on this network.

Add containers to your custom user network

Check which containers you need to communicate with each other by running the docker ps command:

$ docker ps

Get the ids or names of the containers from that list.

In my case I needed a container running jenkins to be able to communicate with a container running artifactory:

CONTAINER ID   IMAGE                                            COMMAND                  CREATED        STATUS        PORTS                                              NAMES
16ec3d7051dd   docker.bintray.io/jfrog/artifactory-oss:latest   "/entrypoint-artifact"   42 hours ago   Up 19 hours   0.0.0.0:8081->8081/tcp                             artifactory
2387b9d5e4df   jenkins/jenkins:lts                              "/sbin/tini -- /usr/l"   4 days ago     Up 4 days     0.0.0.0:8080->8080/tcp, 0.0.0.0:50000->50000/tcp   tender_perlman

So I took the ids of the two containers: 16ec3d7051dd and 2387b9d5e4df.

Then you need to add those containers to your newly created custom user network:

$ docker network connect mynet 16ec3d7051dd
$ docker network connect mynet 2387b9d5e4df

The syntax is:

$ docker network connect [yournetworkname] [yourcontainerid]
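
You can also attach a container to a network at creation time instead of connecting it afterwards. A small sketch, using a throwaway nginx container as the example:

$ docker run -d --name mynginx --network mynet nginx:latest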

Inspect the docker network again

If you run your inspect command again:

$ docker network inspect mynet

You should now see your newly connected containers:

...
"Containers": {
    "16ec3d7051ddbe58f6984d83e4d099390efa22fafd44d70bd843fb99d75dcd0f": {
        "Name": "artifactory",
        "EndpointID": "3630e109771441c422fc99a616f0888463a02ea3afc21ab5b60719cdd2b08729",
        "MacAddress": "02:42:ac:11:00:02",
        "IPv4Address": "172.17.0.2/16",
        "IPv6Address": ""
    },
    "2387b9d5e4dfb1073c8db90052fb9a4692fa227c55441163b08de64eddc27955": {
        "Name": "tender_perlman",
        "EndpointID": "fe4d002c5497339cc9117a7a3d997a1e57fedb171a93b25d4fa34c34788cfa3a",
        "MacAddress": "02:42:ac:11:00:03",
        "IPv4Address": "172.17.0.3/16",
        "IPv6Address": ""
    }
},
...

Now you have achieved your goal of getting your docker containers to communicate.

Use ping or curl to check your containers can communicate

In the response from the inspect command you should see your containers each have an “IPv4Address”.

Copy just the IP.

You can now use the docker exec command to test that you can ping the other container:

$ docker exec -it 16ec3d7051dd ping 172.17.0.3

And you should see:

PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: icmp_seq=0 ttl=64 time=0.122 ms
64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.122 ms
64 bytes from 172.17.0.3: icmp_seq=2 ttl=64 time=0.107 ms

Yay!

If you don’t see that, just check your details: use the correct container id/name, and the correct ip (which you get by copying the IPv4Address from the “docker network inspect mynet” command you ran).
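
A bonus of user-defined networks is built-in DNS: containers on the same network can resolve each other by container name, so you don’t even need the IP. A hedged example, assuming curl is available inside the jenkins container:

$ docker exec -it 2387b9d5e4df curl -I http://artifactory:8081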

That was a quick overview on one of the ways to get docker containers to communicate.

 

WordPress and docker – Using phpmyadmin with docker and wordpress and mysql

This is a bonus post. I got curious about running a phpmyadmin instance and realised it was easy to get running.

If you’ve gone through my previous posts, or perhaps you already have an instance of wordpress running and a database with persistence set up, you might now want to add a mysql administration tool into the mix.

For this I did not go out of my way to write a full docker-compose.yml file. I only wanted a phpmyadmin instance running locally for testing any wordpress development.

To do this I ran:

$ docker run \
    --name myadmin \
    --network dockerwordpress_default -d \
    --link dockerwordpress_db_1:db -p 8080:80 phpmyadmin/phpmyadmin

The steps involved

The important part is making sure you link to your db and that you run the phpmyadmin container on the same network as the db.

First see what the name of your db container is:

$ docker ps

You should see something like this:

CONTAINER ID   IMAGE              COMMAND                  CREATED        STATUS        PORTS                  NAMES
81ba5a434e49   wordpress:latest   "docker-entrypoint..."   23 hours ago   Up 23 hours   0.0.0.0:8000->80/tcp   dockerwordpress_wordpress_1
549f46a13b6b   mysql:5.7          "docker-entrypoint..."   23 hours ago   Up 23 hours   3306/tcp               dockerwordpress_db_1

Your mysql container is the important one for this post. Copy the name; in my case it is “dockerwordpress_db_1”.

Check your networks:

$ docker network ls

You should see something like this:

NETWORK ID     NAME                      DRIVER   SCOPE
1db6773f4c13   bridge                    bridge   local
938a9daa793d   dockerwordpress_default   bridge   local
d8e0c5970cdb   host                      host     local
40e39b778c65   none                      null     local

In my case I would use “dockerwordpress_default” as the network for my phpmyadmin container.

Using docker run

Now you know enough to run the command that I showed at the start of the post:

$ docker run \
    --name myadmin \
    --network dockerwordpress_default -d \
    --link dockerwordpress_db_1:db -p 8080:80 phpmyadmin/phpmyadmin

Explanations of the important parts:

--link dockerwordpress_db_1:db

dockerwordpress_db_1:db links the container “dockerwordpress_db_1” (the mysql database service you would have started in this post: Running wordpress and mysql using docker compose) into the phpmyadmin container under the alias “db”.

--network dockerwordpress_default

dockerwordpress_default is the network name you got when you ran docker network ls above.
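
Since --link is legacy (see the user-defined networks post above), an alternative worth trying is to drop it and instead point phpmyadmin at the db container by name, using the image’s PMA_HOST environment variable; containers on the same user-defined network resolve each other by container name:

$ docker run \
    --name myadmin \
    --network dockerwordpress_default -d \
    -e PMA_HOST=dockerwordpress_db_1 -p 8080:80 phpmyadmin/phpmyadmin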

Viewing phpmyadmin

If your phpmyadmin container started up successfully you should now be able to connect to it on port 8080.

If your docker is linked to your localhost:

http://localhost:8080

Alternatively, find your ip by running:

$ docker-machine ip

Then

http://yourdockerip:8080

Log in to phpmyadmin using the details you used in the docker configurations:

Eg

MYSQL_USER=wordpress
MYSQL_PASSWORD=wordpress

WordPress and docker – Developing wordpress themes using docker volumes

As promised, here is part 2 of my docker & wordpress posts. Here is how you could develop wordpress themes using docker.

I’m going to quickly run through how to use docker for wordpress theme development without losing any changes when you stop docker or switch off your computer.

In order to make sure data persists even when your docker container is no longer running, you need to set up a volume.

You can read the official docker docs here: https://docs.docker.com/engine/admin/volumes/volumes/

I tweaked the docker-compose file below to map the container’s “wp-content” folder to the directory where you have your docker-compose.yml file.

version: '3'

services:
   db:
     image: mysql:5.7
     volumes:
       - ./db_data:/var/lib/mysql
     restart: always
     env_file: 
       - ./.env

   wordpress:
     depends_on:
       - db
     image: wordpress:latest
     volumes:
       - ./wp-content:/var/www/html/wp-content
     ports:
       - "8000:80"
     restart: always
     env_file: 
       - .env


You can view your wordpress site in your browser by going to:

http://localhost:8000

or if you need to use an ip check your docker-machine ip by running:

$ docker-machine ip

See the previous post for how to set up the .env file.

You should see two directories created in the same directory as your docker-compose.yml file:

db_data
wp-content

This is not 100% ideal. For simplicity I’ve set up these folders in the same directory. I will explain in more detail why at the end of the post.

Creating your wordpress theme with docker volume

Now that you have a persistent volume set up you can start to tweak the theme.

You will find the themes in the ./wp-content/themes folder.

In the admin (go to http://localhost:8000/wp-admin, or http://yourmachineip:8000/wp-admin) you will see the themes listed in the Appearance – Themes section.

You can easily delete themes you don’t want, and refresh and see they will no longer be listed in the Themes section in the admin.

You can add new themes or make changes to existing themes, then refresh, and you will see the changes in the browser.
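
As a quick test, you can drop a minimal theme skeleton into the volume; wordpress only needs a style.css with a theme header and an index.php. The theme name below is just an example:

$ mkdir -p wp-content/themes/mytheme
$ cat > wp-content/themes/mytheme/style.css <<'EOF'
/*
Theme Name: My Test Theme
*/
EOF
$ cat > wp-content/themes/mytheme/index.php <<'EOF'
<?php echo 'Hello from my test theme'; ?>
EOF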

A better way to handle volumes:

So as I mentioned above, you would not necessarily want your volumes within the same folder as the project files you have now containerized using docker.

For one, you will want to turn your final work into a tagged image and run that image on a production-ready server.

Docker-compose is for development, and should not be your final go-live strategy.

Your volumes should also either point to another container or to a safe, secure place on your computer or server.

If we’re thinking big picture, and thinking about the deployment part: you will usually have your docker-compose files separate from your projects. Maybe for simplicity’s sake you decide to keep them with your project files; that’s fine, but make sure you consciously think about why you want them there and what the benefits of that structure are (I’m just hypothetically asking).

You should always have your volumes in a safe place, perhaps a dedicated server space with recovery tactics in place like regular backups, mirroring and clustering. There are many ways to tackle secure, fail-safe voluming. If you are planning a project for a client that will go out into the world, make sure you have planned the entire deployment ecosystem ahead of time.

A badly managed volume becomes a single point of failure, and one of the main 101s of cloud infrastructure and proper devops thinking should be to remove as many single points of failure as possible.

For the purpose of this blog post though, I’ve kept all of that out. My aim is to allow you to test the concept of wordpress and voluming with as little effort as possible.

I will create a follow up post on how to create a docker image of your code, push your tweaked code into an image, and then you can run from a fully packaged image rather than from a folder.

You’re now dabbling in the realm of containers, so you should be thinking in “image” and moving away from the thought process of “I have x amount of files to push to a server”, but like I say, I’ll touch on that in a follow up post.

Enjoy 🙂 May you experience lots of success in your containerizing.

Just a note on some struggles I had:

In order to test the volumes for this post, I deleted the volumes a lot and restarted my computer. I shut down the containers and started them again. I really tried to break the volumes. For the most part the volumes persisted well; I had to purposely delete the volume and all the files in order to stop it from persisting.

While I did all that, my environment started to lag and I noticed the wordpress container sometimes started up before the database container; if I then tried to load the wordpress site, it did not show up. Since the wordpress container ran before the database was properly configured, I had to restart the wordpress container to get it to connect to the database container again.

 

Docker and WordPress: Running wordpress and mysql using docker compose

Curious about how to work with docker and wordpress? Here is a quick walkthrough on how to use docker to run wordpress and mysql using docker compose.

In the next post I’ll show how you can volume the themes and edit them in your development process. And in a follow-up post I will show how to run your own wordpress image in kubernetes and in minikube. 🙂

You can get a fairly generic yml file from the official docker website:

https://docs.docker.com/compose/wordpress/

I took the configurations and made a few small tweaks:

docker-compose.yml

version: '3'

services:
   db:
     image: mysql:5.7
     volumes:
       - ./db_data:/var/lib/mysql
     restart: always
     environment:
       MYSQL_ROOT_PASSWORD: mysql_root_password
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD: wordpress

   wordpress:
     depends_on:
       - db
     image: wordpress:latest
     ports:
       - "8000:80"
     restart: always
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_USER: wordpress
       WORDPRESS_DB_PASSWORD: wordpress


The above configuration will set up wordpress and mysql with the variable values in the “environment” sections. It will map wordpress from the internal port 80 to the external port 8000. It also creates a volume mount from the internal location of “/var/lib/mysql” to the external location of “./db_data”. This will help with persistent data storage (you won’t lose your data if your container stops).

By “internal” I mean within the container’s environment and by “external” I mean the environment running the docker container, in my case this would be my mac.

Then run the docker compose script:

$ docker-compose up -d

Check the docker containers are running:

$ docker ps

You can then view the wordpress install by checking your docker ip and then pointing to the external port (in this case it will be 8000).

Get your docker’s ip:

$ docker-machine ip

The first time you run this, you will be asked to fill in some details about your wordpress blog. The next time you run it, it should all be prepopulated (provided you haven’t deleted your “db_data” folder).

You can also test that the data is persistent by deleting the docker containers and images, then downloading the images again and running the docker-compose script. If everything starts up the same way you left it, your persistent data is working.
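
A rough sketch of that test, assuming you are in the directory with your docker-compose.yml:

$ docker-compose down --rmi all   # remove the containers and their images
$ docker-compose up -d            # pull and recreate; the site should come back configured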

I made improvements to the script by pulling the environment variables out into a .env file. This helps make the containers more customizable. Compare the “environment” sections above with the “env_file” lines in the v2 file below to see what moved.

docker-compose.yaml (v2)

version: '3'

services:
   db:
     image: mysql:5.7
     volumes:
       - ./db_data:/var/lib/mysql
     restart: always
     env_file: ./.env

   wordpress:
     depends_on:
       - db
     image: wordpress:latest
     ports:
       - "8000:80"
     restart: always
     env_file: ./.env


Your .env file:

MYSQL_ROOT_PASSWORD=mysql_root_password
MYSQL_DATABASE=wordpress
MYSQL_USER=wordpress
MYSQL_PASSWORD=wordpress

WORDPRESS_DB_HOST=db:3306
WORDPRESS_DB_USER=wordpress
WORDPRESS_DB_PASSWORD=wordpress

To test

Check if your .env variables imported correctly by running the docker exec command:

docker exec -it CONTAINER-ID bash

Then inside the container you can run:

echo $WORDPRESS_DB_USER

If that returns just an empty line, check the env_file settings and the contents of the file.
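
You can also ask docker-compose to print the fully resolved configuration, which shows whether the .env values were substituted:

docker-compose config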

Check your container id by running:

docker ps

Some useful scripts

Stopping all containers (this will stop ALL, so use with caution):

$ docker stop $(docker ps -aq)

Removing all containers (use with caution, this will remove all containers):

$ docker rm $(docker ps -aq)

Deleting all images (use with caution! this will remove all images):

$ docker rmi $(docker images -q)

Github

View this code on github:
https://github.com/CariZa/DockerComposeWordpress

Tips to get docker set up on Ubuntu

So I recently tried setting up docker on ubuntu and ran into a few hurdles. Below is a checklist of what needed to be set up or installed on Ubuntu in order to run a docker project.

Make sure these are installed/set up:

docker
docker-compose
virtualbox
docker-machine

Setting up the docker-machine default ip

Then, using docker-machine, you need to set up an IP to use in a web browser.

There is more thorough documentation on the docker website. The summary of what I found worked:

Run the following commands:

docker-machine create --driver virtualbox default
docker-machine env default
eval $(docker-machine env default)

Get the ip address of the default machine:

docker-machine ip default

That ip is what you will be using in your browser to view the docker files.

When you run your docker container:

docker-compose up

Then you will be provided with a port number, eg 127.0.0.1:8000.

You then combine the ip above with that port number: for example, if docker-machine ip default returned 192.168.99.100, you would browse to http://192.168.99.100:8000.

See the other docker posts for solutions to some of the errors I ran into and fixed.

Setting up a new django project with Docker and getting gunicorn error “No module named…”

Just putting this here. I don’t understand docker fully yet; I have just managed to get it working for the first time. The solution I found for the “No module named…” error was to run the following command:

docker run -p 5000:5000 registry:latest