There is a pretty common issue that existed before Docker: "It works on my machine." We developers tend to build applications on whatever platform our development machine runs, be it Linux, Windows, macOS, etc., but production may be different. For example, most production servers run Linux, and if you develop your application locally on Windows, there is a ton of work you need to do to make it run correctly on the production system.
Adding to that, we developers tend to share codebases with our peers. One peer may be on
Windows, another on macOS or Linux, and in these cases it is difficult to make our application run locally across
all environments/OSes without complex changes.
Also, if your application depends on services like Redis, Postgres, MySQL, Mongo, etc., then every peer
you share the code with needs to download and install all of them, which is fine for some, but for others it is unwanted
or bloated software they have to install just for your application to work.
With Docker, setting up our development environment becomes easy across all platforms and environments; it is the one-stop solution to all the problems mentioned above. We will see below how Docker actually solves them.
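For example (a minimal sketch, assuming a Node.js app that needs Redis and Postgres; the image tags and ports here are just illustrative), a single docker-compose.yml lets every peer start the exact same dependencies with one command, docker compose up:

    # docker-compose.yml (hypothetical example)
    services:
      api:
        build: .                # build the app image from the Dockerfile in this repo
        ports:
          - "3000:3000"
        depends_on:
          - redis
          - postgres
      redis:
        image: redis:7          # peers no longer install Redis on their machines
      postgres:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: example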
Docker, at its core, runs containers from container images.
what is a container image? => A container image (or just image) is kind of an artifact, or in simple terms a zip file (technically it
is not) that contains all the details and dependencies of our application.
For example, consider you have a restaurant backend application built with Node.js, SQL and Redis. For this application to run
you need an OS, the application code, the Node runtime, build files, Redis, SQL, etc. The container image will
typically pack those files into an artifact: the underlying OS layer it needs to run, all the dependencies
(I mean node_modules), and if possible the built and compiled code.
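A minimal sketch of how such an image could be described (assuming a Node.js app with a package.json and an index.js; the file names and port are just assumptions):

    # Dockerfile (hypothetical sketch)
    FROM node:20-alpine            # base layer: a tiny Linux plus the Node runtime
    WORKDIR /app
    COPY package*.json ./
    RUN npm install                # bakes node_modules into the image
    COPY . .                       # copies the application code
    EXPOSE 3000
    CMD ["node", "index.js"]       # the process a container will start from this image

docker build turns this file plus your code into the image artifact described above.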
what is a container? => Now, in order to run the above container image (you need to do something
with the created image, right?), you need a container. A container is basically a running instance of that image: you hand it the image and it
runs whatever the image defines.
How a container runs => A container takes the container image, unpacks its layers (kind of like unzipping the artifact), and executes
the steps/command defined in it. For a container to run that image, it needs
something called a container runtime, similar to how you need the Node runtime to run Node code.
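Putting the two together (a sketch, assuming the Dockerfile above sits in the project root and the app listens on port 3000):

    docker build -t restaurant-api .         # package the code + dependencies into an image
    docker run -p 3000:3000 restaurant-api   # start a container (a running instance) from that image
    docker ps                                # list the running container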
Docker is nothing but a product; it is not a standard or a protocol. There are a lot of container engines and runtimes available,
like Podman, containerd, etc., and you could use them to create your own Docker.
The only thing you need is a service that builds container images, and then unpacks an image
and runs it in an isolated container using something like containerd.
So basically, containers are the concept, and Docker is one mainstream/popular product built on top of that
concept; containers are the real thing we are dealing with.
Containers are basically built from Linux kernel features like namespaces, cgroups, etc., so containers run
only on Linux, not on any other platform. That is why, when you use
Docker on macOS or Windows, you need to install something called Docker Desktop, which internally downloads a Linux
VM and then runs Docker on top of it.
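You can see this for yourself (assuming Docker Desktop is running): even on a Mac or Windows host, Docker reports that the OS it actually runs containers on is Linux:

    docker info --format '{{.OSType}}'   # prints "linux", even on macOS/Windows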
Linux namespaces are a kernel feature that lets a process (for example a shell) run in an environment isolated from our host environment.
For example, a namespace can be created using unshare: sudo unshare --mount --uts --pid bash => this will
start a separate shell with its own mount namespace, its own PID namespace, and its own UTS namespace (its own hostname, domain name,
etc.).
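A quick way to feel that isolation (a sketch; the hostname used here is just an example):

    sudo unshare --uts --pid --fork --mount-proc bash   # --fork/--mount-proc make the new PID namespace behave properly
    hostname demo-container                             # inside the new shell: only this UTS namespace sees the change
    hostname                                            # prints demo-container
    exit
    hostname                                            # back on the host: the original hostname is untouched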
At its core, Docker ends up doing roughly this at its lowest level (the actual namespace setup is done by runc, which containerd manages), but
on the whole this is what a container is and this is how Docker uses containers.
Basically, all the processes running on your Linux system live inside namespaces; by default they all share the host's initial namespaces.
To verify this, you can run: readlink /proc/$$/ns/pid ($$ expands to the PID of your current shell).
This command shows the PID namespace that process is linked to.
If you run it from the host and from inside a container, the namespace IDs will be different.
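For example (a sketch, assuming you have some image such as alpine available):

    readlink /proc/$$/ns/pid                        # on the host, e.g. pid:[4026531836]
    docker run --rm alpine readlink /proc/1/ns/pid  # inside a container: a different pid:[...] value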
This is why people say Docker is native to Linux (and in some way or another this is true),
since only Linux has namespaces (unshare).
On a Mac, if you want Docker, you need to install Docker Desktop, or install a Linux VM yourself and then install
Docker inside it.
MAC => docker-desktop -> linux vm -> docker cli -> dockerd -> containerd -> runc -> linux namespaces
Linux (Ubuntu, Fedora, etc.) => docker cli -> dockerd -> containerd -> runc -> linux namespaces
So on Linux you just need to install Docker (which in turn installs dockerd, containerd and runc).
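You can verify those pieces are there after installing (exact versions will differ, so treat this as a sketch):

    dockerd --version
    containerd --version
    runc --version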
As I mentioned above:
docker cli -> This is where we give commands like docker pull, docker run, etc. All these commands
are sent to dockerd (the engine) as REST calls over a unix socket.
For example: docker ps roughly translates to curl --unix-socket /var/run/docker.sock http://localhost/containers/json
dockerd -> This is the Docker daemon everyone talks about. It is responsible for taking those
commands and translating them into gRPC requests that it sends to containerd.
containerd: the container runtime responsible for the actual lifecycle of containers: pulling
images, creating snapshots, spawning containers, handling exec/logging, etc.
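You can peek at what containerd sees underneath Docker (on a standard Docker install its containers live in the "moby" containerd namespace; treat the commands as a sketch):

    sudo ctr --namespace moby containers ls   # the same containers docker ps shows, as containerd sees them
    sudo ctr --namespace moby images ls       # the image content containerd has pulled/stored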
runc: This is the low-level runtime that actually creates the container process by making Linux
system calls like unshare/clone, and sets up the namespace and cgroup isolation.
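To watch runc work without Docker in the loop, you can build a bare OCI bundle and run it yourself (a sketch based on the usual runc workflow; busybox and the bundle path are just example choices):

    mkdir -p mybundle/rootfs
    docker export $(docker create busybox) | tar -C mybundle/rootfs -xf -   # borrow a root filesystem from an image
    cd mybundle
    runc spec                 # generates a default config.json (the OCI runtime spec)
    sudo runc run demo        # runc makes the unshare/clone calls, sets up cgroups, and drops you into a shell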