# Investigating mesos

As a side project I have started looking at mesos and the mesosphere ecosystem. This blog post documents the first steps to take to get a mesos deployment running locally via bosh-lite.

## What is mesos?

From the mesos website:

> Apache Mesos abstracts CPU, memory, storage, and other compute resources away from machines (physical or virtual), enabling fault-tolerant and elastic distributed systems to easily be built and run effectively.

In practice it is a platform for scheduling workloads across a number of nodes that offer compute resources.

As a platform it doesn't make scheduling decisions itself. Instead, frameworks register with it and get offered resources; each framework then makes the scheduling decisions for the particular kinds of workloads it is designed to manage.

You can find an architectural overview [here](http://mesos.apache.org/documentation/latest/architecture/).

## Running mesos locally

The easiest way to get a complete mesos cluster running locally is by deploying it onto bosh-lite.
[cloudfoundry-community/mesos-boshrelease](https://github.com/cloudfoundry-community/mesos-boshrelease) has up-to-date versions (as of Sep. 2016) of mesos, zookeeper, and marathon (a framework for orchestrating containers on mesos).

Make sure your bosh-lite is running and you have uploaded a recent stemcell.
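If you still need a stemcell, the warden stemcell can be uploaded straight from bosh.io (the URL below redirects to the latest version):

```
bosh upload stemcell https://bosh.io/d/stemcells/bosh-warden-boshlite-ubuntu-trusty-go_agent
```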

```
git clone https://github.com/cloudfoundry-community/mesos-boshrelease
cd mesos-boshrelease
bosh target 192.168.50.4:25555
bosh upload release releases/mesos/mesos-6.yml
bosh update cloud-config templates/warden-cloud.yml
bosh deployment templates/deployment.yml
bosh status --uuid | pbcopy
vi templates/deployment.yml
```

Before deploying you must paste the director UUID (now on your clipboard) into the `deployment.yml` file:

```
director_uuid: <director-uuid>
```
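If you'd rather script that step, a sed one-liner can splice the UUID in directly (a sketch assuming BSD sed, as shipped on macOS, and that the placeholder sits on a single `director_uuid:` line):

```
# overwrite the director_uuid line with the real UUID of our director
sed -i '' "s/^director_uuid:.*/director_uuid: $(bosh status --uuid)/" templates/deployment.yml
```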

Then continue with the deployment (`-n` makes bosh skip the confirmation prompt):

```
bosh -n deploy
```

This is a good time to take a coffee break. During the deployment process mesos will be compiled, which can take a long time: over 25 minutes on my MBP (3.1 GHz, 16 GB RAM).

Welcome back! Hope the deployment went smoothly!

```
% bosh vms
Acting as user 'admin' on 'Bosh Lite Director'
Deployment 'mesos-deployment'
Director task 563
Task 563 done
+-------------------------------------------------------+---------+----+---------+-------------+
| VM                                                    | State   | AZ | VM Type | IPs         |
+-------------------------------------------------------+---------+----+---------+-------------+
| marathon/0 (5b25e15f-5d25-4c37-b887-c0036ca0a8b4)     | running | z1 | medium  | 10.244.10.5 |
| marathon/1 (d50dc3a5-6572-4a42-9868-9bf25ee3d2bd)     | running | z2 | medium  | 10.244.11.5 |
| marathon/2 (86262046-f74e-4cca-a24f-1bd74a36f633)     | running | z3 | medium  | 10.244.12.5 |
| mesos-agent/0 (1bdf8491-7504-4ab1-8084-908bd10b950d)  | running | z1 | medium  | 10.244.10.4 |
| mesos-agent/1 (413a3989-570e-4fd3-8bf4-9f73a953bca8)  | running | z2 | medium  | 10.244.11.4 |
| mesos-agent/2 (dc1b927a-49eb-4a7b-bfab-3545f30f9aa3)  | running | z3 | medium  | 10.244.12.4 |
| mesos-master/0 (366ddf6f-0e91-444b-bc0c-41384461929e) | running | z1 | medium  | 10.244.10.3 |
| mesos-master/1 (55b89a73-8645-45e0-bce8-d1e869d79cab) | running | z2 | medium  | 10.244.11.3 |
| mesos-master/2 (a6da5a7e-7cb7-4797-9b32-754e01620cc2) | running | z3 | medium  | 10.244.12.3 |
| zookeeper/0 (afb7ae65-33bd-4c43-9b5e-2787946b2066)    | running | z1 | medium  | 10.244.10.2 |
| zookeeper/1 (485d6645-b521-4add-9d53-51801a7e1061)    | running | z2 | medium  | 10.244.11.2 |
| zookeeper/2 (e3c34158-af90-40a0-a12a-4eadf0fa70db)    | running | z3 | medium  | 10.244.12.2 |
+-------------------------------------------------------+---------+----+---------+-------------+
VMs total: 12
```

Looks like we have:

- 3 zookeeper vms, used as an HA key-value store for leader election and persisted state.
- 3 mesos-master vms for HA.
- 3 mesos-agent vms for running workloads.
- 3 marathon vms for orchestrating container workloads, also in an HA setup.
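Before poking at mesos itself, the zookeeper ensemble can be sanity-checked with zookeeper's four-letter-word commands, assuming `nc` is available on your machine:

```
# a healthy node answers "imok"
echo ruok | nc 10.244.10.2 2181

# shows, among other things, whether this node is the leader or a follower
echo srvr | nc 10.244.10.2 2181 | grep Mode
```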

## Interacting with mesos

In order to access the network that the bosh-lite garden containers are on, be sure to run

```
bin/add-route
```

from the root of the bosh-lite repo before proceeding.
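A quick ping against one of the deployment IPs confirms the route took effect:

```
ping -c 1 10.244.10.3
```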
We can curl any of the mesos-master nodes to get information about the running cluster.

```
% curl -L 10.244.10.3:5050/state | jq
(omitted)
```

We use `-L` to follow redirects to the current leader.
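The full state dump is rather noisy; jq can pull out a few interesting fields. Treat the field names below as a sketch of what this mesos version returns in `/state`:

```
curl -sL 10.244.10.3:5050/state | \
  jq '{leader: .leader, agents: .activated_slaves, frameworks: [.frameworks[].name]}'
```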

Opening

```
open http://10.244.10.3:5050
```

will show us the default mesos ui. (Due to [a bug](https://issues.apache.org/jira/browse/MESOS-5911) redirection to the leader will not work in a browser, so you may have to visit `10.244.11.3:5050` or `10.244.12.3:5050` instead.)
Clicking on [agents](http://10.244.10.3:5050/#/agents) will show a list of registered agents.
Clicking on [frameworks](http://10.244.10.3:5050/#/frameworks) will show you that 1 framework (marathon) is registered.
You can find the [web-ui](http://10.244.11.5:8080/ui/#/apps) for marathon by clicking on the framework id and then the link next to `Web UI`. The username and password for HTTP basic auth when accessing the marathon ui are `marathon` and `marathon`.
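The same credentials work against marathon's REST API, which is a handy way to confirm it is reachable before we use it below (`/v2/info` is marathon's standard info endpoint):

```
curl -s marathon:marathon@10.244.10.5:8080/v2/info | jq '{name: .name, version: .version}'
```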
## Deploying a container via Marathon
To wrap up this introduction to mesos we will run an actual workload via marathon. For this we will deploy an alternative web-ui for mesos.

```
git clone https://github.com/Capgemini/mesos-ui
cd mesos-ui
vi marathon.json
```

Edit the `ZOOKEEPER_ADDRESS` entry in `marathon.json` to point to our local deployment:
"ZOOKEEPER_ADDRESS": "10.244.10.2:2181,10.244.11.2:2181,10.244.12.2:2181"
Then:

```
curl -X POST -H 'Content-Type: application/json' -d @marathon.json marathon:marathon@10.244.10.5:8080/v2/apps
```

to have marathon deploy the container.
Go back to the marathon ui to watch the deployment process (it can take a few minutes because the docker image has to be downloaded).
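You can also follow along from the command line; `/v2/deployments` lists in-flight deployments, and fetching the app (assuming its id in `marathon.json` is `mesos-ui`) shows where the instances landed:

```
# in-flight deployments; an empty array means the app is up
curl -s marathon:marathon@10.244.10.5:8080/v2/deployments | jq .

# host and ports of each running instance
curl -s marathon:marathon@10.244.10.5:8080/v2/apps/mesos-ui | jq '.app.tasks'
```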
Once it's deployed you can find the newly deployed mesos-ui by clicking through the marathon ui: applications -> mesos-ui -> instance-id -> endpoints.
Congratulations, you have run your first workload on a locally running mesos cluster.
