How to launch an instance on OpenStack (II): OpenStack CLI


The OpenStack project is a libre software cloud computing platform for private and public clouds, which aims to be simple to implement, massively scalable, and feature rich. OpenStack provides an Infrastructure as a Service (IaaS) solution through a set of interrelated services. Each service offers an application programming interface (API) that facilitates this integration.

Users getting started with OpenStack may find this post and the related ones useful, as they show different ways to launch an instance on OpenStack. The previous post showed how to do it using the OpenStack Dashboard (Horizon). This post starts from the same scenario and reaches the same result, but using the OpenStack command line interface (CLI) instead of Horizon. Both new users who need a deeper understanding and sysadmins facing OpenStack for the first time can find this post useful.

Scenario

The procedure is quite general, but it is worth noting the specific parameters used:

  • OpenStack Grizzly (2013.1) deployed on Debian Wheezy with the gplhost repositories, but there should be no major differences with other distros or later OpenStack releases.
  • OpenStack Quantum with the Open vSwitch plugin in a “Per-tenant routers with private networks” setup:
    • The router has IP 10.0.0.1, which is the default gateway for all instances, and is able to reach public networks.
    • Floating IP network 172.22.196.0/22
    • When an instance is launched, a fixed IP from the 10.0.0.0/24 subnet is assigned.
  • username: bisharron
  • tenant name: proy-bisharron
  • Authentication url: http://172.22.222.1:5000/v2.0/

The starting point is illustrated in the following figure, which shows the router connected to the external and internal networks:

[Figure: initial network]

Software

The OpenStack python-novaclient package must be properly installed on the client computer. If you are using Debian or a Debian-derived distribution, this is as simple as:

# apt-get install python-novaclient
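
On other distributions, the client can usually be installed from PyPI instead. As a sketch, assuming pip is available on the system:

# pip install python-novaclient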

Setting environment variables

Every time the nova client is used, authentication information must be provided as parameters of the request. A typical example of a nova request and the corresponding response is:

$ nova --os-username bisharron --os-password asdasd --os-tenant-name proy-bisharron --os-auth-url http://172.22.222.1:5000/v2.0/ secgroup-list
+---------+-------------+
| Name    | Description |
+---------+-------------+
| default | default     |
+---------+-------------+

Fortunately, these authentication parameters can be defined as environment variables:

$ export OS_USERNAME=bisharron
$ export OS_PASSWORD=asdasd
$ export OS_TENANT_NAME=proy-bisharron
$ export OS_AUTH_URL=http://172.22.222.1:5000/v2.0

And now the client request is much simpler:

$ nova secgroup-list
+---------+-------------+
| Name    | Description |
+---------+-------------+
| default | default     |
+---------+-------------+

It is common to store these environment variables in a properly protected file sourced from ~/.bashrc. Another approach is to use the openrc.sh shell script, downloadable from the OpenStack Dashboard, which looks like this:

#!/bin/bash

# With the addition of Keystone, to use an openstack cloud you should
# authenticate against keystone, which returns a **Token** and **Service
# Catalog**.  The catalog contains the endpoint for all services the
# user/tenant has access to - including nova, glance, keystone, swift.
#
# *NOTE*: Using the 2.0 *auth api* does not mean that compute api is 2.0.  We
# will use the 1.1 *compute api*
export OS_AUTH_URL=http://172.22.222.1:5000/v2.0

# With the addition of Keystone we have standardized on the term **tenant**
# as the entity that owns the resources.
export OS_TENANT_ID=065717b748b44c48b90b0692f72337c0
export OS_TENANT_NAME="proy-bisharron"

# In addition to the owning entity (tenant), openstack stores the entity
# performing the action as the **user**.
export OS_USERNAME=bisharron

# With Keystone you pass the keystone password.
echo "Please enter your OpenStack Password: "
read -s OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT

Each time the script is sourced, it asks for the password, making it unnecessary to store the password in a plain text file.
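
For example, assuming the script has been saved as openrc.sh in the current directory, it can be sourced in the current shell and the resulting variables checked (the output and its order may vary):

$ source openrc.sh
Please enter your OpenStack Password:
$ env | grep OS_
OS_TENANT_ID=065717b748b44c48b90b0692f72337c0
OS_PASSWORD=asdasd
OS_AUTH_URL=http://172.22.222.1:5000/v2.0
OS_USERNAME=bisharron
OS_TENANT_NAME=proy-bisharron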

Create ssh keypair

For security reasons, images used in OpenStack do not usually have a password defined for any user; only public-key ssh authentication is generally allowed. When an instance is spawned, the ssh public key is injected into it and only the holder of the corresponding private key is able to access the instance.

An ssh keypair can be created directly with the nova command and the resulting private key redirected to a file (ssh private keys are usually stored in the ~/.ssh directory):

$ nova keypair-add openstack-bisharron > ~/.ssh/openstack-bisharron.pem

The private key must be readable only by its owner:

$ chmod 600 ~/.ssh/openstack-bisharron.pem
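
The new keypair should now appear in the keypair list (the fingerprint below is only illustrative):

$ nova keypair-list
+---------------------+-------------------------------------------------+
| Name                | Fingerprint                                     |
+---------------------+-------------------------------------------------+
| openstack-bisharron | 9d:00:f6:d1:23:08:67:91:45:3a:ab:26:b7:58:2c:ff |
+---------------------+-------------------------------------------------+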

Allocate Floating IP to project

Floating IPs (elastic IPs in Amazon EC2 terminology) allow instances to reach external hosts and allow access to the instances from an external network. A floating IP can be allocated to a project before or after launching an instance; we will do it before.

We need to know the available floating IP pools:

$ nova floating-ip-pool-list
+---------+
| name    |
+---------+
| ext_net |
+---------+

Now we can request a floating IP from the ext_net pool and allocate it to the project:

$ nova floating-ip-create ext_net
+---------------+-------------+----------+---------+
| Ip            | Instance Id | Fixed Ip | Pool    |
+---------------+-------------+----------+---------+
| 172.22.196.59 | None        | None     | ext_net |
+---------------+-------------+----------+---------+

The floating IP 172.22.196.59 is now allocated to the project, but it is not yet associated with any instance.

Launch an instance

First, we need to know the available images (the ID is needed to boot a new instance):

$ nova image-list
+--------------------------------------+------------------------+--------+--------+
| ID                                   | Name                   | Status | Server |
+--------------------------------------+------------------------+--------+--------+
| f1bcf888-65d5-4dba-9634-62a1a9a4c8ca | Bitnami WordPress      | ACTIVE |        |
| 275bb737-73df-4e6e-bd46-604e026868ea | Bitnami owncloud       | ACTIVE |        |
| 59a97abf-82ee-4e5d-b6ea-587dc311a315 | Debian wheezy          | ACTIVE |        |
| 374f36a9-9e84-42a9-8be8-0f99b66f61a2 | OpenBSD - de pruebas   | ACTIVE |        |
| 64b0aea2-dda3-437b-9c8f-565b4aa053e9 | Ubuntu 12.04 LTS       | ACTIVE |        |
| de91a58a-be75-48b1-879c-68c8eb338f02 | Windows 7 x64          | ACTIVE |        |
| 9f9389c8-0e84-4f08-9131-ffbb22a77c30 | Windows Server 2012 SE | ACTIVE |        |
| 1d01a097-0893-4c00-8f3d-3181303bfb1f | cirros-0.3.1-x86_64    | ACTIVE |        |
+--------------------------------------+------------------------+--------+--------+
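
More detail about a particular image can be obtained with “nova image-show”; as a sketch, for the Debian image used below (output abridged and illustrative):

$ nova image-show 59a97abf-82ee-4e5d-b6ea-587dc311a315
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| name     | Debian wheezy                        |
| status   | ACTIVE                               |
| minDisk  | 0                                    |
| minRam   | 0                                    |
| id       | 59a97abf-82ee-4e5d-b6ea-587dc311a315 |
+----------+--------------------------------------+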

Second, the available flavors are listed:

$ nova flavor-list
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| ID                                   | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | extra_specs |
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| 1                                    | m1.tiny   | 512       | 0    | 0         |      | 1     | 1.0         | True      | {}          |
| 2                                    | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      | {}          |
| 3                                    | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      | {}          |
| 4                                    | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      | {}          |
| 5                                    | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      | {}          |
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+

It is necessary to connect the new instance to at least one network, so the network IDs must be known:

$ nova net-list
+--------------------------------------+--------------------------+------+
| ID                                   | Label                    | CIDR |
+--------------------------------------+--------------------------+------+
| c1ea332e-642f-4f75-866c-1bc86d4cb20a | ext_net                  | None |
| c4d32fa8-35bd-4d37-b86c-4857b5c57da4 | red interna de bisharron | None |
+--------------------------------------+--------------------------+------+
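
To avoid copying UUIDs by hand, the internal network ID can be captured in a shell variable with standard tools. A minimal sketch, matching on the network label shown above:

$ NET_ID=$(nova net-list | awk '/red interna/ {print $2}')
$ echo $NET_ID
c4d32fa8-35bd-4d37-b86c-4857b5c57da4

The boot command below uses the literal IDs for clarity, but $NET_ID could be used instead.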

Once all needed parameters are known, it is possible to boot a new instance:

$ nova boot --flavor 1 --image 59a97abf-82ee-4e5d-b6ea-587dc311a315 --key-name openstack-bisharron --nic net-id=c4d32fa8-35bd-4d37-b86c-4857b5c57da4 debian-test
+-----------------------------+--------------------------------------+
| Property                    | Value                                |
+-----------------------------+--------------------------------------+
| status                      | BUILD                                |
| updated                     | 2013-11-17T08:33:36Z                 |
| OS-EXT-STS:task_state       | scheduling                           |
| key_name                    | openstack-bisharron                  |
| image                       | Debian wheezy                        |
| hostId                      |                                      |
| OS-EXT-STS:vm_state         | building                             |
| flavor                      | m1.tiny                              |
| id                          | 49884676-813e-4a54-ba68-71844426b910 |
| security_groups             | [{u'name': u'default'}]              |
| user_id                     | ec41dc5218e7452d9e9735b63b5d6c7c     |
| name                        | debian-test                          |
| adminPass                   | S7SRwFVqyVDi                         |
| tenant_id                   | 065717b748b44c48b90b0692f72337c0     |
| created                     | 2013-11-17T08:33:35Z                 |
| OS-DCF:diskConfig           | MANUAL                               |
| metadata                    | {}                                   |
| accessIPv4                  |                                      |
| accessIPv6                  |                                      |
| progress                    | 0                                    |
| OS-EXT-STS:power_state      | 0                                    |
| OS-EXT-AZ:availability_zone | nova                                 |
| config_drive                |                                      |
+-----------------------------+--------------------------------------+

A complete list of the parameters accepted by “nova boot” can be obtained from the built-in help:

$ nova help boot
usage: nova boot [--flavor <flavor>] [--image <image>]
                 [--image-with <key=value>] [--num-instances <number>]
                 [--meta <key=value>] [--file <dst-path=src-path>]
                 [--key-name <key-name>] [--user-data <user-data>]
                 [--availability-zone <availability-zone>]
                 [--security-groups <security-groups>]
                 [--block-device-mapping <dev-name=mapping>]
                 [--hint <key=value>]
                 [--nic <net-id=net-uuid,v4-fixed-ip=ip-addr,port-id=port-uuid>]
                 [--config-drive <value>] [--poll]
                 <name>

Boot a new server.

Positional arguments:
  <name>                Name for the new server

Optional arguments:
  --flavor <flavor>     Flavor ID (see 'nova flavor-list').
  --image <image>       Image ID (see 'nova image-list').
  --image-with <key=value>
                        Image metadata property (see 'nova image-show').
  --num-instances <number>
                        boot multi instances at a time
  --meta <key=value>    Record arbitrary key/value metadata to /meta.js on the
                        new server. Can be specified multiple times.
  --file <dst-path=src-path>
                        Store arbitrary files from <src-path> locally to <dst-
                        path> on the new server. You may store up to 5 files.
  --key-name <key-name>
                        Key name of keypair that should be created earlier
                        with the command keypair-add
  --user-data <user-data>
                        user data file to pass to be exposed by the metadata
                        server.
  --availability-zone <availability-zone>
                        The availability zone for instance placement.
  --security-groups <security-groups>
                        Comma separated list of security group names.
  --block-device-mapping <dev-name=mapping>
                        Block device mapping in the format <dev-
                        name>=<id>:<type>:<size(GB)>:<delete-on-terminate>.
  --hint <key=value>    Send arbitrary key/value pairs to the scheduler for
                        custom use.
  --nic <net-id=net-uuid,v4-fixed-ip=ip-addr,port-id=port-uuid>
                        Create a NIC on the server. Specify option multiple
                        times to create multiple NICs. net-id: attach NIC to
                        network with this UUID (optional) v4-fixed-ip: IPv4
                        fixed address for NIC (optional). port-id: attach NIC
                        to port with this UUID (optional)
  --config-drive <value>
                        Enable config drive
  --poll                Blocks while instance builds so progress can be
                        reported.
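
Instead of the --poll option shown above, the build status can simply be checked with “nova list” until the instance becomes ACTIVE (the output below is illustrative):

$ nova list
+--------------------------------------+-------------+--------+-----------------------------------+
| ID                                   | Name        | Status | Networks                          |
+--------------------------------------+-------------+--------+-----------------------------------+
| 49884676-813e-4a54-ba68-71844426b910 | debian-test | ACTIVE | red interna de bisharron=10.0.0.2 |
+--------------------------------------+-------------+--------+-----------------------------------+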

After a few seconds the instance is active and the fixed IP 10.0.0.2 has been assigned to it:

$ nova show 49884676-813e-4a54-ba68-71844426b910
+----------------------------------+----------------------------------------------------------+
| Property                         | Value                                                    |
+----------------------------------+----------------------------------------------------------+
| status                           | ACTIVE                                                   |
| updated                          | 2013-11-17T08:33:47Z                                     |
| OS-EXT-STS:task_state            | None                                                     |
| key_name                         | openstack-bisharron                                      |
| image                            | Debian wheezy (59a97abf-82ee-4e5d-b6ea-587dc311a315)     |
| hostId                           | 5b9e3d82653b6cd00fd355efa2fe3e79abc742c9155e73c6f7940ce9 |
| OS-EXT-STS:vm_state              | active                                                   |
| red interna de bisharron network | 10.0.0.2                                                 |
| flavor                           | m1.tiny (1)                                              |
| id                               | 49884676-813e-4a54-ba68-71844426b910                     |
| security_groups                  | [{u'name': u'default'}]                                  |
| user_id                          | ec41dc5218e7452d9e9735b63b5d6c7c                         |
| name                             | debian-test                                              |
| created                          | 2013-11-17T08:33:35Z                                     |
| tenant_id                        | 065717b748b44c48b90b0692f72337c0                         |
| OS-DCF:diskConfig                | MANUAL                                                   |
| metadata                         | {}                                                       |
| accessIPv4                       |                                                          |
| accessIPv6                       |                                                          |
| progress                         | 0                                                        |
| OS-EXT-STS:power_state           | 1                                                        |
| OS-EXT-AZ:availability_zone      | nova                                                     |
| config_drive                     |                                                          |
+----------------------------------+----------------------------------------------------------+

We just need to associate the floating IP to be able to access the instance:

$ nova add-floating-ip 49884676-813e-4a54-ba68-71844426b910 172.22.196.59
$ nova floating-ip-list
+---------------+--------------------------------------+----------+---------+
| Ip            | Instance Id                          | Fixed Ip | Pool    |
+---------------+--------------------------------------+----------+---------+
| 172.22.196.59 | 49884676-813e-4a54-ba68-71844426b910 | 10.0.0.2 | ext_net |
+---------------+--------------------------------------+----------+---------+
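
Should the address ever need to be released, the reverse operations are a disassociation followed by a deallocation (a sketch, not needed for this example):

$ nova remove-floating-ip 49884676-813e-4a54-ba68-71844426b910 172.22.196.59
$ nova floating-ip-delete 172.22.196.59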

Security Group rules

The instance is launched and the floating IP associated, so it should be possible to access it via ssh, but it is not yet possible due to the default firewall behavior: incoming connections must be explicitly allowed as rules in a security group.

No security group was specified when the instance was launched, so the default one was assigned. This security group has no rules defined:

$ nova secgroup-list
+---------+-------------+
| Name    | Description |
+---------+-------------+
| default | default     |
+---------+-------------+
$ nova secgroup-list-rules default

Add a rule to allow incoming ssh connections (22/tcp):

$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

Add a rule to allow all incoming ICMP traffic:

$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
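
The security group should now list both rules (the order may differ):

$ nova secgroup-list-rules default
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+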

Access to the launched instance

Now we can ping the instance:

$ ping 172.22.196.59
PING 172.22.196.59 (172.22.196.59) 56(84) bytes of data.
64 bytes from 172.22.196.59: icmp_req=1 ttl=63 time=441 ms
64 bytes from 172.22.196.59: icmp_req=2 ttl=63 time=0.602 ms
64 bytes from 172.22.196.59: icmp_req=3 ttl=63 time=0.686 ms
^C
--- 172.22.196.59 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.602/147.616/441.562/207.851 ms

And use the ssh command to make a secure connection to the instance (specifying the private key to use):

$ ssh -i ~/.ssh/openstack-bisharron.pem debian@172.22.196.59
The authenticity of host '172.22.196.59 (172.22.196.59)' can't be established.
ECDSA key fingerprint is a7:cc:3a:1d:2b:8d:f4:ad:e7:a6:45:c6:a5:61:85:9b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.22.196.59' (ECDSA) to the list of known hosts.
Linux debian.example.com 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
debian@debian-test:~$
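
To avoid typing the user and key path every time, an entry can be added to ~/.ssh/config. A minimal sketch, using the floating IP and key path from this example:

Host debian-test
    HostName 172.22.196.59
    User debian
    IdentityFile ~/.ssh/openstack-bisharron.pem

After that, the connection is simply:

$ ssh debian-test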
