About me

I'm 35 years old, I have a degree in Mathematics, I work as a programmer and I live in R6. I am committed to spreading new technologies: IT can let more citizens take part in the administration's decisions, and help people oversee the work of institutions by interacting with them. On this blog we discuss how to bring participation and transparency through innovation, in our district and in our city. Welcome!

Soft anti-affinity on OpenStack Newton: you can always tell

An OpenStack ServerGroup lets you set placement policies on VMs. For example, you can:

– spread them across different nodes (anti-affinity)
– concentrate them on the same nodes (affinity)

The anti-affinity policy is quite rigid though: you need as many compute nodes (hypervisors) as virtual machines in the group. E.g. if your ServerGroup contains 10 VMs, you need 10 hypervisors. This can be quite limiting if, for example, you want to run many small VMs.

If you run this code, the last server will end up in ERROR state:

COMPUTE_NODES=7 # number of compute nodes
openstack server group create sg_anti_affinity \
  --policy anti-affinity
UUID=$(openstack server group show sg_anti_affinity -f value -c id)
# Boot one server more than we have hypervisors:
# the last one cannot satisfy the anti-affinity policy.
for i in $(seq $((COMPUTE_NODES+1))); do
  nova boot server-$i \
    --image RHEL-7.3 --flavor m1.small \
    --nic net-id=d3490da5-dbda-4f2a-952b-3ba90ee10e67 \
    --hint group=$UUID
done

You can work around this by migrating some servers onto already used nodes, leaving some compute nodes free:


# Colocate server-1..3 to leave compute-01..03 free.
# --force bypasses the scheduler, which would otherwise
# refuse a target that violates the anti-affinity policy.
nova --debug live-migration --force server-1 compute-04
nova --debug live-migration --force server-2 compute-05
nova --debug live-migration --force server-3 compute-06
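
You can then check where each server actually landed (the Host column is only visible to admins):

# Show which compute node hosts each server.
openstack server list --long -c Name -c Host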

Nova supports soft-anti-affinity starting with API microversion 2.15: it spreads the servers on a best-effort basis. To use it you have to pass the microversion explicitly to the Nova API:


# This works
openstack server group create server-group-soft \
--policy soft-anti-affinity \
--os-compute-api-version 2.15

# This does not
openstack server group create server-group-soft \
--policy soft-anti-affinity
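
You can then double-check the group, again passing the microversion:

openstack server group show server-group-soft \
  --os-compute-api-version 2.15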

Further info:

– https://bugzilla.redhat.com/show_bug.cgi?id=1447798
– https://docs.openstack.org/nova/latest/reference/api-microversion-history#id13


August 10 2017 | Linux

Note of caution when using python-requests and certifi

certifi is a Python package bundling Mozilla's set of trusted CA certificates. If you run


pip install certifi

then good ol’ python-requests boldly ignores the CA we trusted locally with

cp local-ca.pem /etc/pki/ca-trust/source/anchors
update-ca-trust

and starts raising a whole bunch of SSL exceptions, even though the system trust store is perfectly happy:

echo | openssl s_client -showcerts -connect server:443 | sed -ne '/BEGIN/,/END/ p' | openssl verify
..
stdin: OK
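
A possible workaround is pointing requests back at the system bundle via the REQUESTS_CA_BUNDLE environment variable (the path below assumes a RHEL-style layout):

# Make requests use the system CA bundle instead of certifi's.
export REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt
python -c 'import requests; requests.get("https://server")'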

Further info here:

– https://bugs.launchpad.net/python-openstackclient/+bug/1634861
– https://medium.com/@george.shuklin/ssl-certificate-abyss-in-openstack-5c3c4a92eb5a


August 02 2017 | Linux

Fixing cinder volumes stuck in detaching state

When Kubernetes automagically attaches and detaches cinder volumes from your servers, you can end up in weird situations like:

– a volume is in `detaching` state
– but is still attached to the server

E.g.

# cinder list | grep $SERVER_ID
| $CINDER_ID | detaching | kubernetes-dynamic-pvc-xxx | ...| $SERVER_ID |
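
cinder list also accepts a --status filter, handy when several volumes are affected:

cinder list --status detaching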

On your server:
# ssh kube-node lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 30G 0 disk
...
vdc 253:32 0 10G 0 disk <---- STILL ATTACHED, but not mounted

As the volume is actually attached, you can just tell Cinder the truth. It'll trust you ;)

# cinder reset-state --state in-use $CINDER_ID

And finally re-detach the volume

# nova volume-detach $SERVER_ID $CINDER_ID

Now it's clean!

# cinder list | grep $SERVER_ID


July 29 2017 | Linux

Beware of the excluded – ansible yum module

If you update an rpm that yum excludes (e.g. via an `exclude` directive) with the ansible yum module

- yum: name="http://foo-1.2.rpm" state=present

nothing will be done, yet the task will report success anyway.

You should therefore verify in some other way that the package version is correct after the installation.
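
A minimal sketch of such a check (package name and version are hypothetical):

# Query the installed version and fail if it is not the expected one.
- command: rpm -q --qf '%{VERSION}' foo
  register: foo_version
  changed_when: false

- assert:
    that: foo_version.stdout == "1.2"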


July 28 2017 | Linux

There’s heat on the nova: recovering from a faulty flavor

While updating a stack with

o stack update foo --existing  # "o" is an alias for the openstack client

I suddenly noticed a bunch of errors. A flavor's disk size had been reduced, and now I had 3 machines of a ServerGroup in ERROR state:


Flavor's disk is too small for requested image. Flavor disk is [30 GiB], image is [60 GiB]

Sadly, fixing the flavor (recreating it with the same ID) and updating as follows didn't do the job:


o stack check foo
o stack update --existing foo

I then checked the server status:

o server show server-0
...
vm_task: None
flavor: the_correct_one

After reading this page:
https://wiki.openstack.org/wiki/CrashUp/Recover_From_Nova_Uncontrolled_Operations
I found that no vm_task was trying to replace the root volume, so resetting the state was safe.

I then reset the state of the first VM and, once the attempt proved successful, did the same with the others:


nova reset-state --active server-0
nova stop server-0
nova start server-0
# wait
ssh server-0 # ok :D

Before running any update, I need to check whether the hosts are going to be RESIZEd. This command should tell me, but inspection of nested resources is still not supported:

o stack update --dry-run foo --existing -n # error!

The old client supports this, but in my environment I got a "Bad Request":

heat stack-update --dry-run foo -x -n

As a workaround I ran it directly on the nested stacks:

heat stack-update --dry-run -x -n foo-servergroup-xyz


July 26 2017 | Linux

The magic of support (tools)

Red Hat provides a very nice tool for managing support cases from remote machines.

Once you have installed and configured redhat-support-tool (e.g. set a proxy), you can:

# upload files to the support
redhat-support-tool addattachment /var/tmp/sosreport.tar.gz -c $CASE 

# download files or patches to local machines
redhat-support-tool getattachment -c $CASE -u $UUID -d /home/support

You can even embed this in the ansible playbooks you use to reproduce cases, as in the sketch below.
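
A minimal sketch (the case variable and the file path are hypothetical):

# Attach a fresh sosreport to the support case.
- name: Upload sosreport to the case
  command: >
    redhat-support-tool addattachment
    /var/tmp/sosreport.tar.gz -c {{ support_case }}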


July 24 2017 | Linux

OpenShift labels

OpenShift uses labels for managing resources.

Show pods together with their labels:

# oc get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
hawkular-cassandra-1-y18we 1/1 Running 0 7h metrics-infra=hawkular-cassandra,name=hawkular-cassandra-1,type=hawkular-cassandra
hawkular-metrics-zw29l 1/1 Running 0 7h metrics-infra=hawkular-metrics,name=hawkular-metrics
heapster-0wslk 1/1 Running 0 8h metrics-infra=heapster,name=heapster
metrics-deployer-tr83o 0/1 Error 0 49m component=deployer,metrics-infra=deployer,provider=openshift

We can filter in various ways with `-l`:


# oc get pods -l metrics-infra # all resources with metrics-infra key

Or by key=value:

# oc get pods -l metrics-infra=heapster

The same selectors work with both `get` and `delete`, as shown below.
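
For example, to clean up the failed deployer pod listed above:

# Delete all pods carrying the metrics-infra=deployer label.
oc delete pods -l metrics-infra=deployer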


May 25 2017 | Linux

The joys of recursshion

Enjoy 😉

#
# Enjoy ssh -F ssh_config target
#
# The bastion is reachable directly.
Host bastion
  IdentityFile ./id_ecdsa_shift
  User cloud-user

# The target is reached by recursively invoking ssh
# through the bastion (hence the recursshion).
Host target
  IdentityFile ./id_ecdsa_shift
  ProxyCommand ssh -F ssh_config cloud-user@bastion -W %h:%p
  User cloud-user


May 03 2017 | Linux

New environment variables in JBOSS s2i

New OpenShift images don’t require a custom settings.xml or assembly file to use an HTTP proxy for Maven builds.

Just use:

HTTP_PROXY_HOST
HTTP_PROXY_PORT
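
For example, you can set them on an existing build config ("myapp" and the proxy address are placeholders) and rebuild:

# Set the proxy vars on the build config, then trigger a build.
oc set env bc/myapp HTTP_PROXY_HOST=proxy.example.com HTTP_PROXY_PORT=3128
oc start-build myapp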

Further info here, on the official docs.


May 02 2017 | Linux

Finalizing openshift-on-openstack installation

After installing openshift-on-openstack, you still have to finalize a few steps, such as creating the registry and the routers. You may need to customize them, so dump the default definitions first:


oadm registry -o yaml > registry.yaml
oadm router -o yaml > router.yaml

vim router.yaml registry.yaml

In registry.yaml, use persistent storage!


volumeMounts:
- mountPath: /registry
  name: registry-storage

...
# Replace the `registry-storage` emptyDir volume
# with a persistent volume claim:
volumes:
- name: registry-storage
  persistentVolumeClaim:
    claimName: pvc-registry

oc create -f router.yaml
oc create -f registry.yaml
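
The `pvc-registry` claim must already exist; a minimal sketch (the size is just an example):

oc create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-registry
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 10Gi
EOF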

After creating the registry and the routers, restart the master service, or you'll get stale service values:

oc get services # check ip addresses ;)
systemctl restart atomic-openshift-master


April 12 2017 | Linux
