About me

I'm 35 years old, I hold a degree in Mathematics, I work as a programmer, and I live in R6. I'm committed to spreading new technologies: IT can get more citizens involved in the administration's decisions, and help people oversee what institutions do by interacting with them. On this blog we discuss how innovation can bring participation and transparency to our district and our city. Welcome!

Multiple ipfailover deployments with OpenShift 3.6 require vrrp-id-offset

OpenShift 3.6 supports special pods named ipfailover, which use keepalived/VRRP to bind a virtual IP (VIP) to a given host interface.

From a privileged account, the following steps create:

– an ipfailover deploymentConfig named ipf-ens0 binding its VIPs to ens0

– an ipfailover deploymentConfig named ipf-ens1 binding its VIPs to ens1

Each deploymentConfig creates 2 pods running on nodes labeled region=infra.

Note that we specify a vrrp-id-offset so that there is no VRRP interference between the different ipfailover pods.
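As a back-of-the-envelope check, you can list the VRRP router ids each deployment will claim. This assumes one VRRP id per VIP, starting at the offset; `vrrp_ids` is a hypothetical helper, not part of oadm:

```shell
# Hypothetical helper: list the VRRP router ids a deployment claims,
# assuming one id per VIP starting at --vrrp-id-offset.
vrrp_ids() {
  local offset=$1 vip_count=$2
  seq "$offset" "$((offset + vip_count - 1))"
}

vrrp_ids 10 2   # ipf-ens0 claims ids 10 and 11
vrrp_ids 20 2   # ipf-ens1 claims ids 20 and 21
```

With offsets 10 and 20 the two ranges cannot overlap as long as each deployment handles fewer than 10 VIPs.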


# Create two ipfailover deploymentConfig binding routers to different interfaces.
oadm policy add-scc-to-user privileged -z ipfailover

# ipfailovers are named with the bound interfaces for clarity
# to avoid conflicts we *must* set vrrp-id-offset
oadm ipfailover ipf-ens0  --interface=ens0 --virtual-ips= --vrrp-id-offset=10 \
  --create --service-account=ipfailover --selector=region=infra --replicas=2 
oadm ipfailover ipf-ens1 --interface=ens1  --virtual-ips= --vrrp-id-offset=20 \
  --create --service-account=ipfailover --selector=region=infra --replicas=2

January 09 2018 | Linux

Electrical reminders

Sharing a few electrical reminders here 😉

Unlocking a lamp holder that has a safety tab

How a relay-based installation works – to be combined with an OR circuit when there are multiple switches.


November 11 2017 | Politics

Zooming in on ip route

iproute2 allows managing multiple routing tables.

By default you get 4 tables. Table labels are defined in

cat /etc/iproute2/rt_tables
# reserved values
255 local
254 main
253 default
0 unspec

but you can also refer to tables by plain integer ids.
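The label is just a convenience; a sketch of resolving a label to its number from that file format (`lookup_table` is a hypothetical helper, with the reserved entries inlined as a here-doc):

```shell
# Hypothetical helper: resolve a table label to its numeric id,
# parsing the rt_tables format shown above.
lookup_table() {
  awk -v name="$1" '$2 == name { print $1 }' <<'EOF'
255 local
254 main
253 default
0 unspec
EOF
}

lookup_table main    # prints 254
lookup_table local   # prints 255
```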

Inspect those tables:

ip route list table main   # same as plain `ip r`
ip route list table local  # local and broadcast addresses

You can add routes to further tables on the fly:

ip route add default via <gateway-ip> table 9  # add a further default gw on table 9 (placeholder gateway)

Now route everything except a local class of IPs through table 9, ignoring the main routing table's entries:

ip rule add not to <local-prefix> table 9  # placeholder prefix


October 25 2017 | Linux

Working on the fog-openstack gem

When working on the fog-openstack gem in a proper docker container:

git clone ...
rake test
# install your local gem
bundle exec rake install

# trace http wire
export EXCON_DEBUG=1
export DEBUG=1


– bundler packages only files tracked by git. To get a new file into the gem, `git add` it first.
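A quick demonstration of that gotcha, assuming (as is typical for fog gems) a gemspec that builds its file list from `git ls-files`:

```shell
# Throwaway repo to show that untracked files are invisible to `git ls-files`,
# and therefore to a gemspec built on top of it.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
echo 'module Foo; end' > new_file.rb

git ls-files | grep -c new_file.rb   # 0: would be silently left out of the gem
git add new_file.rb
git ls-files | grep -c new_file.rb   # 1: now it ships
```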


October 13 2017 | Politics

Working around sudo limitation for using ansible

On some systems it can happen that you are only allowed to sudo one exact command line.

E.g. if you can't run an arbitrary `sudo <command>` but only

sudo su - jboss

then ansible's usual become methods can't work.

The only thing you can do is feed the command to the resulting login shell, e.g. via a here-string


sudo su - jboss <<< "ls /opt/jboss/ -l"


A first working kludge that lets ansible use this peculiar become method consists in adding a new one where the become command is built (in ansible 2.x, `make_become_cmd` in play_context.py)

elif self.become_method == 'sudosu':
    exe = self.become_exe or 'sudo'
    becomecmd = '%s su - %s <<< "%s"' % (exe, self.become_user, command)


and registering the new method among the become-method constants (`BECOME_METHODS` in ansible's constants.py).

Then enable it in your ansible.cfg:
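A minimal fragment, assuming the method was registered under the name `sudosu` and jboss is the target user:

```ini
[privilege_escalation]
become = True
become_method = sudosu
become_user = jboss
```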




September 29 2017 | Politics

soft-anti-affinity on openstack newton: you can always tell

An OpenStack ServerGroup lets you set placement policies on VMs. You can for example:

– spread them across different nodes (anti-affinity)
– concentrate them on the same nodes (affinity)

The anti-affinity policy is quite rigid though: you need as many compute nodes (hypervisors) as virtual machines in the group. E.g. if your ServerGroup contains 10 VMs, you need 10 hypervisors. This can be quite limiting if, say, you want many small VMs.

If you run this code, the last server will end up in ERROR state:

COMPUTE_NODES=7 # number of compute nodes
openstack server group create sg_anti_affinity \
  --policy anti-affinity
UUID=$(openstack server group show sg_anti_affinity -f value -c id)
for i in $(seq $((COMPUTE_NODES + 1))); do
  nova boot server-$i \
    --image RHEL-7.3 --flavor m1.small \
    --nic net-id=d3490da5-dbda-4f2a-952b-3ba90ee10e67 \
    --hint group=$UUID
done

You can work around this by migrating more servers onto already-used nodes, leaving some compute nodes free:

# colocate server-1..3 to leave compute-01..03 free.
nova --debug live-migration --force server-1 compute-04
nova --debug live-migration --force server-2 compute-05
nova --debug live-migration --force server-3 compute-06

Compute API microversion 2.15+ supports soft-anti-affinity, which spreads VMs on a best-effort basis. To use it you have to pass the exact microversion to the nova API:

# This works
openstack server group create server-group-soft \
--policy soft-anti-affinity \
--os-compute-api-version 2.15

# This does not
openstack server group create server-group-soft \
--policy soft-anti-affinity




August 10 2017 | Linux

Note of caution when using python-requests and certifi

certifi is a bundle of Mozilla's trusted root certificates. If you

pip install certifi

the good ol' python-requests starts using that bundle and boldly ignores the CA we trusted locally with

cp local-ca.pem /etc/pki/ca-trust/source/anchors
update-ca-trust

and starts raising a whole bunch of SSL exceptions even though we have

echo | openssl   s_client -showcerts -connect server:443  |sed -ne '/BEGIN/,/END/ p'  | openssl verify 
stdin: OK
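A workaround sketch: python-requests honours the `REQUESTS_CA_BUNDLE` environment variable, so you can point it back at the system trust store. The bundle path below is the RHEL/CentOS one; adjust for your distro:

```shell
# Point python-requests at the OS bundle, which does include anchors
# added under /etc/pki/ca-trust/source/anchors (after update-ca-trust).
export REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt
```

Alternatively, pass `verify="/etc/pki/tls/certs/ca-bundle.crt"` to individual requests calls.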




August 02 2017 | Linux

Fixing cinder volumes stuck in detaching state

When Kubernetes automagically attaches and detaches cinder volumes from your servers, you can end up in weird situations like:

– a volume is in `detaching` state
– but is still attached to the server


# cinder list | grep $SERVER_ID
| $CINDER_ID | detaching | kubernetes-dynamic-pvc-xxx | ...| $SERVER_ID |

On your server
# ssh kube-node lsblk
vda 253:0 0 30G 0 disk
vdc 253:32 0 10G 0 disk <---- STILL ATTACHED, but not mounted

As the volume is actually attached, you can just tell cinder the truth. It'll trust you ;)

# cinder reset-state --state in-use $CINDER_ID

And finally re-detach the volume

# nova volume-detach $SERVER_ID $CINDER_ID

Now it's clean!

# cinder list | grep $SERVER_ID
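When several volumes are stuck at once, the ids can be harvested from the `cinder list` table before looping over `reset-state`. `stuck_volumes` is a hypothetical helper, and the rows below are sample output:

```shell
# Hypothetical helper: extract volume ids in `detaching` state from
# cinder's table output (fields are |-separated).
stuck_volumes() {
  awk -F'|' '$3 ~ /detaching/ { gsub(/ /, "", $2); print $2 }'
}

stuck_volumes <<'EOF'
| vol-aaa | detaching | kubernetes-dynamic-pvc-xxx | ... |
| vol-bbb | in-use    | kubernetes-dynamic-pvc-yyy | ... |
EOF
# prints vol-aaa
```

Each printed id can then be fed to `cinder reset-state --state in-use`.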


July 29 2017 | Linux

Beware of the excluded – ansible yum module

If you install or update an rpm that yum excludes (exclude= in yum.conf) with the ansible yum module

- yum: name="http://foo-1.2.rpm" state=present

nothing will be done, yet yum reports success anyway.

You should verify in some other way that, after the task, the installed package version is the expected one.
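One way to do that check, sketched as a follow-up task (package name and version are placeholders):

```yaml
# Hypothetical follow-up: fail if the excluded package silently stayed
# at its old version.
- yum: name="http://foo-1.2.rpm" state=present

- command: rpm -q foo
  register: foo_rpm
  changed_when: false
  failed_when: "'foo-1.2' not in foo_rpm.stdout"
```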


July 28 2017 | Linux

There’s heat on the nova: recovering from a faulty flavor

While updating a stack with

o stack update foo --existing

I suddenly noticed a bunch of errors. A flavor's disk size had been reduced, and now I had 3 machines of a ServerGroup in ERROR state.

Flavor's disk is too small for requested image. Flavor disk is [30 GiB], image is [60 GiB]

Sadly, fixing the flavor (recreating it with the same ID) and updating as follows didn't do the job

o stack check foo
o stack update --existing foo

I then checked the server status

o server show server-0
vm_task: None  <---
flavor: the_correct_one

After reading this page
I found the vm_task was not trying to replace the root volume.

I then reset the state of the first VM and, after the attempt succeeded, did the same with the others.

nova reset-state --active server-0
nova stop server-0
nova start server-0
# wait
ssh server-0 # ok :D

Before running any update, I need to check whether the hosts are going to be RESIZEd or not. This command should tell, but inspection of nested resources is still not supported

o stack update --dry-run foo --existing -n # error!

The old client should support it, but in my environment I got a "Bad Request"

heat stack-update --dry-run foo -x -n

As a workaround I ran it directly on the nested stacks.

heat stack-update --dry-run -x -n foo-servergroup-xyz


July 26 2017 | Linux
