Category Archives: Robotics and Electronics

The ultimate software defined – FPGAs in the datacenter

Recently I have been playing around with this thingy:

What is it and why are you blogging about it, you might wonder. Well… it’s an FPGA development board.

A what?

Yes. An FPGA development board. I’m aware that a lot of people in IT infrastructure have probably never heard of FPGAs, or have heard of them but don’t really know what they are. Let me explain what an FPGA is before I dive into why I’m experimenting with one.

FPGA stands for Field Programmable Gate Array. It is basically a chip which contains a collection of digital gates: AND gates, OR gates, NOT gates and every conceivable combination thereof. The gates are not connected to each other by default, but the chip can be configured to connect them in whichever way you like, thereby creating any kind of functionality you like.

In short: an FPGA is a programmable (or reconfigurable) chip. So it is possible to have an FPGA behave like a processor, or a display driver, or an S/PDIF receiver, or a bitcoin miner or… well, you get the point. It’s a chip that’s not really a chip until you program it to be any chip you like. And I use the word “program” loosely here because I’m still not sure this can actually be called programming. But for lack of a better word I’ll call it programming for now.

FPGAs are programmed using a hardware description language. The two main ones are Verilog and VHDL. Since VHDL seems to be the dominant language in Europe, I started out learning VHDL. This has been very interesting for multiple reasons. Maybe I’ll go into some of those in another blog post, but for now I want to highlight the fact that with these languages you are actually describing hardware, not software. The code you write determines what the chip will look like inside. And once you’ve tested this on an FPGA you can even send the code to a chip fab and have your own silicon created. How cool is that? 🙂

So in a way FPGAs can be called software defined hardware.


Can hardware be software defined? Yes, I think so. Software defined means that generic hardware performs a specific function just by loading the right software, and that its behavior can be altered by software. All of this is true for FPGAs. In my opinion FPGAs are the ultimate software defined devices.

FPGAs have been used in products for years now. You’ll find them in FusionIO (or whatever they’re called now) cards, digital sound consoles, LED displays, audio DACs and lots of other devices which need real-time, parallel processing.

FPGAs in the datacenter

But recently the FPGA has left its world of “devices” and can also be found in the datacenter. And that is exactly why I’m so interested in these amazing pieces of technology. In the last couple of years a lot has been happening in this field:

  • In 2015 Intel acquired Altera, one of the two major FPGA manufacturers (Xilinx being the other one)
  • Also in 2015 Microsoft started using FPGAs on a large scale for Bing
  • Last year Microsoft equipped all Azure physical machines with FPGAs
  • Last year Amazon started offering the “F1 instance”, an Amazon machine with access to an FPGA

To me this shows that FPGAs are here to stay and I think they will play an important role in the datacenter of the not so distant future.

What are FPGAs used for?

The main purpose of an FPGA in a server is to offer a reconfigurable accelerator. A lot of algorithms that can be parallelized can benefit from hardware acceleration. In the past it was often not feasible to build a purpose-built accelerator for one algorithm because this would turn the server it was placed in into a single-purpose server. Tweaking the algorithm after the hardware has been produced is also impossible. Since FPGAs are reconfigurable, you can keep all your hardware identical while still providing dedicated acceleration hardware.

Acceleration in hardware is becoming more and more important since the end of Moore’s Law is in sight while the amount of data that needs to be processed is increasing faster than ever. FPGAs are one way to keep up with the processing demand without adding more CPUs. On top of that, FPGAs are a lot more energy efficient than traditional CPUs.

Currently FPGAs are mostly used in big data analytics and real-time processing. Another area where FPGAs could become popular in the near future is network function virtualization. Imagine if VMware NSX had access to FPGAs in your servers. It would be able to offload a lot of networking features from the CPUs. Just like current network vendors use dedicated networking chips or even FPGAs in their networking equipment, NSX would have dedicated networking hardware at its disposal while still being 100% software defined. All while decreasing latency, increasing throughput and lowering CPU overhead.

ESP8266 PV Logger

I have written about the ESP8266 before in my post about the magic button. For the button I am using the nodeMCU firmware, which lets you run Lua scripts on the ESP. But recently an Arduino IDE version compatible with the ESP8266 was created. I was already using an ESP connected to an Arduino to log the output of my solar panels, so I decided to try and run this code natively on the ESP. And it worked, so although this is not automation or virtualization related, here are some more details about my ESP8266 PV Logger.


The logger connects to my Mastervolt Soladin 600. This inverter has an RS485 port, but connecting it to a TTL serial port is easy. All you need is two resistors. For the schematic see the readme on github. I have been monitoring my PV panels for years using this library. That lib only works with software serial ports, which are not supported by the ESP, so I changed it to use the Stream class instead. Both SoftwareSerial and Serial inherit from Stream, so now it should work with both on a normal Arduino, but I haven’t tested it with software serial. I also changed the serial delay and timeout settings to get reliable communication between the ESP and the Soladin. You can find the updated lib here.

Log Destination

There are a couple of places you could send your logging data to. Until today I was using another service and that worked fine, but it lacks nice graphical presentation of your data. And I’m too lazy to build my own, so I looked for something else. Now I am using ThingSpeak and that seems to work just as well, with the added bonus of nice graphs and Google Gauge visualisation. You can find my stats here.

Logger Code

The logger code is pretty easy. It tries to connect to the Soladin; if that succeeds it sends data to ThingSpeak. This is repeated every 15 seconds. If it can’t make a connection to the Soladin, the loop slows down. You can find the whole code here on github. Don’t forget to compile it with arduino 1.6.1-esp8266-1 instead of the regular Arduino IDE.


All you need to run this is a 3.3 volt power supply, two resistors, a cable with an RJ11 connector and an ESP-01. Mine looks like this.

ESP8266 PV logger

The best part? The whole thing costs about 5 bucks! That’s a lot cheaper than Mastervolt’s PC-Link cable.

The Magic Button

On March 19th we used The Magic Button (a.k.a. “The What Does This Button Do Button”) in our demos at the Dutch VMUG UserCon. It magically made a CoreOS cluster appear out of nowhere, launched our demo app and then scaled it out so all the people in the room could open the page. Of course you want to build your own now. Here is how.



The button itself is just a regular emergency stop button I got off eBay ($6). Inside there is enough space for a battery holder with 2x AA batteries. These batteries power an ESP8266-01 board. The ESP8266 is a WiFi SoC: it has an 80 MHz processor, a WiFi connection, 96 KBytes of RAM and a couple of GPIOs, comes with SPI flash on the board, costs around $5 and looks like this:


The chip has a UART and was originally intended to function as a serial to WiFi module. Out of the box it comes with an awkward AT firmware (hello 1990!). But thanks to a very active community we can now build our own firmware for this neat little chip. I don’t have the time or the knowledge to write my own firmware in C++, but luckily someone created a firmware for this chip that lets you run Lua code on it! I didn’t know any Lua before, but it turns out to be rather easy. Since it’s an event-driven interpreted language it has some commonalities with JavaScript, which I am very familiar with.

Here is how I connected the board:

  • Connect the button between GPIO0 and Ground
  • Connect the LED between GPIO2 and Ground. I used a 100 Ohm resistor to limit the current through the LED
  • Put a 1K pullup resistor between VCC and CH_PD
  • Batteries are directly connected to VCC and GND. No Caps or regulators.

The magic button internals

When everything is connected you can squeeze it all into the case. It actually doesn’t really fit: when I close the case the battery gets damaged a bit. But whatever, it works….

The Magic Button squeezed


So how did I turn this WiFi board into a magic button? The button simply does an HTTP POST to my webEye application. This application forwards the posted body to an AMQP bus, where it gets picked up by vRealize Orchestrator. vRO in turn runs a workflow which actually performs the magic. To enable your board to do the same, follow these steps:

  • Setup webEye or another webhook receiver to test the button
  • Flash this firmware on the chip: nodeMCU
  • Use ESPlorer or another tool to load these two Lua files:
  • Please edit the variables at the top of the files before copying them to your ESP
  • Emergency stop buttons are normally closed, so make sure the button is pressed (open) when you power up the ESP. If you don’t, it will keep GPIO0 low, which makes the chip boot into bootloader (flash) mode.

Now build a cool workflow which you can trigger with this button. Share your creations in the comments or find me on Twitter.

The brain of the Automate-IT robot

I recently posted an article about the Automate-IT robotic arm. I promised to go into more detail on the brain of the Automate-IT robot, so time to awaken your inner geek, because here is a write-up about the controller hardware and software that brings the thing to life.

Controller Hardware

The hardware for the controller is an Arduino Yun microcontroller. In case you’re not familiar with Arduino, according to their website: “Arduino is an open-source electronics platform based on easy-to-use hardware and software. It’s intended for anyone making interactive projects”. There are different Arduino-compatible boards available. The Yun is a special one because it combines an AVR ATmega 32u4 chip with a System on a Chip running Linux on the same board. Both are connected via a serial bus. A bridge library provides an API which enables developers to integrate the Linux SoC into Arduino projects. The SoC part of the board provides WLAN, Ethernet, a MicroSD card slot and USB connectivity. The SoC runs an OpenWrt Linux distribution with a webserver and some other services. The webserver can be manipulated from the Arduino code. This makes it very easy to build an HTTP API on your Arduino-powered device.

The board can provide about 50 mA of power. All servo motors combined can draw up to 3 Amps, so I needed something a little more powerful. I chose the LM123K voltage regulator, which gets its juice from a 12V brick-type power supply. The voltage regulator converts it into 5V. I put the LM123K on a piece of perf board and added headers to connect the servos. Male headers on the bottom of the board turn it into a very simple Arduino shield. It doesn’t look pretty but it works fine.

Voltage regulator on top of Arduino Yun

Controller Software

The roboticArm library

The arm is programmed by recording the exact position of all the joints step by step. With each step the time it takes to execute the step is stored as well. Putting the arm to work is just a matter of playing back the recorded sequences. My roboticArm library implements this logic and presents it as a single class. The object instantiated from the class represents the robotic arm and stores a pointer to a recorded or empty sequence. The object can be in recording mode or in playback mode. When in recording mode the current position in the sequence can be selected and the positions of the servos can be set. It’s also possible to add or delete steps from the sequence.

In playback mode the step number is automatically increased when the recorded duration for a step has expired. At the end of a sequence playback stops until the step counter is reset to 0.

My library uses the Arduino Servo library, so I did not have to develop any servo control code. But since this isn’t a dedicated Arduino or development blog I won’t go into too much detail about the actual code. If you’re interested, feel free to download the RoboticArm library.

Using the library

Below is an excerpt from my controller code. I did not actually test the code below and it is not very useful since it doesn’t include any code to program a sequence. But at least the servos should come to life and go to the positions configured in the default sequences.


The controller presents a REST API via the Linux part of the Yun. The beauty of the Yun is that you can actually code this API in the Arduino code. Unfortunately this also means there is no framework available that makes creating a REST API easier. And even if there was, I doubt it would fit in the ATmega’s RAM or flash. The first thing to tackle when creating a REST API is the routing: figuring out what to do when a certain URL is requested. You could read the request and use a huge switch/case construction to figure out where to go, but that uses up valuable program storage and RAM on a microcontroller. On top of that, I find it cumbersome to program that way. So I used a slightly different approach, with a URL structure that looks like this: http://RESThost/category/command/parameter. Based on that I wrote the code below:

To figure out which function to run, my code uses three arrays. The first is an array of strings (or technically a two-dimensional array of characters) which defines the available categories. The second is a three-dimensional array of characters which defines the commands available for each category. The last is a two-dimensional array of function pointers. When a request comes in, the code looks for the corresponding element in the first array. It then knows the index number of the category, so it starts looking for the element matching the command in the second array. When the command is found, the code knows the index numbers of both the category and the command, and uses them to find the pointer to the right function in the third array and runs that function. If the function needs any parameters, the rest of the URL is read and used as a parameter. For example: http://roboticarm/status/record will run the function statusRecordCommand(), which toggles the recording-mode flag of the roboticArm object.

Putting it all together

I made the whole project available for download. You can find it here: robotic_arm. Don’t forget to also download the RoboticArm library and put it somewhere the Arduino IDE can find it. The download also includes the AJAX GUI to program the arm. This was my first time building such a GUI, so it doesn’t look very nice and the code can probably be optimized. So feel free to modify the code and send me the results :).

Disclaimer: the code was only tested on an Arduino Yun. The library will run fine on any other Arduino as long as you configure the correct pin numbers. The REST API and AJAX GUI will not work on any other Arduino board currently available.

Meet the Automate-IT Robot

Time for a less serious Friday afternoon post. Meet our new friend: the Automate-IT Robot, a.k.a. r2vco2. It’s a robotic arm with a special feature: it can be controlled by vCenter Orchestrator.

Robotic Arm

When I was preparing for the yearly ITQ Technical Update session I was thinking of a way to show that vCenter Orchestrator can do much more than just automating vSphere tasks. What better way to show this than to control something physical from a tool that’s usually used in the virtual world? So I decided to try and build a robotic arm.

The Bones

The bones of the arm are 3D printed using an Ultimaker Original. Of course the type of printer isn’t that interesting. But what is interesting are the 3D models I used. Thanks to Thingiverse it only took me a few minutes to find a 3D-printable robotic arm. I also found some modifications to the original design which I ended up using. Here are the links to the different projects I used:

I recommend using the openSCAD version of the design. I initially used parts of the original design, but the gripper didn’t work for me, so I replaced the top arm and the gripper with the openSCAD parts. I also needed stronger servos for the rotation and the lower arm, so I used the parts modified for MG995 servos. So the only part I used from the original Thingiverse project is the middle part of the arm.


The Muscles

The robot’s muscles consist of 5 hobby servos, the same kind found in RC cars and airplanes. For the rotation and the lower arm I used the TowerPro MG995. You can find them on eBay for a few bucks. The other servos are the very small 9g servos. This means they weigh only 9 grams, which is a good thing considering they have to be lifted by the lower servos. Oh, and you’ll find them really cheap on eBay as well.

All the servos combined can draw a considerable amount of power: a little over 3 Amps if all motors are at full power. This cannot be supplied by a microcontroller or a USB port, so I built a simple power supply. It consists of just one voltage regulator (LM123K) and two tiny capacitors. The LM123 converts 12 volts down to 5 volts and can deliver up to 3 Amps. It doesn’t need cooling to do this and the thing turned out to be fool proof: I hooked it up the wrong way a few times but it still lives.

Voltage regulator on top of Arduino Yun

The Brain

An Arduino Yun is the physical part of the robot’s brain. The Yun is a development board that combines an AVR microcontroller and an SoC running OpenWrt in one package. It is equipped with LAN, WiFi and a micro SD card slot. The cool thing is that the Linux part of the board runs a webserver which can be used from the AVR part. Arduino provides a bridge library to aid communication between Linux and the AVR. Using this library it is relatively easy to build a web API on top of your microcontroller code. And that is exactly what I did.

The software for the brain is a rather simple sequence recorder/playback device: I can program positions into steps which are put into a sequence. Everything is controllable via a REST-like API. I used the API to create a very simple AJAX web interface, which is served by the Linux part of the Yun as well. But I didn’t stop there: I created a few vCO workflows which are able to control the robot. The vCO REST plug-in makes this very easy.

I’ll write more about the software and the vCO integration in a follow up blog.