Thursday, May 23, 2013

"Drosselkom" - All your IP networks are belong to us

For weeks now, the internet scene has been up in arms about Deutsche Telekom's plans to offer flat rates with a volume cap in the future. At first the outrage was framed as an injustice that you pay for a fast connection but are then not supposed to use it at full capacity; after Telekom explained that heavy users would get a paid upgrade option, the protest has shifted to the "net neutrality" that Telekom supposedly threatens. The bone of contention is that network operators can, and want to, use technical measures to treat data with different priorities. This, it is argued, violates "net neutrality", which the protesters treat as more or less a given right.

Regardless of the fact that the protests no longer have much to do with the original story, it is worth examining the matter soberly:

Why does Telekom want a managed network?

Für die "Netzgemeinde" ist das Internet ihr eigener, unregulierter Raum: Eine Sphäre, in der man beliebige Datenpakete zwischen Servern austauschen kann. Das kann auch mal nicht so gut funktionieren, dann dauert das Laden einer Webseite eben lange oder es ruckelt im Video oder der Ping ist schlecht, so dass man im Spiel ständig abgeschlachtet wird, aber so ist das Internet eben. Funktioniert meistens, und wenn es nicht funktioniert, dann ärgert man sich eben ein bisschen und guckt ein bisschen fern oder spielt lokal.

For the carriers, however, the infrastructure that this internet runs on will be much more than that in the future: traditionally, the network operators ran the actual carrier network alongside the internet, invisible to us users. That network is not based on the IP protocol world but on the ITU standards that evolved out of the telephone network. The carrier networks were the basis for the telephone and other data services that were used "back then" besides the internet. Nowadays, however, there are hardly any services left that are not based on the IP protocol family, and thanks to the mass adoption of the internet, infrastructure for the IP protocol world is much cheaper than classic carrier network technology. The old carriers feel this, of course, because they still have to maintain the old, expensive equipment, while IP-only providers can offer the complete spectrum of network services in demand at lower prices.

To stay competitive, the big traditional carriers are therefore converting their infrastructure completely to IP. This also means that higher-value services - virtual leased lines with fixed bit rate and latency, for example - are no longer realized in the overlay carrier network, but within the all-encompassing IP infrastructure on which the regular internet is offered as well. It simply makes sense to do it that way, and for that you need a managed network in which different traffic classes can be realized; this is what distinguishes a carrier-grade IP network from "the internet". In the internet whose freedom is currently being defended, all data is traditionally treated equally as best-effort traffic. It makes no difference whether a packet carries part of a website or a piece of audio from a phone call. When congestion occurs, the network becomes slower for all kinds of traffic: loading a website takes longer, and in an IP telephone call you can no longer understand your conversation partner, or the latency increases.

So when it is claimed that such a managed network conflicts with net neutrality, and regulation is demanded, the question is what goal that is supposed to serve. Should carriers not be allowed to offer higher-value services at all? Should they be forced to offer higher-value services only on lines that do not also carry best-effort traffic? What would that actually be good for?

It seems to me that the discussion is largely shaped by ignorance and an unwillingness to engage with the technology and the necessities of scaling. Everything is supposed to get ever faster and cost next to nothing, and if it does cost something, the general public should please pay for it. If this topic is now to be debated in the Bundestag, I cannot imagine anything sensible coming out of it. If even the elitist net community does not understand how data networks work and argues at the level of "my internet comes out of the phone socket", how are the internet printer-outers in the Bundestag supposed to make a sensible contribution?

Dear net community, please demand something precise and technically feasible, rather than demanding that technical service providers create the illusion of "freedom" for you for 30 euros a month. That is asking too much of them. For most people the discussion is incomprehensible anyway, since they have network access everywhere, all the time, and cheaply - and that is going to stay that way.

Friday, May 17, 2013

FPGA based tabletop Pengo console

As a kid in the 1980s, one of my favourite arcade games was Pengo, a maze action / puzzle game. I spent most of my pocket money playing it, and when I was introduced to dial-up bulletin board systems in the same era, I chose Pengo as my online nickname, which stuck for some 10 years.

The adaptations of the game for home computers never sparked any fire in me, though, mostly because of their inferior graphics. The arcade version only had a display resolution of 224 by 288 pixels, but sprites and colors made it much prettier than anything the Sinclair ZX Spectrum, Amstrad CPC 464 or Commodore 64 could do.

At one point, a friend gave me an original Pengo arcade game logic board, which I have been carrying around for years, always wanting to make it playable at some point. Due to the non-standard video output generated by the board, I never actually did it. Recently, I wanted to do something electronics-related in my free time, and I picked the Pengo project up again.

Arcade game emulation

Emulation is a common way nowadays to run ancient software. The older the software is, the harder it usually gets to find original hardware to run it on. Emulators provide a way to run original ancient software on current hardware, using software to impersonate the original hardware that the program expects to run on. Emulators exist for all kinds of systems and they are also useful when developing for embedded or other non-desktop environments.

Arcade game emulation is rather popular, with the open source MAME emulator being ubiquitous. MAME emulates all sorts of game hardware, from the 1970s up to rather recent systems. Versions for most popular desktop operating systems are available, and there is MAME4ALL for the Raspberry Pi, too. MAME emulates the hardware for a huge number of games, but for licensing reasons it is not distributed together with the ROM images for any of these games. ROM images are distributed through the familiar channels, though.

Another way to run older software is to use an FPGA to actually implement the original hardware. In the FPGA, almost arbitrary digital hardware can be created using configuration, so to run arcade software, one needs to recreate the original hardware in a hardware description language. Several arcade hardware systems have been implemented using FPGAs, and Pengo is one of them.

Compared to the software emulation approach that MAME uses, hardware emulation is less flexible, as typically only one game fits into the configuration memory of one of the cheap FPGA boards. Also, fewer hardware emulations are available. FPGA based emulation boots quicker and requires less power, though.

FPGA based Pengo build

As I wanted to play with FPGAs again, I opted to use a Papilio Pro FPGA board for which a port of Pengo exists. The port includes a video scan doubler so that a standard VGA output signal is generated. All I/O is done through an Arcade MegaWing, which conveniently makes all the I/O needed for emulating arcade hardware accessible on standard connectors.

Wanting to finish up the project once and for all, I got a cheap, used Dell 1504FP 15'' TFT monitor, a Zippyy arcade and fight stick and a bunch of arcade buttons from eBay, and talked my brother, who is a carpenter, into helping me build a proper desktop cabinet in his workshop.

To complete the hardware, I found myself a nice industrial 5V switched mode power supply to power the FPGA board and a small Kemo M031N audio amplifier module connected to a speaker.

The port of Pengo to the FPGA required some tweaking in the top level VHDL module to adapt it to the wiring that I used inside the cabinet - the first joystick port is used for the coin and start signals, the second port is connected to the joystick. I also tweaked the DIP switch settings to suit my taste, which is something that must be done in VHDL, as the Arcade MegaWing does not have switches that one could use for run-time reconfiguration.

More games!

At this point, my tabletop console only works for Pengo, and sure enough there are other games that one would want to play on it. Using a more flexible MAME based engine would get me there, and I experimentally swapped the FPGA board for a Raspberry Pi with MAME4ALL. While Pengo works fine on the Pi, the video display quality is much lower because MAME4ALL unconditionally uses antialiasing. This makes the resulting image blurry, no matter what physical resolution is chosen for the Pi. The FPGA based emulation uses simple scan doubling for video scaling which results in super-crisp display quality even though the TFT's physical resolution is not matched by the FPGA video output. So for now, I'm going to stick with the one game solution until MAME4ALL is fixed or I can find another way to run MAME.

Wednesday, March 20, 2013

Dealing with Excel files from Common Lisp - Using ABCL and Apache POI

In my day job, I mostly program in Common Lisp. Most of what I do is file and database work, which Common Lisp is pretty well suited for. Sometimes, though, I have to deal with data that comes in Excel files, and in the past that meant loading the files into Excel, exporting them into some plain text format and then working with those plain text files from Common Lisp.

While this works, it is a manual and error-prone process. Also, Excel's plain text export mechanism often mangles the data in undesired ways, which requires yet more manual steps (or VB scripting), which I'd rather avoid for processes that need to be automatic. Thus, I was on the lookout for a way to access the Excel files directly from a Common Lisp program.

Writing a new Excel file parser was out of the question - I have real customer needs to fulfill, and implementing a capable Excel file reader is a large infrastructure project. So I looked into using an existing Excel file reading library instead. There are numerous options, commercial and non-commercial, and I looked into one of the libraries written in C, but the requirement to more or less manually create FFI stubs and foreign structure layouts for a library that was itself not documented very well and did not look particularly approachable made me look for other options.

Apache POI

One of the more prominent open source libraries for accessing Microsoft Office files is Apache POI. It has been around for over 10 years and supports most MS Office formats, including the old OLE2 Excel format as well as the newer OpenXML format (which, despite using XML as the base format, is a horribly complex mess that I hope to never have to deal with directly). Apache POI is a Java library, so it can't directly be used from SBCL, which is the Common Lisp implementation that I normally use.

Armed Bear Common Lisp (ABCL)

In recent months, I have noticed that there was quite some activity around Armed Bear Common Lisp (ABCL). I had tried an earlier release of it, and while it somewhat worked, it seemed to have a fair number of restrictions that made it unsuitable for me at the time. In particular, ABCL lacked support for the Metaobject Protocol, which is something that I often use, either directly or through a library dependency. Also, the older version that I tried could not load the Postmodern library that we use to access our Postgres database, which was the final show stopper. But all that was before the recent 1.1.1 release of ABCL.

ABCL is hosted on the Java Virtual Machine (JVM), and maybe the biggest advantage of that is that access to other JVM-hosted code is straightforward and easy from Common Lisp programs running in ABCL. Thus, using Apache POI should be a snap. Also, as ABCL is becoming a reasonably complete implementation of the Common Lisp standard now, I had hopes to be able to use some of my existing infrastructure code in the program that dealt with Excel files.
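
To give a rough flavor of what this interop looks like, here is a tiny sketch using the JAVA package that ABCL provides (nothing POI-specific, just standard JDK classes):

;; A few ad-hoc calls, as one might type them at the ABCL REPL:
(java:jstatic "getProperty" "java.lang.System" "java.version") ; call a static method
(java:jcall "toUpperCase" "abcl")                              ; call an instance method
(java:jnew "java.util.Random")                                 ; instantiate a Java class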

To make it short: ABCL works great now. It took me very little time to translate the calls that I found in some Apache POI example program to Common Lisp, and I could also use all of the Common Lisp libraries that I needed for the task. There are some important libraries that don't work on ABCL yet (e.g. CXML-STP, CL+SSL), but I don't need these right now. And during development, ABCL behaves like any other reasonable Common Lisp implementation in that it supports SLIME.

The Excel reading was a snap and the read process is reasonably fast, but ABCL's startup times are a bit annoying. There currently is no way to do the equivalent of "saving the world" on ABCL, so one has to load all required software at startup time. We're using ASDF for that, and it seems that some of the slow startup time needs to be attributed to it. The Excel file reader will run as a batch job, so the startup times don't matter for our production uses, but testing the scripts was a tad tedious.
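
For the curious, the batch entry point is nothing special - here is a minimal sketch of a loader script, assuming a hypothetical ASDF system named excel-import with a run-export entry function (both names are placeholders for whatever your project uses):

;; load-and-run.lisp - run with something like: abcl --load load-and-run.lisp
(require :asdf)                     ; ABCL bundles ASDF
(asdf:load-system "excel-import")   ; this is where most of the startup time goes
(excel-import:run-export)           ; hypothetical entry point of the batch job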

To illustrate how easy accessing Excel files from Common Lisp is, here is some example code that dumps the first worksheet of an Excel file to the standard output in a Tab separated values format:

;; -*- Lisp -*-

(defpackage :export-tsv
  (:use :cl))

(in-package :export-tsv)

(defun init-classpath (&optional (poi-directory "~/poi-3.9/"))
  (let ((*default-pathname-defaults* poi-directory))
    (dolist (jar-pathname (or (directory "**/*.jar")
                              (error "no jars found in ~S - expected Apache POI binary ~
                                        installation there"
                                     (merge-pathnames poi-directory))))
      (java:add-to-classpath (namestring jar-pathname)))))

(defun process-file (pathname)
  (let* ((file-input-stream (java:jnew "java.io.FileInputStream"
                                       (namestring pathname)))
         (workbook (java:jstatic "create"
                                 "org.apache.poi.ss.usermodel.WorkbookFactory"
                                 file-input-stream))
         (sheet (java:jcall "getSheetAt" workbook 0))
         (formatter (java:jnew "org.apache.poi.ss.usermodel.DataFormatter" java:+true+))
         (total-row-count (java:jcall "getLastRowNum" sheet)))
    (dotimes (row-number total-row-count)
      (let* ((row (java:jcall "getRow" sheet row-number))
             (column-count (java:jcall "getLastCellNum" row)))
        (dotimes (column-number column-count)
          (unless (zerop column-number)
            (write-char #\Tab))
          (write-string (java:jcall "formatCellValue"
                                    formatter
                                    (java:jcall "getCell" row column-number))))
        (terpri)))
    (java:jcall "close" file-input-stream)))

Before the process-file function can be used, init-classpath must be called to add the Apache POI jars to the Java class path.
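
For example, assuming the Apache POI 3.9 binary distribution was unpacked into the home directory and the workbook is named test.xls (the file name is just a placeholder), a REPL session could look like this:

(in-package :export-tsv)
(init-classpath "~/poi-3.9/")  ; add all POI jars below that directory to the class path
(process-file "test.xls")      ; print the first sheet as tab-separated values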

As you can see, the program is rather short, and even if all comments are stripped, the Java version contains a lot more ceremony. No surprise here, maybe you wanted to have your prejudice confirmed :).

ABCL will now have a firm place in my toolkit. Big shouts go to the maintainers who did a great job lifting ABCL up to a level where it will be very useful to me.

Friday, February 15, 2013

Using Pogoplug as an OpenVPN server - with FreeBSD and VLAN

Recently, I needed to establish a VPN between my office, a colocated server and my home office. I generally prefer building networks on my own rather than buying cheap consumer-grade appliances, so here is what I've used.

Router

As the router, I use a Pogoplug E02. It is marketed as a "private cloud device" and basically is a Linux based NAS with Internet sharing built in. Hardware-wise, it is an ARM box with 256 MB RAM, four USB ports and one Gigabit Ethernet port in a funny looking plastic enclosure, with a built-in power supply. Kind of an industrialized Raspberry Pi, if you will. The E02 is from a previous generation and no longer marketed by Pogoplug, but it is available from a number of sources for prices below $50 - mine cost 35 euros apiece. I spent another 11 euros on a high-speed 4 GB USB flash stick and 10 euros on a Nokia CA-42 cable that I installed to access the serial console port (which I have not actually needed yet, but just in case).

Switch

In order to provide me with Ethernet ports which are part of the VPN, I use a Netgear GS108E 8-port managed Gigabit Ethernet switch. At some earlier point in my life, I swore that I would never buy a Netgear device again, but at a price tag of €37, I was willing to take a chance. Sure enough, the experience with this device is not totally pleasant: configuration requires a Windows computer (or an obscure Linux utility that I could not get to run), the Windows configuration program uses Adobe AIR (wth?), and it never just works. But, with some patience, it does.

FreeBSD

The first step to transform the Pogoplug into a useable computer is to replace the Busybox based Linux with a real operating system. This requires replacing the boot loader with a new u-boot version which can boot from USB media, a procedure that is very well documented on the platform support page for running Arch Linux on the Pogoplug E02.

As a long-time FreeBSD user, I was very happy to discover that my favorite OS would run on the device, too. Detailed and very accurate installation instructions, together with quite a few pre-built packages, are available on the excellent FreeBSD for Kirkwood page. I followed these instructions and got going very quickly. I had to build a new kernel in order to add the tun device driver and make some minor adjustments for the Pogoplug hardware, but again, the instructions on Nicole's page were accurate. I tried building a newer FreeBSD release with the same patches, but I did not succeed with that and thus stuck with FreeBSD-8.1.
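
For reference, the kernel change itself is small. As a rough sketch (the configuration file name POGOPLUG is just a placeholder for whatever name the Kirkwood instructions have you use), it boils down to adding the tun(4) driver to the kernel configuration and rebuilding:

# in the FreeBSD 8.1 source tree prepared per the Kirkwood instructions
echo 'device  tun' >> sys/arm/conf/POGOPLUG
# cross-building on an x86 host; omit TARGET_ARCH when building natively on ARM
make TARGET_ARCH=arm KERNCONF=POGOPLUG buildkernel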

Router configuration

In addition to the FreeBSD base installation, the openvpn and isc-dhcp41-server packages are required. I built them on the Pogoplug myself.
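
Assuming a ports tree is available on (or mounted into) the device, building them is the usual ports routine - roughly:

cd /usr/ports/security/openvpn && make install clean
cd /usr/ports/net/isc-dhcp41-server && make install clean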

The Pogoplug only has one Ethernet port, so in order to talk to two separate Ethernet segments, I use a tagged VLAN. On the untagged VLAN, the router is connected to my "normal" home network. The VPN is in a VLAN with tag 1, and the Ethernet switch is used to make some ports into members of the VPN and some members of my home network.

The relevant configuration section from my /etc/rc.conf looks like this:

ifconfig_mge0="Home-LAN-IP/24"
static_routes="vpn_router"
route_vpn_router="VPN-GW-Public-IP Home-Router-IP"
defaultrouter="VPN-GW-VPN-IP"
cloned_interfaces="vlan0"
ifconfig_vlan0="vlan 1 vlandev mge0 Home-VPN-IP"
gateway_enable="YES"

Some explanation is in order: Home-LAN-IP is the (static) IP address of the router in my home LAN. DHCP can be used as well, but I prefer having my infrastructure devices on fixed addresses. A static route named "vpn_router" is established so that packets to the VPN router, on IP address VPN-GW-Public-IP, are always routed through my home LAN gateway with IP address Home-Router-IP. For all other packets, a default route to the VPN gateway with the address VPN-GW-VPN-IP is established. A VLAN interface named "vlan0" is created and the address Home-VPN-IP is assigned to that virtual interface. Finally, IP packet forwarding is enabled.

On the router, OpenVPN is started from /etc/rc.conf:

openvpn_enable="YES"
openvpn_configfile="/usr/local/etc/openvpn/client.conf"

On the VLAN segment, a DHCP server is run by way of these lines in /etc/rc.conf:

dhcpd_enable="YES"
dhcpd_flags="-q"
dhcpd_conf="/usr/local/etc/dhcpd.conf"
dhcpd_ifaces="vlan0"
dhcpd_withumask="022"

DHCP server configuration

The DHCP server configuration for the vlan0 segment is placed in /usr/local/etc/dhcpd.conf and reproduced for completeness:

option domain-name-servers VPN-GW-VPN-IP;

default-lease-time 600;
max-lease-time 7200;

log-facility local7;

subnet Home-VPN-LAN netmask 255.255.255.0 {
  range 192.168.21.100 192.168.21.120;
  option routers VPN-GW-VPN-IP;
}

Obviously, the range needs to be adapted to the network in use.

OpenVPN configuration

OpenVPN uses SSL certificates to authenticate clients and servers. Creating and maintaining a certificate authority using the standard OpenSSL command line tools is too cumbersome for me. I am using the excellent and free XCA tool, which makes managing a private certificate authority (CA) rather easy. I used XCA to create a CA, a server certificate for my central VPN router and a client certificate for my Pogoplug. Certificates and keys must be exported to PEM files for OpenVPN to use them.

I use a fairly standard LAN-to-LAN configuration for OpenVPN which is based on the server.conf and client.conf example configuration files which are located in /usr/local/share/examples/openvpn/sample-config-files. The client configuration used on the Pogoplug looks like this:

client
dev tun
proto udp
remote VPN-GW-Public-IP 1194
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert client.crt
key client.key
ns-cert-type server
comp-lzo
verb 3

Again, VPN-GW-Public-IP is the public IP address of the central VPN router. The client certificate and key need to be placed in the client.crt and client.key files in /usr/local/etc/openvpn/ on the Pogoplug. The root CA certificate that you've created needs to be placed in the ca.crt file in the same directory.

On the server, the OpenVPN configuration is slightly more complex. Here is the server.conf file:

port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh1024.pem
server VPN-GW-VPN-IP 255.255.255.0
ifconfig-pool-persist ipp.txt

client-to-client

push "route 0.0.0.0 0.0.0.0"
push "route Home-VPN-LAN 255.255.255.0"

client-config-dir ccd
route Home-VPN-IP 255.255.255.0

keepalive 10 120
comp-lzo
persist-key
persist-tun
status openvpn-status.log
verb 3

Again, Home-VPN-IP is the IP address of the vlan0 interface on the Pogoplug. The Home-VPN-LAN is the network address of that interface, and the "push" directive is there to push a route to that network to other clients connecting, which may choose to ignore the default route. On the server, a ccd subdirectory needs to be created in /usr/local/etc/openvpn/ to contain client-specific configuration options. Each file needs to have the name that is put into the "Common Name" attribute of the certificate used by the client. For example, I have a file named strelitzer in that directory which is the Common Name of the certificate of my Pogoplug. That file contains this:

iroute 192.168.21.0 255.255.255.0

This directive announces my VPN LAN at home to the OpenVPN router.

Switch configuration

The GS108E switch must be configured for 802.1Q VLANs in "Advanced" mode. In that mode, it is possible to individually set up the VLAN membership for each port. The default VLAN for untagged packets can also be configured on a per-port basis.

Here is how I configured my switch so that ports 1-4 are members of VLAN 1 (VPN) and ports 4-8 are members of VLAN 2 (Home Network):

Ports 1-3 use VLAN 1 for untagged packets. Port 4 uses VLAN 1 with tagged packets and VLAN 2 with untagged packets. Ports 5-8 use VLAN 2 with untagged packets.


Finally, I configured the PVID of each port to match the untagged VLAN assignments above.

Done

That is basically all there is to it. I'm sure I've left out some information that might be useful. Send me email if you have trouble with any of this. It should be possible to adapt this setup to FreeBSD running on other hardware, like the Raspberry Pi. Let me know if you get something like that to run, too!