PoempelFox Blog

[..] [RSS Feed]

Sun, 20. Aug 2017

SHA 2017 Created: 20.08.2017 09:05
Last modified: 27.08.2017 10:30
I recently visited SHA 2017 ("Still Hacking Anyway"), a hacker camp in the Netherlands. Others have posted articles about how great it was; I will instead focus on what I do best: ranting about what I did not like about it.

The coin system

The bar and the food court insisted on using a system of plastic coins. You could only pay with these plastic coins there, and you could only get them from machines in multiples of 10 coins == 10 Euro, paid either in cash or by credit/debit card. You could not change the coins back, so any leftover coins at the end of the event became worthless plastic waste, and they kept your money.
This felt like a huge ripoff, and apparently it was intended as such. The only reasoning Orga could provide for this crap was "safety of bar personnel" (How?! By making sure they don't cut themselves on these sharp Euro coins?) and "ensuring the food vendors pay the share of their revenue they owe". Wow, those are great reasons to rip off and piss off 3650 people!
To add insult to injury, the machines handing out these coins were badly designed - they ALWAYS spat out the coins in a way that half of them landed on the ground in front of the machine, and you had to pick them up from the dirt. They tried to fix it by "improving" the output chamber with duct tape, but to no avail.
It's not that I could not afford the loss of 9.50 Euro through this ripoff; it's the whole scheme that makes me angry. I would expect such things at a commercial festival, not at a not-for-profit hacker camp. In fact, it pisses me off so badly that, should I visit the successor camp in 2021, I will not buy a supporter ticket again (you could voluntarily pay more for your ticket so that others who could not afford the regular price could get cheaper ones). So congratulations - 9.50 Euro earned, 100 Euro lost.
And last but not least, I wonder if some people used the plentifully available 3D printers to fight ripoff with forgery.

The food court

...was inadequate. Not because the food was bad - most of it was actually pretty good - but because it lacked both choice and capacity. After the big announcements on the SHA blog before the event about how great the food court would be, this was especially disappointing.
There essentially was:
  • "The Holy Crepe", serving pancakes
  • a stand selling meat burgers. Only one sort, not customizable.
  • "just like your mom", selling vegan stuff imitating meat.
  • a stand selling fries
  • a stand selling breakfast (scrambled eggs and bacon), then in the afternoon switching to some pasta
  • a stand selling a wok dish
  • a stand selling ice cream
  • a stand selling coffee
Now look at that list and imagine trying to get sated from that for four days. It's not possible. Half of it is just sweets, and the other half isn't really a full meal either.
And you would also have to stand in line for a very long time, because there was not nearly enough capacity, at least in the first two days. I think by day 3 most visitors had given up on the food court and acquired other food sources, because queues were tolerable then.
They also opened too late: on day 1 at around 14:00, when the place was crawling with people who had mostly just arrived after a long trip and were very hungry, only the "Holy Crepe" was open. The poor girl manning (haha) the stand was completely overrun. While I was standing in line (for what felt like an hour), she started telephoning. Now, I don't really understand Dutch, but it sounded a bit as if she was desperately trying to order supplies. And indeed, when I came by later, the stand was closed and a handwritten sign said they would reopen in an hour (spoiler alert: they reopened an hour later than announced).
If you like conspiracy theories, the inadequacy of the food court might at least partially have been on purpose: Because the angels (the voluntary helpers) would get really great food as a reward. "So you don't want to starve? No problem, sign up voluntarily here..."

The terrain

I don't even know where to start here.
Perhaps by explaining why the Orga liked it so much: it had a lot of infrastructure already in place. There already were a few toilets and showers, and water supply and drainage for the additional toilets and showers were already there; they only had to place the containers and connect them up. Even some fibre cables between different parts of the site were already in place, and there was also a fibre to the outside world that gave the camp a 100 Gigabit Internet uplink. I'll also gladly admit that it was spacious, and that the fact that it had its own harbour was pretty cool.
Sadly, this is where the list of what was good about the terrain ends, and the extremely long list of what wasn't starts.
The biggest problem was that the terrain is just very bad for camping.
Large parts of the ground consist of sea clay. Water does not drain there at all, so after a rain shower the water stays for a very long time; even a day later there were still puddles in many parts of the terrain. When people said they would need rubber boots, at first I thought they were joking. They were not. Rubber boots really were needed. As it rains pretty much every day in the Netherlands, and it had rained constantly for the week before SHA, roughly 50% of the terrain was actually unusable for camping - including the biggest fields, where pretty much all the normal villages had been placed. They worked around it a bit by moving the fire lanes into the worst-affected parts, but the rest was still more of a mud-fest than I would have liked.
The fields on the sea side of the dyke did not have this problem: they have very sandy ground and dry out quickly after rain. They were also well protected from the wind (by the dyke). But only half of them (Hopper and Snowden) could be used, because the others (Engelbart and Zuse) had not been provided with power or network. There also would not have been enough toilets and showers on that side of the dyke; they were already in short supply with the little usage those fields got. Sadly, Orga did not change plans and put these fields to good use after it had turned out how bad the other parts of the terrain were. They would probably have been used if there had been basic infrastructure - Snowden field was totally crowded, and Zuse right next to it was just empty.
The fields worst affected by flooding were the ones designated for the "Family Village" - not a normal village, but a huge area for families with little kids, providing some entertainment for them. Their planned fields (Babbage, Boole, Clarke) weren't even mud anymore, but more of a swimming pool. So Orga moved the Family Village to different fields, namely Rhodes and Wilson. But there was a problem with that: each field had been assigned a (maximum) noise level, with the loudest fields in the southwest (Torvalds, Turing, Wozniak) and levels decreasing with distance from there. The villages had been placed on the fields depending on how much noise they wanted to make. The Family Village had originally been placed in the very north, in a very quiet area - but due to the move, it ended up right next to the maximum-noise area, separated only by a few trees. The predictable result? Nonstop bitching from Family Village about the noise from the designated noisy area. So the noisy area was ordered to not be noisy anymore. Well, the wiser head gives in.
But the terrain was problematic even when it was dry: the paved roads were covered in a lot of dirt (probably the dried mud), and since it was also very windy, that stuff got blown into your face multiple times a day. It wasn't very pleasant.
Due to the bad ground, the terrain was also not suitable for campervans. They were only allowed in a few places next to the paved roads, and there were fewer than 100 camper spots in total. Those few spots were essentially reserved for families and handicapped people. That meant that the majority of people who wanted to come with a camper could not, and those who managed to get a spot were placed far away from their friends. It also meant that families who came with a camper could not join Family Village.

The parking

There was only parking for a few cars on site, so for SHA a whole parking lot had to be built in a nearby field. But as the ground there was just as bad as on the camp, this was a complicated effort, including multiple truckloads of metal plates to build "roads" from, and bridges over the ditches around the field. As you can imagine, this was very expensive, and a parking ticket cost a whopping 42 Euros - and that is said to have just barely covered the cost. For that price you got parking in a muddy field from which you had to walk more than 1 km over an equally muddy and (at night) poorly lit path to the actual event. At least there sometimes was a luggage shuttle, but it only drove to and from one end of the parking lot, which meant you still had to carry your stuff a few hundred meters to/from the pickup/dropoff point; and as the name implies, it was for luggage, not for people, so while your luggage was shuttled, you had to walk - and hope to find your luggage at the other end of the trip.

The restrictions

There was a huge number of restrictions imposed, some by the owner of the site, some by the municipality as a precondition for the permit. These included, but were not limited to:
  • No cars anywhere but on the roads, no campervans, not even pop-up trailers
  • No coal grills
  • No amplified music or loud noise between 0:00 and 8:00
  • A ridiculous amount of fire protection measures, because as we all know, one of the biggest dangers when camping in the Netherlands in a puddle of mud after a week of rain is fire spreading from tent to tent after somebody had a little mishap with their camping cooker
  • No alcoholic beverages containing more than a certain per mille of alcohol. No, I don't remember what that number was, but if you wondered why your overpriced "Tschunk" from the bar tasted so thin - this is why.
  • No glass bottles. An exception was made for the mate bottles.

The teardown

That one was really schizophrenic: on the one hand, Orga wanted everyone off the terrain as fast as possible; on the other hand, they did everything to slow down the process of leaving.
The official end of the event was at 16:00 on August 8th, but cars were not allowed onto the field before 18:00. They did not adapt these plans when it became clear that there would be a rain shower of epic proportions around 17:00, i.e. people were not allowed to get their stuff out while it was still dry. And the shower really was bad - all my clothes and even the contents of my backpack were completely soaked just from walking to my car.
And even after 18:00 you could not just drive onto the terrain, park near your stuff, and load. Only a limited number of cars were allowed on the terrain at the same time, and each of them had to be accompanied by an angel riding a bicycle in front of it. Kudos to those angels - as the site was large, they had to cycle at least 2 km per car, so they were probably dead after their shift. In order to guarantee some throughput, angels actually checked that people had already packed their stuff before letting them onto the terrain by car, but that naturally tied up further resources.
As you can imagine, there was "a little" queue of cars as villages tried to load their stuff on that evening and the next morning.

Even without the delays during loading, those who had a longer trip ahead could not realistically start it the same day; they had to stay another night. As an example, my way home is about an 8 hour drive - if there is no traffic - and I sure as hell won't start that trip at 21:00 while already tired. In a way, Orga seemed to be aware of that, because they requested that the site be vacated by 12:00 the next day. That would have been pretty reasonable.
However, according to reports "on the internet" (I did not experience this first-hand as I spent the night in a hotel), they then started to turn off the power grid and locked most of the toilets that same afternoon. So people spent the last night in the dark and had to take long hikes on unlit paths to go to the toilet. If you want to motivate people to leave, turn off the network, but turning off power and light at night is close to criminal assault.

The factory firmware for the badge

Every visitor got a nice electronic badge with an e-paper display and an ESP chip, so the thing had WiFi. The problem was the firmware it shipped with. When you turned it on for the first time, it automatically started a wizard to set your (nick)name, because after all, displaying that is the main purpose of a badge. But there was no explanation of which key did what, and if you guessed wrong, you would set your name to some nonsense or (most likely) to the empty string, in which case it displayed some default (the name of one of the developers). The problem was that you could not ever call that wizard again or change that nick before successfully connecting to the SHA WiFi and downloading and installing a firmware update. In other words, your name badge was unable to perform its most basic task - displaying your name - before "phoning home" to the cloud. I found that quite remarkable: I would have expected this kind of braindead reliance on the cloud from a gadget made by Google or Apple, but not from something at a hacker camp.

The weather

...mostly during buildup and teardown. There was a storm on Day 0, teaching a few tents how to fly; and both buildup and teardown were accompanied by heavy rain.
The weather in between was actually pretty nice: some sun (enough to get sunburnt if you weren't careful), some clouds, an occasional small rain shower (well, it's the Netherlands), not too hot.
However, it became extremely cold immediately after sunset. It has been a while since I last wished, in August, that I had my winter coat with me and not just my between-seasons jacket.


That's all I can think of for now, but I might add more later.
So was SHA a disaster? No, it wasn't. But it also wasn't particularly good.
I have also posted a few pictures here.

I'm a bit surprised that you didn't mention the *village* and their Maibaum. And you also failed to mention the guy that ruined your e-bike. So there's some room for extra POEMPEL.
knilch 24.08.2017 11:49

Do you have a spam issue on this website
Kristy 28.10.2018 23:03

Do you see any spam on this website?
PoempelFox 25.11.2018 20:19


Sat, 13. Aug 2016

Why 2.4 GHz WLAN is no fun in the city Created: 13.08.2016 14:55
I've long since given up on using 2.4 GHz wireless at home, because just 3 meters from the access point there was packet loss, and it really wasn't fun to use. That wasn't surprising: I live in a big city, and since everybody has a WiFi network at home these days, I can receive about 20 foreign WiFi networks just sitting in my living room. And of course, because 2.4 GHz effectively has only 3 usable non-overlapping channels (1, 6, 11), they very much interfere with and disturb each other.

Recently, I had a spare old WiFi router with OpenWRT on it lying around, and got the idea from IRC to use this for measuring just how full the 2.4 GHz band is.
On an OpenWRT router, the command iw dev devicename survey dump will display something like:
Survey data from wlan0
        frequency:                      2412 MHz
        noise:                          -95 dBm
        channel active time:            22017 ms
        channel busy time:              9749 ms
        channel receive time:           9457 ms
        channel transmit time:          0 ms
It will show that for all channels. The active time is the amount of time the device was tuned to that channel, and the busy time is the amount of time the device thought the channel to be in use by others (since it doesn't send anything itself). I wrote a simple script to change channels every 22 seconds so that stats for all relevant channels were available. From the collected data, I drew graphs. Here is the result:
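The channel-hopping part of such a script is trivial; most of the work is parsing the survey output. Here is a minimal sketch in Python of how that parsing could look (this is my own illustration, not the script actually used on the router; the function names are made up):

```python
import re

def parse_survey(output):
    """Parse `iw dev <dev> survey dump` output into a list of dicts,
    one per channel, with the frequency (MHz) and time counters (ms)."""
    channels = []
    cur = None
    for line in output.splitlines():
        line = line.strip()
        m = re.match(r'frequency:\s+(\d+) MHz', line)
        if m:
            cur = {'frequency': int(m.group(1))}
            channels.append(cur)
            continue
        m = re.match(r'channel (\w+) time:\s+(\d+) ms', line)
        if m and cur is not None:
            # stores e.g. cur['active'] = 22017, cur['busy'] = 9749
            cur[m.group(1)] = int(m.group(2))
    return channels

def busy_ratio(ch):
    """Fraction of observed airtime during which the channel was busy."""
    active = ch.get('active', 0)
    return ch.get('busy', 0) / active if active else 0.0
```

A wrapper would then run `iw dev wlan0 survey dump` via subprocess, call `parse_survey` on its output, switch to the next channel, and sleep ~22 seconds before repeating.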

Now as you can see, on any average day just the noise from foreign WiFis blocks all available 2.4 GHz channels for a huge percentage of the time. There is hardly any airtime left for sending data.
You can also see why I switched to using only 5 GHz WiFi some time ago: channel 040 is the 5 GHz channel I use, and no one in the neighbourhood besides me uses it. There is only one other 5 GHz WLAN, on channel 36, that I can occasionally receive in my apartment; all the other channels are unused.

Mon, 11. Jul 2016

Foxtemp 2016 Created: 11.07.2016 20:34
A few years ago, I constructed my DS1820toUSB device for attaching a temperature sensor to USB. This year, it was time for a new generation: The Foxtemp2016. The main trigger for that was that I wanted to measure humidity in addition to the temperature. I've also since experimented with the home automation software FHEM. While there are a lot of sensors for commercial weather stations that can be received by FHEM, it's always a bit of a gamble: Nobody knows whether the manufacturer perhaps made minor changes to the sensor that break its compatibility with FHEM, and even if that is not the case, those things are usually extremely poorly documented, so you don't know what accuracy they have. You'll also have to buy what's currently available, so unless you buy all your sensors at the same time, you'll usually end up with a whole zoo of different sensors.

The new features of Foxtemp2016 compared to DS1820toUSB are:
  • Now wireless. Sadly, that means it needs to be powered by batteries, and you'll need a receiver for the wireless data.
  • uses a SHT31 as a sensor, not a DS1820. As a result of this, it can now also measure humidity - the datasheet of the SHT31 claims it can do this at a typical accuracy of ±2% RH in the relevant ranges. It also claims a typical accuracy of ±0.3°C for the temperature measurement, so the accuracy is better than the ±0.5°C the DS1820 promises.

Instead of constructing everything from scratch, this time I used premade parts: the microcontroller board that is the base for this is a JeeNode micro v3, and the sensor is on an Adafruit SHT31 breakout board. However, both are open hardware designs, so should they no longer be made, you could still build them yourself. Sadly, due to those premade parts this isn't nearly as cheap as a DS1820toUSB, but that really wasn't a top priority for me.

The data from the Foxtemp2016 devices can be received with the same JeeLink (V3) you'd probably use for receiving cheap commercial weather station sensors. You essentially just need to enable something when compiling the firmware for use with FHEM. A small module for making FHEM understand the data received is also included.

You can find the build instructions and more pictures over in the Foxtemp2016 gitlab repository.

Sun, 21. Feb 2016

Download redirector for Debian CD and DVD images Created: 21.02.2016 10:50

The problem

If you ever downloaded Debian CD and/or DVD images from the internet via HTTP/FTP (and not via Torrent), you'll have come across the 'Debian CD/DVD images via HTTP/FTP' website. So did I on multiple occasions, and when I last visited it to download a Jessie image after that was released, I noticed that it still was as horrible as always: There are links to downloads for each architecture, but they all point to the main site for CD/DVD images in Sweden. There is also a list of all known mirrors hidden away at the bottom of the page, but even if you happen to find that, it is next to useless:
  • It links to mirrors that have been down for weeks or months
  • It links to mirrors that do not have the current version (yet)
  • The links point to the root of each mirror, not to the directory containing the image for the current version, meaning you'll have to find your way there, which sometimes isn't easy. And of course, if you selected a mirror that does not have the file you wanted, you'll have to start over with another mirror... In some cases it may not even be obvious whether the mirror really does not have the file, or you just got lost in the maze of undescribed directories.
As a result, nobody uses the mirrors, everyone just uses the main site in Sweden. I can actually confirm that, as I run a listed public mirror of debian-cd at work, and even during release-time it is one of our least traffic-generating mirrors with only a few gigabytes per day - while the site in Sweden pushes out 10+ GBit. While they don't seem to mind the traffic surge that generates, this is still less than ideal. Especially with huge files like 4 GB DVD ISO-images, it does make a huge difference if you download them from another continent, or at least ten times as fast from a mirror in the same country.

The solution

Ideally, the download-page would send you to a mirror that is local to you and has the file that you want automatically. This would better spread the load and also improve the user experience through faster downloads.
This is actually not rocket science. Software that does this is available, and in fact is already in use by Debian: httpredir.debian.org does exactly that for the archive, i.e. the repositories that apt uses. Unfortunately, the software used for httpredir can only handle package repositories, because it needs the index files in those repositories. It would be hard and messy to implement support for redirecting debian-cd in the software used for httpredir. There is, however, other existing software of this kind that would work very well for debian-cd. The two most popular solutions are Mirrorbrain and Mirrorbits. Both boast a very similar feature set, as Mirrorbits was written by a VLC developer to replace Mirrorbrain, which had allegedly become too slow. Even the command-line interfaces of the two are almost identical.
The way this works is that Mirrorbrain knows about all the public mirrors of debian-cd and their locations. It checks every few minutes whether they are up, and it also scans their contents at regular intervals, so that it knows which mirrors have which files. When a client asks the Mirrorbrain server for a file, the server looks up the client in its GeoIP and ASN databases; as a result, it usually knows which country and which ISP the client is on. It then tries to find a good mirror for the client, automatically ignoring mirrors that are down or do not have the requested file (because they are partial mirrors, or out of sync). If there are one or more mirrors on the same subnet or the same ASN (in simplified terms, the same ISP), it selects from those. If there aren't, it looks for mirrors in the same country and selects a random one of those. If no mirrors are in the same country, the search is broadened to the same continent. Only if that search is also unsuccessful, or if the initial GeoIP lookup failed to return a country for the client, is a random server from anywhere in the world selected. The client then gets an HTTP 302 response redirecting it to the selected server, which sends the actual file.
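The selection cascade described above can be sketched roughly like this (a simplified illustration, not Mirrorbrain's actual code; all field names are made up, mirrors are assumed to be pre-filtered to those that are up and have the file, and the prio-based weighting is omitted):

```python
import random

def pick_mirror(client, mirrors):
    """Sketch of the cascade: same subnet > same ASN > same country >
    same continent > worldwide fallback. `client` and each mirror are
    dicts with 'subnet', 'asn', 'country', 'continent' keys; missing
    client info (e.g. a failed GeoIP lookup) skips that stage."""
    for key in ('subnet', 'asn', 'country', 'continent'):
        if client.get(key) is None:
            continue  # no info for this stage, move to a broader one
        candidates = [m for m in mirrors if m.get(key) == client[key]]
        if candidates:
            return random.choice(candidates)
    # nothing matched (or no client info at all): pick globally
    return random.choice(mirrors) if mirrors else None
```

The real implementation additionally weights the random choice within a group by each mirror's "prio" value.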

The catch

So why is this not live yet? Well, in a way it is, see below.
But for this to run as an official debian service on official debian infrastructure, there is one major prerequisite: The software needs to be in the standard Debian package repositories. And unfortunately, so far neither Mirrorbrain nor Mirrorbits are.
There actually are Debian packages available for Mirrorbrain, but not in the standard Debian repositories. They would probably need some work to make them compliant with Debian packaging policies. Mirrorbrain consists of multiple packages, among them three small Apache-modules.
For Mirrorbits there are no packages available, and I imagine packaging it will not be fun, because it's the typical "let's download 1000 random libraries in random, mostly beta versions, because we have to use the absolutely latest features, from random sites around the internet" kind of software. I'll rant about how much I loathe that another time. On the plus side, Mirrorbits packs everything, from the builtin webserver to the command line utilities, into just one binary, so there would only be one mirrorbits package. It is also the newer project and under active development.
So far, all attempts to find a Debian Developer to create and maintain packages have been unsuccessful. If you are a Debian Developer and willing to package Mirrorbrain or Mirrorbits - please do. It doesn't really matter which one, both will provide the required featureset, and both have their advantages and disadvantages.

Running instance

Originally because I wanted to toy around a bit with Mirrorbrain, I actually did set up a working mirrorbrain-instance for debian-cd. If you want to give it a try, head over to
This is a fully functional mirror redirector for debian-cd downloads. It knows all mirrors of debian-cd contained in the official mirrorlist, and scans those that it can scan. If this were running on official DSA infrastructure and not on a private server run by me, all download-links on the webpage could be pointed at this tomorrow, and thus be automatically redirected.
It could still use some fine-tuning with regards to mirror selection, though: for example, it is possible to set priorities for mirrors within a country depending on how much bandwidth they have available, which will make Mirrorbrain redirect more or fewer clients there; or it might sometimes be a good idea to override the server selection for countries that do not have any mirrors of their own, because Mirrorbrain's automatic choice based on geographic distance sometimes makes less than ideal decisions. That, however, can only be improved with lots of feedback from lots of users. Feel free to send me feedback at the address given on the website. It should be possible to import all the tuning made on my demo instance into a production version on official Debian infrastructure, which will hopefully happen one day.
For any feedback, it is imperative to know what Mirrorbrain thinks about you: where you're coming from, which mirrors it considered, and why. Luckily, that info is easy to get: just append ?mirrorlist to the download URL, and instead of redirecting you, Mirrorbrain will show lots of info helpful for debugging. Here is an example output from a case where mirror selection worked perfectly:

As you can see, it recognized that there is a mirror on the very same subnet (because the university happens to run a Debian mirror), and since it is up and has the requested file, the client would be redirected there. If that mirror were down, Mirrorbrain would randomly select (with the "prio" values influencing the likelihood of a mirror being selected) a mirror from the next group, "same AS", and so on if none was available there.
The Mirrorbrain version running is mostly the current one from the Mirrorbrain and mod_asn GitHub repositories, with a few minor fixes for IPv6 support applied. The latest released version does not support IPv6 yet; the git version does, but still had a few bugs in that area - I've reported them upstream and sent pull requests for some of them. They'll hopefully be fixed soon. Apart from that, there really isn't anything special about this installation. Most of the work in setting this up was feeding it the mirror list. That was because the official mirrors.masterlist contained A LOT of mirrors, many of them dead, with wrong paths, not answering to rsync or ftp even though the masterlist said they would, and so on. That meant that even though I had a script that would compare the list of mirrors in Mirrorbrain with the one in the masterlist and spit out the commands needed to bring them into sync, I had to manually check every second entry, because the masterlist was just wrong. It took me some hours to get all of the roughly 130 mirrors into Mirrorbrain. In some rare cases, mirrors are actually up but cannot be used in Mirrorbrain. That happens because Mirrorbrain needs to scan what files a mirror has, and for that it needs a way to get a directory listing. If a mirror only offers HTTP (no FTP or rsync) and prints its directory listings in a format that misses vital information or is just too messed up for Mirrorbrain to parse, that mirror cannot be used, even though it might be perfectly fine for downloading. However, that applies to less than 5% of all mirrors, so not many are lost due to that.
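The comparison script mentioned above essentially boils down to a set difference between the two mirror lists. A minimal sketch (my own illustration; the command templates are placeholders, not Mirrorbrain's exact CLI syntax):

```python
def sync_commands(masterlist, mirrorbrain,
                  add_cmd='mb new {}', del_cmd='mb delete {}'):
    """Given two collections of mirror identifiers, emit the commands
    needed to bring the Mirrorbrain database in sync with the
    masterlist. The add/delete templates are illustrative defaults."""
    to_add = sorted(set(masterlist) - set(mirrorbrain))   # missing in Mirrorbrain
    to_del = sorted(set(mirrorbrain) - set(masterlist))   # no longer in masterlist
    return [add_cmd.format(m) for m in to_add] + \
           [del_cmd.format(m) for m in to_del]
```

The hard part, as described above, was not this diff but verifying each emitted command by hand, because the masterlist itself was unreliable.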
If you want to see a list of mirrors currently known to the Mirrorbrain instance, have a look at http://debian-cd.debian.net/mirrorlist.html. It also shows a nice map of the mirror locations.
My biggest hope in running this demo instance is that seeing how well this works in practice will motivate someone to take on the packaging. Let's see if that works out. Until then, spread the word that this option exists. I plan to keep it running for the foreseeable future.

Fri, 12. Feb 2016

Heizungsspass Created: 12.02.2016 20:44
Last modified: 20.02.2016 18:00
Seit über einem Monat habe ich Probleme mit meiner Heizung. Und obwohl seit mehreren Wochen eigentlich bekannt ist, wo das Problem liegt, schafft es das "Kompetenzteam" aus GBW Reparaturservice und beauftragter Heizungsfirma (Herzog Sanitär aus Allersberg) nicht, das Problem zeitnah und dauerhaft zu beheben. Aber der Reihe nach.
  • The problems already announced themselves during the Christmas holidays: the two radiators in the living room occasionally cut out. They no longer get as hot as normal, and sometimes only lukewarm - which of course produces exactly zero heating effect. The thermostats (thanks to digital thermostats I can track this well) keep the valve open at practically 100% all the time just to maintain the temperature. When they get warm seems to depend on the time of day: at night they heat properly, during the day mostly not.
  • Fri., 22.01.: Since the radiators in the living room have not been getting warm at all for two days, but are ice-cold, I call the "GBW repair service". GBW has outsourced almost everything the caretakers used to handle to a company in Munich. Consequently, when you have a problem in your flat in Erlangen, you end up on the phone with a call center 200 km away in Munich, which of course shines with its outstanding knowledge of the local conditions. And of course you talk to a different employee on every call - the usual call-center hell. First they offer tremendously helpful tips like "bleed the radiators" - which in principle is not wrong, except: I live on the ground floor of a multi-storey building, where air in the radiators is a rather rare problem. And when not even the supply pipes to the radiators get warm, but conversely, while they do work, the radiators get absolutely evenly hot from front to back, then this is not a problem that bleeding my radiators will solve.
    Long story short: since the problem is not urgent, I get an appointment for Thu., 28.01., when someone is supposed to come by and take a look.
    In the evening I try to bleed the radiators anyway, but it is as expected: there is no trace of air in them, a gush of water comes out immediately (even from the radiators that are currently ice-cold).
  • Sun., 24.01.: In the late afternoon, all 5 radiators in the flat are suddenly completely cold.
  • Mon., 25.01.: When I get up in the morning, the GBW caretaker's car is parked outside, and the radiator in the kitchen works again. I assume the caretaker is repairing the heating, but that turns out to be a mistake - he was only putting up notices. Two hours later, the radiator in the kitchen is ice-cold again.
  • Tue., 26.01.: In the morning I call the repair service again and report that all radiators have now failed and that it is getting unpleasantly cold in my flat. The first employee insists that I first ask the other residents of the building whether their heating works, because if theirs works, it is not an emergency. I cannot quite agree with that - I consider a completely unheatable flat, now at 17 degrees and falling, emergency enough - but I try anyway, with little success. The heating works for the only lady who is at home - but she is obviously connected to a different heating circuit. On my second call the problem is recorded anyway, and I am asked to stay at home so that I am there when the emergency heating service arrives. Which I do.
    When by 17:30 nobody has shown up or even been in touch, I call the repair service again. I am told that the problem has been passed on to the Herzog company, and that they run a 24h emergency service, so the technician might not show up until 8 pm. Of course he doesn't - neither that day nor the next. But one gladly sacrifices a day of holiday to be messed around...
    At first I also think I misheard the name of the heating company, because when googling it, the nearest company with that name is in Allersberg, a good 50 km away. Over the following days it turns out: no, I did hear the name correctly. They really did hire a company with a 50 km journey. Presumably one should be glad they did not hire one from Munich...
  • Thu., 28.01.: Someone from Herzog finally gets in touch. So it is more of a 3x24h-response-time emergency service than a 24h emergency service. The technician is supposed to come in the course of the afternoon. He actually arrives shortly after 13:00. That the problem does not lie in my flat becomes clear to him fairly quickly - after he has checked that all radiators are cold but the valves move freely. However, he has apparently forgotten the keys for the boiler room, so we cannot get in - he calls the caretaker, who unlocks it for him. His troubleshooting in the basement is at first unsuccessful, and he checks the heating system in the neighbouring building, also without result. Finally he does find something: the problem is a differential pressure regulator that is supposed to regulate the pressure of the heating circuit I am connected to, and is obviously regulating it down to "zero". As he fiddles with it, the thing suddenly opens with an audible WOOOSH - and in my flat the radiators almost instantly get not just warm, but properly hot. What exactly the cause was he does not know either - he says such things can happen, the devices simply age. He does not believe the problem will occur again. Oh, how wrong he was...
  • Mon., 01.02.: The radiators in the living room stay ice-cold again, at least during the day - at night one of the two sometimes gets lukewarm. The picture is mixed for the radiators in the rest of the flat; the one in the bathroom sometimes gets hot and sometimes does not. Only the one in the kitchen works almost always.
  • Tue., 02.02.: I report the renewed failure to the repair service. They want to pass it on to the Herzog company again, who will then get in touch with me.
  • Mon., 08.02.: I call the repair service again. They want to check with Herzog.
  • Tue., 09.02.: I call the repair service again, and am slowly becoming really glad I have a flat-rate phone plan. The repair service employee tries to contact Herzog directly, but only reaches the answering machine, which announces that the entire company is on carnival holidays. However, her colleague apparently reached someone the day before, and the company owner himself promised to take care of the problem. She promises to keep trying to reach the company, and to call me back in any case. You guessed it - the callback of course never came.
  • Fri., 12.02.: From 5:30 in the morning the radiators in the living room suddenly get warm again; the cause is unknown - perhaps one of the neighbours left for the weekend and turned off their heating, so that the pressure is sufficient for me again. In any case, the radiators in the living room get usably warm. Not as hot as normal, but at least warm enough to produce a noticeable heating effect - turned up fully the whole time, they manage to heat the living room to almost 21 degrees by noon.
    At 12:50 a technician from Herzog suddenly calls me: he is at my home right now, could I come over? A full 10 days after I reported the problem, and completely unannounced. But since I am naturally interested in my heating working normally again at some point, I cycle home from work.
    There I have to discover that the technician (a different one than last time) at least has the keys for the boiler room with him this time, but unfortunately is not here to repair or replace the defective differential pressure regulator. On the contrary, he does not have the faintest idea what his colleague already did 2 weeks ago, and otherwise does not exactly make a competent impression either:
    • He babbles something about having heard air in the pipes in the basement, and that bleeding was urgently needed.
    • He wildly turns the valves for the other two heating circuits in the building. Those are presumably no longer adjusted the way they used to be...
    • He looks for the heating system under the roof. Sure, with the 15 cm thick insulated pipes coming out of the wall in the boiler room, labelled "district heating from X-Y-Strasse 42", it is impossible to figure out that the hot water is produced in the neighbouring building...
    • He proudly demonstrates to me how the differential pressure regulator works, as if I did not know. Of course, afterwards he does not set it back the way it was, but just somehow. And of course, through the fiddling, just like two weeks earlier, the regulator opens again and for the time being regulates the way it should - which he does not grasp.
    • When he then notices that the radiators are hot, he apparently considers me too stupid to operate the thermostat.
    The sad performance ends with him running off without saying goodbye, under the pretext of wanting to check the water level in the neighbouring building. But at least: he has accidentally made the broken regulator work again for now, and my heating works again. Albeit, because he completely misadjusted the regulator, in "Lucifer mode": the pressure he set is obviously far too high, and the thermostats regularly overshoot the set temperature, because even a valve opening of just 10% makes the radiators so hot within 2 minutes that you cannot touch them without burning your fingers.
  • to be continued... I do not believe this story is over. The regulator is still the old one, only now completely misadjusted, and will presumably soon start acting up again. Maybe it will hold until summer...
In any case, I have learned my lesson: next time I will not naively expect the companies involved to get anything done, and will instead immediately have a lawyer draft a nice letter on the subject of a hefty rent reduction. Maybe that will get things moving.

Sat, 03. Oct 2015

Why I am no longer an M-net customer Created: 03.10.2015 17:37
Last modified: 12.02.2016 18:00
After more than 13 years, I recently cancelled my M-Net connection and have since been getting my Internet from a different provider.

The final small drop that made the barrel overflow, though certainly not the sole trigger, was M-Net's inability to finish the fibre rollout that has been underway here since the beginning of 2014. The first "we are building for you" signs appeared in the neighbourhood as early as the beginning of 2014, and the first small holes were dug, albeit at the opposite end of the neighbourhood.
From about that point on, the entire neighbourhood was also marked as "fibre under construction" on the municipal utility's map. The "we are building for you" signs gave "until October" as the time frame. The expert immediately notices: no year is given, and that probably has a reason. Only the naive average consumer believes it means October of the same year. Naturally, after the first few holes were dug, nothing at all happened for months - the holes were not even filled in again. It stayed that way for almost all of 2014; only in mid-December 2014 (!) did a short phase of hectic activity break out in which work was actually done, briefly even in the street directly in front of my house. Then again nothing for months. Only in spring did the work seem to continue, and this time even for more than 3 days.
At the beginning of July, a little advertising letter arrived, with an invitation to an info day and the ceremonial inauguration of another piece of Erlangen's fibre network on 18.07.2015. "Right on your doorstep, the future of communication will take off." The event was, as expected, an annoying exercise in self-congratulation, with Erlangen's mayor Janik and the press. Unfortunately, nobody could answer such unimportant questions as when the network being inaugurated that day would actually be finished, and when one could finally order Internet connections over it. "Well... I think... maybe... roughly... the end of the year." Such details would only have disturbed the we-are-the-greatest backslapping of politicians, press and M-Net executives. The subsequent article in the Erlanger Nachrichten matched accordingly: not a word about the fact that at best half of the two fibre "clusters" inaugurated that day had actually been built.
Just 3 days after the inauguration ceremony (21.07.2015), the patch box for the fibre was actually installed in my basement - really, with fibre in it. Unfortunately, that is still the state of things today (03.10.2015); since the end of July, nothing at all has happened. M-Net would only need to install the fibre-to-copper converter next to it, and then connections would be possible. But they have not managed that for months.
Only the availability check for fibre at my address on the M-Net website has been flip-flopping between "available soon" and "not available" every few months for at least a year, and it still does.

Due to the eternal non-availability of an up-to-date connection at M-Net, I was forced to look for alternatives, because the 16 MBit DSL - in reality more like 11 MBit by now - was really no fun anymore. And in doing so, I unfortunately noticed how bad M-Net has become:
  • Rip-off contract terms. M-Net did have a minimum contract term of 2 years in the past as well, to recoup the installation costs (and you could even get rid of it by paying exactly those installation costs), but after this minimum term you could cancel relatively easily and at short notice. Meanwhile, M-Net has adopted the bad habit of some competitors (not justifiable by any costs actually incurred by M-Net) of tying customers down much longer than they actually want, with clauses like "if the contract is not cancelled 3 months before expiry, it automatically renews for 1-2 years". I find this business conduct disgusting.
  • Forced routers. For a few years now, M-Net has insisted on bullying customers with a mandatory router running castrated and badly maintained firmware (security updates arrive with huge delays, at best). I find this business conduct disgusting.
  • The support is just as bad as the competition's; back in 2009 I already blogged about my odyssey replacing the modem that had broken for the umpteenth time. Only in the forum does it occasionally still happen that you reach a competent contact person, but that alone unfortunately does not save the day.
  • Processing times are catastrophically long; even the simplest procedures take over 4 weeks.
  • There are no competitive offers anymore. Not even in the (very few!) fibre rollout areas does M-Net make use of the technical superiority it would have there. They only copy the competition's offers, and badly at that. The best M-Net currently offers in the fibre areas is 150 MBit down and 15 MBit up (*). Magenta-T offers 100 MBit down and 40 MBit up here with VDSL vectoring, and Cablesurf 200 MBit down and 12 MBit up. That is all roughly on the same level; on upstream M-Net even lags well behind T, and on downstream behind Cablesurf. And that even though M-Net would technically be by far in the best position here, and could easily make the competition look old. Instead they are only "ahead" on price and the already mentioned forced-router bullying.
    (*) In theory there is also 300 down / 30 up, but only where FTTH instead of FTTB has been rolled out, i.e. where the fibre goes all the way into the flat. Roughly estimated, maybe 10 single-family houses in all of Bavaria have been built out this way, but M-Net advertises it as if it were available practically everywhere.
  • Customer wishes no longer matter. To come back to the example just mentioned: an increase of the upstream is not possible, even though it would be technically no problem. Even Teledumm, which really cannot be accused of driving any innovation, has by now understood that in times of "the cloud" a reasonably sized upstream is worth more than yet another few MBit down that you hardly ever saturate anyway, and offers 40 MBit up on its vectoring lines. Many years ago, when M-Net was still called NEFkom, they had understood this perfectly well, and offered a doubling of the upstream, up to the limit of what was technically possible at the time, for a surcharge.
    Another example: M-Net insists on running the fibre connections with pointlessly high interleaving values. As a result, these connections have a much higher latency than necessary, which annoys online gamers in particular. Here M-Net is even too dumb to copy from itself: on DSL lines, you can have this anti-feature switched off for a surcharge...
  • The handling of DS-Lite. To get this out of the way first: I am not bothered that M-Net does DS-Lite by default, that is a technical necessity; I am bothered by how M-Net treats those who, for technical reasons, cannot make do with DS-Lite.
    Like almost everyone else, M-Net provisions all new connections with DS-Lite, i.e. the connections no longer receive global IPv4 addresses and only reach the v4 Internet via NAT. There is nothing reprehensible about that in itself, and it is a technical necessity, because IPv4 addresses worldwide are "all gone" - there are no free addresses left, all are assigned. So M-Net cannot simply obtain new addresses for new customers, only redistribute the ones they already have. And the most common method of saving addresses is DS-Lite. Most users do not even notice the DS-Lite. There are, however, a few cases in which users cannot cope with DS-Lite and still need a full IPv4 address, e.g. for some home automation and VPN setups, or some game consoles.
    For about a year, M-Net left these users completely out in the rain. Even users who had simply changed their tariff suddenly had DS-Lite forced on them, and from one day to the next their VPN / game console stopped working. M-Net did not care at all, and thanks to the 2-year contract term the customers could not even switch to a provider that does not screw them over. Surely it is much healthier anyway if the game console does not work - you get out into the fresh air more often!
    Meanwhile, M-Net offers an IPv4 option for a surcharge. That in itself is exactly as it should be: those who can cope with DS-Lite get that, those who cannot still get IPv4 for a surcharge. What leaves a sour taste is the price: M-Net charges a hefty 4.90 Euro per month for this option. Given that this option causes M-Net no running costs and should really only be a nominal fee (so that not everyone who does not actually need it books it anyway), this price is completely excessive.
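For readers wondering whether their own connection is affected: DS-Lite/carrier-grade NAT setups typically hand the router a WAN address from the reserved 100.64.0.0/10 shared address space (RFC 6598) or a classic private range instead of a global IPv4 address. A minimal sketch of that check (the example addresses below are made up for illustration):

```python
from ipaddress import ip_address, ip_network

# RFC 6598 shared address space used for carrier-grade NAT,
# plus the RFC 1918 private ranges.
NON_GLOBAL = [ip_network(n) for n in
              ("100.64.0.0/10", "10.0.0.0/8",
               "172.16.0.0/12", "192.168.0.0/16")]

def looks_like_cgnat(wan_ip: str) -> bool:
    """True if the router's WAN address cannot be a global IPv4 address."""
    addr = ip_address(wan_ip)
    return any(addr in net for net in NON_GLOBAL)

# Example WAN addresses (made up):
print(looks_like_cgnat("100.72.13.5"))   # True - shared CGNAT space
print(looks_like_cgnat("93.104.20.7"))   # False - a global address
```

If the address your router shows as its "WAN IP" matches one of these ranges, you are behind the provider's NAT and inbound connections (VPN, game consoles) will not work without the IPv4 option.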
All in all, I find it very sad what has become of the once customer-friendly and innovative outfit. Offers like the SIXXS PoP, with which M-Net from 2003 on enabled the computer-savvy part of its customers to get really good IPv6 connectivity long before their infrastructure supported it natively, would certainly no longer exist today. I very much hope that M-Net will some day manage to turn the ship around and develop in the right direction again. Until then, they will unfortunately have to do without my money.

Update 1: Just for completeness, a small update: on 11.12.2015 M-Net proudly informed me that I could now order a fibre connection from them.

Update 2: Since August 2016 I am an M-Net customer again. The reason is, on the one hand, that on 01.08. they finally introduced competitive offers with a reasonably sized upstream (up to 150 MBit down with 50 MBit up); on the other hand, M-Net had to drop the router requirement on 01.08. due to a change in the law. The rest of my criticisms unfortunately still stand, but the two biggest absolute no-gos for me were gone.

Mon, 21. Apr 2014

Movies in the Cinestar Erlangen in English Created: 21.04.2014 13:44
Last modified: 09.06.2014 13:45
Since I like to watch movies in their original version, if that version is in a language I can understand, I was pleased to learn that the Cinestar in Erlangen has now started to show "OV" movies, which usually means they're shown in English. However, they seem to want to keep this a secret: you have no way of ever finding out about it from their homepage unless you know what you're looking for. They do not list it as a special offer, in contrast to e.g. the Turkish movies they occasionally show in Turkish. Instead, you have to know that they show these only on Sunday afternoons, and then by looking at the program for Sunday, you'll find one showing that has an "OV" tab (next to the 2D or 3D tab). They also list these less than a week in advance.
It really makes me wonder if they are actively trying to discourage people from seeing these showings.

For reference, these are the cinemas that I'm aware of showing OV movies as of April 2014 in the Erlangen/Nuernberg area:
  • Cinestar Erlangen
Right in the city center of Erlangen. Shows one movie per week in English, on Sunday evenings.
  • Roxy Renaissance Cinema
This is actually the only proper foreign-language cinema in the area, showing almost exclusively movies in English. As much as I like the atmosphere there, the big downside if you live in Erlangen is that it is in the south of Nuernberg ("Gartenstadt"), which is a rather long drive by car and an even longer trip (45 minutes+) by public transport.
  • Cinecitta
Giant multiplex in the center of Nuernberg that also shows quite a few English movies, but almost all of them only in 3D, whether you like it or not. Also tends to be a bit expensive.
  • Manhattan Erlangen
In the city center of Erlangen. Shows some OV movies, not only in English but also, for example, French movies in French, and so on. However, they always show them "OmU", "Original mit Untertiteln", meaning with German subtitles - something I find way too annoying to ever watch.
  • Babylon Fuerth
Shows one OV movie every two weeks, but like the Manhattan, always with German subtitles.

Sun, 14. Jul 2013

NTP-GPS-LED-Clock Created: 14.07.2013 10:53
Shortly after the NTP DCF77 LED Clock was finished, work began on a successor with GPS and a different network connection. However, this project never really progressed very fast. In the end it was only partially finished in 2012, mostly thanks to the support from Andreas Schwarz, after it had been lying around in a corner gathering dust for years.
Partially finished means that this is now actually hanging on the wall in the NOC at HaWo, where it usually displays the time, but a few problems/bugs remain.
For more information, head over to the NTP GPS LED clock project page.
You can leave your comments below.

Sat, 18. May 2013

KLM and Schiphol Airport - where incompetence complements incompetence Created: 18.05.2013 12:45
TL;DR: KLM sells/advertises trips with short 40-minute stopovers at their main European hub airport, Amsterdam Schiphol. Avoid these. While in theory they have everything needed to make this work pretty reliably, they are too dumb to use the facilities they have, making it far more of a risky gamble than necessary.

I recently had a flight from the UK (Glasgow) to Germany (Nuremberg), booked through the KLM website. Since KLM's network of flights is organised around their main European hub airport Schiphol in Amsterdam (Netherlands), it naturally involved a layover there. KLM claims that a 40-minute layover is enough for flights within Europe (50 minutes for intercontinental flights), and the website will therefore suggest/sell tickets taking this into account.
When I first read that some time ago, I was a bit sceptical whether 40 minutes was enough, but the KLM website advertises this as practically the best invention since sliced bread. They also have these videos, e.g. the "easy transfer" video here, that show you how they intend to make it work: your baggage gets transferred automatically of course, and passengers with short connections can use priority lanes at passport control. So much for the theory. I had actually used this successfully before, but now have to conclude that I was just lucky it worked, because KLM and Schiphol will do nothing at all to make it work properly.
I was flying with two friends. Our flight itinerary said we would land in Amsterdam at 15:50, and our connecting flight would leave at 16:30. All times mentioned in the following come from my memory, so they may be slightly off - but you should get the general idea. While we were still in the air approaching Amsterdam, our pilot already told us (among lots of other things we could not understand due to his mumbling and the aircraft's noise) that we would be landing on the outermost runway, meaning that we would have to "taxi" for around 20 minutes before reaching the terminal. This is one of the problems of Schiphol: they have runways that are far, far away from the actual airport facilities, so the plane just drives around on the ground for up to 20 minutes (not exaggerating!), going over a bridge over the highway and then driving between some canals in the process. This can be entertaining if you have the time - after all, you're getting a free tour through half of Amsterdam - but in this case it was just annoying, and the first small source of delay: we arrived at our parking position at 15:55.
Unfortunately, we could not leave the plane then. According to a loudspeaker announcement, we had to wait for the hand-luggage that had been stored in the hold to be placed next to the exit. What a great idea to let all passengers that did not even have any hand-luggage stored in the hold and that have short connecting flights wait for another 5 minutes for absolutely nothing! And then we still had to transfer by bus.
Finally having reached the actual airport a few minutes past 16:00 and knowing we were now really really late, we hastily made our way through the airport. We were luckily able to pick up which gate we had to go to from one of the big monitors on our way, so we did not have to stop at one of the self-service terminals to get that information. And as expected, it was a gate (B28) at the completely opposite end of the airport, a walk of more than 20 minutes at normal walking speed. However, we're young and relatively healthy, and therefore walking pretty fast, although at that point we weren't running yet. We quickly reached passport-control, and this is where things really started to go downhill.
As explained in the nice "easy transfer" video, they have priority lanes for first-class and short-connecting-flight passengers. That's actually only half the truth: they have a sort of priority-priority lane for short-connecting-flight passengers, who can jump right to the head of the queue in the priority lane. That, however, was closed off when we arrived, and the two incompetent idi... employees who are there to direct passengers to the right queue just directed us to the normal priority lane. Big mistake. Because the priority lane led to just two open passport control stations, and both were apparently blocked by people leading lengthy discussions with the immigration officials. A big queue had built up already, and it was not moving the slightest bit. We quickly realized that the non-priority queue moved a lot faster than our priority queue, something that the two employees whose only job it is to direct people should also have noticed a long time ago. If they had not been that incompetent, they would have directed us to the non-priority queues instead, or better yet: they should have let us jump to the head of the non-priority queue. But even when we overheard other passengers complain about the non-moving priority lane, they just said they didn't know what to do, and ordered them to stay in the priority lane.
So we stood there, and spent minutes waiting for the queue to move again. When it finally did, the employees directing people then suddenly started to use the priority-priority-lane after all, and put people through that, meaning that people that had arrived 5 minutes later than us passed through passport control before us. After having wasted close to 10 minutes in the queue, we finally reached passport control and passed through it in just a few seconds, as is to be expected with European passports entering Schengen-Europe.
Behind the passport control was security. There was really nothing to complain about here - there was no queue at all, and I was through in no time at all. My two friends needed slightly longer, one had to take off his hiking shoes because they thought the metal inserts would confuse the metal detector, the other had to show what was inside her backpack, but all in all there was no significant delay here - we were through in less than 5 minutes.
After security we could see on the monitors that our flight was by now listed as "gate closing", and we picked up the pace further.
Shortly afterwards we heard the infamous announcement that you hear in Schiphol all the time: "Passengers (absolutely indecipherable mispronunciation of our names) travelling to Nuremberg, you are delaying the flight. Immediate boarding please at gate B28, or we will proceed to offload your baggage." - after this we started running as fast as we could, and finally reached the gate less than 5 minutes later - it was now almost 16:25, so in theory 5 minutes before the scheduled departure, but too late nonetheless, as the gate usually closes 10 minutes before departure time. So, just to spell that out: when they sell you a 40-minute layover, in reality it is always just a 30-minute layover.
As expected, we were told at the gate that we were too late and that they were in the process of unloading our baggage. This is actually what I consider another display of major incompetence: As we were not boarding, they had to remove our luggage from the hold, meaning searching through all of the luggage in the hold and removing our three bags. This takes a few minutes. If they had simply cancelled that process as we arrived, and let us board, they would actually have delayed the flight LESS, because us boarding would surely have been faster than continuing to search the hold for our three bags. But their thinking clearly is "fuck passengers".
We were then told that there was another flight to Nuremberg later that day, and that we should simply rebook at transfer desk T2 in the main hall. Whether the attendant at the gate was lying on purpose to quickly get rid of us, or just did not know due to KLM's complete lack of organisation, is unknown - but as we found out at the transfer desk a bit later, that flight was full. As were all other flights to Nuremberg in the next two days. I was simply put on the waiting list with the comment that I would almost certainly get a seat, without being offered any alternatives. My two friends were handled a bit better, and took the option of a flight to Munich instead of spending possibly multiple days on the waiting list, having to stay in the airport all the time. There is good train connectivity from Munich to Nuremberg, so this was not a bad alternative, although KLM refused to pay for the train tickets.
This is another thing that pissed me off: KLM denied any responsibility for the missed connection. I overheard them saying "It's not our fault, it's the airport facilities" to another couple who had missed their connection. Well, I'm sorry, but that excuse just is not valid. First and most importantly: you, not any third party but YOU, KLM, sold me the tickets for this very short connection, so it is your responsibility to try everything to make it work. That could have included just letting us board when it was still technically possible, given that we arrived late clearly through no fault of our own. As I already explained, it probably wouldn't even have caused any delay. More generally speaking, it could also include giving passengers another five minutes to make it to boarding when you know that the incoming flight was late, IF that will not delay the flight. You might think that waiting for late passengers would always delay the flight, but that is not true: it happens in Schiphol (and in fact it did happen to me on that same day) that even after boarding is complete, the plane will just stay at the gate for another ten minutes, because there is so much congestion on the runways that it cannot take off anyway. In this case there is absolutely no need to hurry the boarding, but they still do.
So, since you failed to do anything at all to help me catch my connection, you should at least have moved your lazy ass after the missed flight. Of course, you cannot just throw passengers off a later flight to make room for me, but offering alternatives to "sit in the terminal for 5 hours hoping you get a seat through the waiting list" would have been the least you could do. As would a food voucher and one for a phone call home at KLM's expense.

I actually did get a seat on the later plane to Nuremberg, but I'm not sure what would have happened if I had not - I doubt KLM would have paid for a hotel room for me without major discussion, if at all. My two friends flew to Munich, and paid for their own train tickets from there to Nuremberg. If they had also taken the waiting list offer, two of us would have had to stay in Amsterdam overnight - the plane I was on was completely full.

To wrap up this post, here is a related picture: It shows one of the KLM self-service check-in terminals at Glasgow airport.

These nice error messages kept popping up on all their self-service terminals, whether someone was currently using them or not. In general, KLM seem to be real "experts" at running their computer systems... The online check-in on their website frequently throws nonsense error messages as well.
no comments yet
write a new comment:
name or nickname
eMail address (optional)
Your comment:
calculate: (2 times 10) plus 3

Wed, 06. Mar 2013

Intel MIC/Xeon Phi MPSS on Ubuntu Created: 06.03.2013 20:39
I recently tried to get the Intel Xeon Phi software stack (Manycore Platform Software Stack, MPSS) to run under Ubuntu 12.04. More precisely, the version KNC_gold_update_1-2.1.4982. Ubuntu is not supported (yet), but it did not prove to be too difficult. As Google wasn't exactly helpful with setting this up, I'm writing this blog post in the hope that others googling for "mpss ubuntu" will find it helpful. Note that this is not a complete HowTo, but it should help you get going. Use at your own risk.
  1. The first step is to download KNC_gold_update_1-2.1.4982-15-suse-11.2.tgz, i.e. the version for SLES11 SP2 offered on Intel's download pages.
  2. Unpack the tgz, and inside you will find some RPMs and a bunch of subdirectories. The RPMs are the important part here: Convert them all with "alien --scripts", except intel-mic-kmod-2.1.4982-, which contains the kernel modules that would be surprisingly unhelpful on an Ubuntu kernel.
  3. The kernel modules have to be rebuilt for the Ubuntu kernel from source, and their source has to be patched, as they're not compatible with the 3.2 kernel in Ubuntu 12.04. There is more than one way to do this; I chose to use the provided spec file and then convert the resulting RPM. The source of the kernel modules is hidden in src/intel-mic-kmod-2.1.4982-15.suse.src.rpm. Unpack that with rpm2cpio src/intel-mic-kmod-2.1.4982-15.suse.src.rpm | cpio -idmv and you get a spec file and a .tar.bz2. Put the .tar.bz2 and the patchfile intel-mic-mpss21up1-kmod-2.1.4982.patch into your rpmbuild SOURCES directory and the spec file into SPECS. The spec file needs to be patched with intel-mic-mpss21up1-kmodspecfile.patch; after that you can run rpmbuild -bb intel-mic-kmod.spec. The result of this will be an intel-mic-kmod-2.1.4982-, which you need to convert with "alien" again into an intel-mic-kmod_2.1.4982-16.3_amd64.deb.
  4. Install the kernel module .deb together with the .debs you converted in step 1.
  5. micctrl and the other tools put their libraries into /usr/lib64, which normally does not exist anymore in Ubuntu 12.04, so the dynamic linker does not search for libraries there. You need to echo "/usr/lib64/" > /etc/ld.so.conf.d/mic.conf and then run ldconfig to fix that.
  6. By now you should be able to execute micctrl without major error messages, but you cannot really do much, because mpssd needs to be running for the card to actually do anything.
  7. The init-script for mpssd needs to be adapted so it actually starts the mpssd. I cannot post my patch for the initscript, as it contains quite a lot of messy workarounds tailored to our system. However, to actually get the script to work, you only need to fix one thing: Just replace the line that reads startproc -t 1 $exec with these two:
    [ -d "/var/lock/subsys" ] || mkdir /var/lock/subsys
    start-stop-daemon --start --exec $exec
    After this the script will still print A LOT of error messages about all the missing rc_* stuff, but it will actually start and stop the mpssd daemon now.
  8. Normally, the configuration of the virtual micN network-interfaces would happen automatically, but as the Intel stack knows nothing about the "Debian/Ubuntu way of things", it cannot do that. You will need to manually edit /etc/network/interfaces and give them a proper configuration. In the default network config, the card gets the IP and no bridging, so a proper entry in /etc/network/interfaces would look something like this:
    iface mic0 inet static
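    The stanza above only shows the first line; a complete entry also needs an address. A hedged example (the addresses here are placeholders I made up for illustration, not the stack's defaults - use whatever addresses your MPSS configuration actually assigns to the card and host):

    ```
    # /etc/network/interfaces - example stanza; 172.31.1.x is a placeholder,
    # take the real addresses from your micctrl/mpssd configuration.
    iface mic0 inet static
        address 172.31.1.254
        netmask 255.255.255.0
    ```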

So that's it, you can now start to play around with the card. Which is not always an easy task.
For example, it seems MPSS has never heard of these fancy things like "directory services" where you do not create all users locally on all of your computers, but in a central directory instead. This is probably because Intel is such a small company that they only have 2 or 3 computers, so this isn't relevant for them.
But I will leave my ranting about the immaturity of the system software stack for an otherwise nice product with a quite significant price tag (ca. 2500 Euros) for another time.

As things are evolving quickly, I was wondering if you have you tried to install MPSS3.1 with Ubuntu12.04.03LTS? If so, ;-) could you help me out.

Thanks in advance.
Jofre 31.10.2013 12:40

I have not, at least not yet.
PoempelFox 31.10.2013 19:25

You can download a patch for MPSS 3.1.2 kernel module. mic.ko, from here:


Copy it into mpss-modules-3.1.2 source directory and apply:

patch -p1 < mpss_mod_patch.txt
Alexei 11.02.2014 19:27

I am trying to install MPSS3.1.4 with Ubuntu 12.04.1. I am following your instructions and adapting when it is neccesary. For the moment it does not run, but I hope it will be able to run soon.
If you can help me for the last steps I will be very glad.
Did you try this installation ? May you help me ?
Vivi 11.03.2014 13:37

Hi !
I need to install Xeon Phi, but my motherboard is not compatible. I will buy a new one, but I want to be sure of my choice.
Could you tell me which one you are using for Xeon Phi ?
Thanks in advance.
Virginie 14.03.2014 10:44

ASUS P9X79W works NOT the P9X79L.

Specifications about PCI interface are criticals.

AloMoi 20.04.2016 08:19


Sun, 03. Mar 2013

Fake traceroute Created: 03.03.2013 20:38
About a month ago, the Star Wars Traceroute circled the Internet. I found it quite amusing - and thought that I could improve on that idea.
My first experiments to generate something similar with stock Linux kernel tools (like ip6tables and multiple routing tables) were pretty unsuccessful - something was always missing to make it work properly. There is also a Perl utility called countertrace which does something similar, but it did not have all the features I wanted and AFAIK does not support IPv6. So in the end I simply coded a small program that listens on a network interface and sends hand-crafted packets back to fake the hops in the traceroute. This actually made some things a lot easier to implement: The trace can contain hops that depend on the current time or on the temperature. That would have required constantly changing the rules if I had done it with stock kernel tools, but the way it's implemented now, the program just has to adapt the (fake) IP in the reply. Another feature that became easier to implement is a configurable delay for every hop - so you can properly fake e.g. the delay a transatlantic hop causes.
You can see all this in action by doing an IPv6 traceroute to target.fauad.de (2001:470:1f0b:1d0f:23::ff). I do not think it's a good idea to use completely bogus hostnames, possibly hitting domains belonging to someone else in one of the 10 trillion new TLDs, which is why all my hops have an added .fauad.de; and I do not think it's a good idea to use too long quotes from movies for copyright reasons, so in this respect, my traceroute is less cool. But it allows you to get the current time and temperature in Germany, which is the killer-feature you always wanted, isn't it?
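As an illustration of how little the fake replies need to change: the time-dependent hop simply prints the clock digits into one group of the IPv6 address (see hop 10 in the trace, where 20:32 shows up literally as the hex group "2032" in 2001:470:1f0b:1d0f:1:2032:0:1). A minimal Python sketch of that trick - the function name is mine and the actual program works differently internally, but the address scheme is taken from the example trace:

```python
# Sketch only: build the time-dependent fake hop address by embedding
# the clock digits directly as one hex group of the IPv6 address, so
# the address reads like the current time when traceroute prints it.
def time_hop_address(hour: int, minute: int) -> str:
    # 20:32 becomes the group "2032" -> 2001:470:1f0b:1d0f:1:2032:0:1
    return "2001:470:1f0b:1d0f:1:%02d%02d:0:1" % (hour, minute)

print(time_hop_address(20, 32))  # -> 2001:470:1f0b:1d0f:1:2032:0:1
```

The matching reverse-DNS entry (local.time.is.20.32.fauad.de) then makes the time visible in the hostname column as well.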
So here is what the traceroute currently (at half past 8 on the 3rd of March 2013 with a temperature of about -3 degrees outside) looks like:

# traceroute -6 -m 100 target.fauad.de
traceroute to target.fauad.de (2001:470:1f0b:1d0f:23::ff), 100 hops max, 40 byte packets using UDP
[...]
 8  tserv1.fra1.he.net (2001:470:0:69::2)  17.614 ms  17.032 ms  18.992 ms
 9  you.have.reached.germany.fauad.de (2001:470:1f0b:1d0f:23::1)  21.104 ms  22.468 ms  19.845 ms
10  local.time.is.20.32.fauad.de (2001:470:1f0b:1d0f:1:2032:0:1)  20.834 ms  19.981 ms  20.121 ms
11  and.the.current.temperature.in.Erlangen.fauad.de (2001:470:1f0b:1d0f:23::3)  19.884 ms  19.846 ms  20.476 ms
12  is.minus-2.95.degrees.celsius.fauad.de (2001:470:1f0b:1d0f:2::5705)  20.202 ms  19.860 ms  20.234 ms
13  im.not.crazy.fauad.de (2001:470:1f0b:1d0f:23::5)  20.094 ms  19.909 ms  19.820 ms
14  my.mother.had.me.tested.fauad.de (2001:470:1f0b:1d0f:23::6)  20.458 ms  19.768 ms  19.680 ms
15  and.at.least.im.not.wasting.ipv4.for.this.fauad.de (2001:470:1f0b:1d0f:23::7)  20.651 ms  20.708 ms  20.127 ms
16  Oo.oO---------------------------------Oo.oOo.fauad.de (2001:470:1f0b:1d0f:23::20)  20.340 ms  19.857 ms  20.024 ms
17  so.lets.see.where.this.is.going.fauad.de (2001:470:1f0b:1d0f:23::8)  20.141 ms  20.004 ms  19.817 ms
18  40ge-7-3.fra1.de.fauad.de (2001:470:1f0b:1d0f:23::9)  20.382 ms  20.890 ms  20.202 ms
19  invasive.pat.down.tsa.security.theater.us.fauad.de (2001:470:1f0b:1d0f:23::a)  99.836 ms  98.758 ms  98.147 ms
20  no.such.agency.spycorps23241.us.fauad.de (2001:470:1f0b:1d0f:23::b)  110.627 ms  110.739 ms  110.602 ms
21  40ge-1-2.ny1.us.fauad.de (2001:470:1f0b:1d0f:23::c)  112.935 ms  111.858 ms  111.163 ms
22  no.such.agency.spycorps812424.us.fauad.de (2001:470:1f0b:1d0f:23::d)  119.413 ms  118.728 ms  117.650 ms
23  28kbit-3-2.funafuti1.tv.fauad.de (2001:470:1f0b:1d0f:23::e)  206.467 ms  205.527 ms  204.449 ms
24  10ge-4-1.beijing.cn.fauad.de (2001:470:1f0b:1d0f:23::f)  232.709 ms  232.126 ms  231.047 ms
25  40ge-5-9.erl1.de.fauad.de (2001:470:1f0b:1d0f:23::10)  290.281 ms  299.625 ms  298.543 ms
26  it.seems.this.was.the.shortest.path.fauad.de (2001:470:1f0b:1d0f:23::11)  297.446 ms  296.811 ms  295.727 ms
27  Oo.oO---------------------------------Oo.oOo.fauad.de (2001:470:1f0b:1d0f:23::20)  291.313 ms  290.374 ms  299.368 ms
28  our.whole.universe.was.in.a.hot.dense.state.fauad.de (2001:470:1f0b:1d0f:23::12)  298.633 ms  297.558 ms  296.479 ms
29  then.nearly.fourteen.billion.years.ago.fauad.de (2001:470:1f0b:1d0f:23::13)  299.223 ms  298.138 ms  297.057 ms
30  expansion.started.WAIT111.fauad.de (2001:470:1f0b:1d0f:23::14)  299.245 ms  298.163 ms  297.076 ms
31  im.afraid.this.will.have.to.stop.here.fauad.de (2001:470:1f0b:1d0f:23::15)  292.238 ms  291.157 ms  299.893 ms
32  to.avoid.nasty.copyright.infringement.letters.fauad.de (2001:470:1f0b:1d0f:23::16)  298.907 ms  297.829 ms  296.745 ms
33  so.thats.it.for.now.fauad.de (2001:470:1f0b:1d0f:23::fd)  289.845 ms  299.042 ms  297.962 ms
34  ill.be.back.fauad.de (2001:470:1f0b:1d0f:23::fe)  296.882 ms  295.801 ms  294.715 ms
35  you.have.reached.your.destination.fauad.de (2001:470:1f0b:1d0f:23::ff)  299.357 ms  298.276 ms  297.195 ms

I'll probably release the sourcecode of this in a few weeks when I'm done playing with it.
i have a (maybe very silly) question.
Why is on your traceroute after tserv1.fra1.he.net your /48 HE-Net IP and not the HE "Client IPv6 Address" ?

btw: it would be great if you would publish the source
meinname 16.03.2013 01:39

Remember that everything you see is completely fake. Of course the next real hop after the HE tunnel server is the client IPv6 address, but there is no reason for you to see it: Your traceroute packets are addressed to an IP in the /48, and it is an IP the machine does not have configured on a kernel level, so the kernel will not send a reply on its own.
PoempelFox 16.03.2013 23:49

Ahh, thnx for the explaination.
meinname 17.03.2013 00:35

do you still plan to release the sourcecode?
meinname 07.06.2013 17:46


Mon, 19. Nov 2012

Foxis crappy GPXVisualizer Created: 19.11.2012 00:28
Last modified: 09.10.2016 11:10
I was recently trying to generate static pictures from GPX tracks to put into a photo album. However, all the tools I looked at fell short in that usage mode. For example, gpsprune does have an export function, but with a severe limitation: The exported picture always has exactly the resolution of the application window - it's essentially an integrated "screenshot" function.
So, I quickly hacked together something on my own in Qt. The resulting source code can be found in this public copy of the GPXVisualizer source code repository. There are currently no binaries available.
It can display an arbitrary number of GPX tracks over a few openstreetmap-based maps, and also export static images of that to a file at a selectable resolution. Here is an example showing some kayaking attempts in Punat bay in Croatia over a map background in openstreetmap.de style:

And here is a screenshot:

Update 2016: The URL for the public copy of the source code has been updated.
no comments yet

Sun, 02. Sep 2012

Converting the osm2pgsql planet_osm_nodes table to the new flat-nodes file Created: 02.09.2012 22:49
If you are using osm2pgsql to keep an up-to-date copy of the relevant parts of the Openstreetmap database, e.g. because you're running a tile-server, you will be happy to learn that it has a new "flat-nodes-file" mode. There is a new parameter, --flat-nodes=FILENAME, that makes osm2pgsql store the nodes-data it needs to keep (to be able to make updates from minutely/hourly/daily diffs) into a special binary file instead of a database table. This is only recommended for doing full-planet imports/updates, not when you only have a small region in your database, but in the full-planet-case the advantages are quite convincing: Not only does it speed up the processing of diffs, in my experience between 20 and 30 percent, but it also saves a lot of disk space. Instead of a postgresql-table that takes up 100 GB on disk, you get a 17 GB file (at the time of writing this). This also makes it easier to store the file on a SSD, further speeding up the processing (but even when the file is on normal spindles the speed-up is significant).
However, there is a small problem: You cannot just switch to the flat-nodes mode if you did your initial planet import without it. You would have to start again from a fresh planet-dump import, which can take days, and then wait another few days until your database has caught up with all the changes that happened since the last dump. This procedure seemed so undesirable to me that I decided to invest a few minutes of my time to create a tool that converts the database table to the file. This patch needs to be applied to your osm2pgsql source directory. If all goes well, you will then have a new convertnodestabletofile binary at the end of the osm2pgsql build process. This can then be used to convert the database by running something similar to
psql -d osm -q -c 'COPY (select id,lat,lon from planet_osm_nodes order by id asc) TO STDOUT WITH CSV;' | ./convertnodestabletofile /mnt/flatnodes/flatnodes.db
Of course, you may need to adapt the database-name after the -d or authentication parameters.
This command should take a while, and print some progress information as it goes. After it has run through successfully, you can test updates with the --flat-nodes=FILENAME parameter. If everything is fine, the last step is to clean up the data in the old postgresql table that is no longer needed. Note however that (at the time of writing this) osm2pgsql still requires the table to exist, even though it does not use any data from it in flat-nodes mode. The fastest way to clean up is probably to delete the table and create a new empty one, or you can do a DELETE FROM planet_osm_nodes followed by a VACUUM FULL planet_osm_nodes.
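The reason the flat-nodes file is so much smaller and faster than the table is that it is essentially just an array of fixed-size coordinate records indexed by node id. The following Python sketch illustrates only that core idea - the record layout here (two little-endian int32 coordinates scaled by 1e7 at byte offset id * 8, no header) is a made-up stand-in; the real osm2pgsql on-disk format differs and has changed between versions:

```python
import struct

SCALE = 10 ** 7                # fixed-point scaling for the coordinates
RECORD = struct.Struct("<ii")  # two little-endian int32s: lat, lon

def write_flat_nodes(path, rows):
    """Write (node_id, lat, lon) rows into an id-indexed flat file.

    Illustrative layout only: node N lives at byte offset N * 8,
    so the file is sparse where node ids have gaps.
    """
    with open(path, "wb") as f:
        for node_id, lat, lon in rows:
            f.seek(node_id * RECORD.size)
            f.write(RECORD.pack(int(round(lat * SCALE)),
                                int(round(lon * SCALE))))

def read_flat_node(path, node_id):
    """Look up one node: a single seek plus an 8-byte read."""
    with open(path, "rb") as f:
        f.seek(node_id * RECORD.size)
        lat, lon = RECORD.unpack(f.read(RECORD.size))
    return lat / SCALE, lon / SCALE
```

The speed-up during diff processing comes from exactly what read_flat_node shows: one seek and one tiny read per node, with no database round trip at all.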
And just as a warning in case it is not obvious: You need to make sure that nothing tries to update the database table while you convert it, and afterwards you must make sure to NEVER run an update without the flat-nodes parameter again, as this will seriously mess up your database.
no comments yet

Sun, 10. Jun 2012

Sgwd yr Eira and the four waterfalls trail Created: 10.06.2012 01:17
During a recent vacation, I visited the four (water-)falls trail and the Sgwd yr Eira (Waterfall of the snow). The Sgwd yr Eira is probably the most famous waterfall in South Wales, because it is possible to walk behind it. The reason for this blog post is mostly that before going there, I tried to google the trail and the waterfall, and I came up almost blank. Although there are of course some pictures of the fall and a few descriptions of the walk, I found all of them very confusing and sort of missing the big picture. With this post I will try to clear up some of the confusion. Note however that I do not have detailed knowledge of the area, I'm writing from my limited experience.
So for starters I made a map. The following map is an openstreetmap.org export with some additional information added:

And here are some (hopefully) helpful bits of information:
  • The four waterfalls trail (or just "four falls trail"), as the name suggests, visits four waterfalls along its way (marked with an X on the map), with the most famous one being the Sgwd yr Eira.
  • The walk is located in a triangle between the villages of Pontneddfechan, Penderyn and Ystradfellte in the Brecon Beacons National Park.
  • It will take a few hours to do the walk - in our case almost 6 hours, but we had a few distractions on the way. A realistic estimate would be 4 hours without detours.
  • You will need proper hiking shoes, as parts of the walk are muddy, going through small ditches, over uneven stony surfaces and roots, or just slippery as hell. I hear there are a few casualties every year, and I'm not surprised.
  • Your first stop in any case should be the Waterfalls centre in Pontneddfechan. There you can buy a map set for (at the time of writing this) 3 Pounds. It contains the Four Falls Trail as route number 7. Get it, I mean it. All maps of this area I found online are somewhere between incomplete and completely wrong. That even applies to Openstreetmap, which lists some nonexistent ways but misses some important others (unfortunately, I did not have enough GPS data from my walk to correct this). There are also numbered signposts along the way, which are marked on the maps in the map set - very helpful. It's money well spent.
  • Forget cellphone reception, there is none. You'll probably have to walk for a while to even get an emergency call out. Random trivia: Nedd valley, the last place to be connected to the electricity grid on the British mainland in 2005 (!), is only a few miles away from there...
  • You have a few options to get to the trail. The two main ones are the parking areas marked as P1 and P2 on the map. P1 is really just a designated unpaved space on the side of the road, and it's rather small. But if you start there, you're only ten minutes away from seeing the first waterfall. P2 on the other hand is a pay-and-display car park. It is properly paved and there are toilets and a bench for a picnic. This car park is also used a lot by cavers exploring the nearby caves.
    There also might be two other options, but I did not try these: There seems to be a way from Pontneddfechan to Penderyn that passes the Southern side of the Sgwd yr Eira, so it should be possible to start in one of these villages and enter the trail through Sgwd yr Eira (by walking behind it to the other side of the river). However, there are no parking areas at all on the map in Penderyn, so that might not be such a good idea.
  • There is a nice viewing platform that gives you a great view of Sgwd Isaf Clun-Gwyn close to P1. It is on the map, but there are no signs pointing to it, and (at least at the time we were visiting) the path leading to it is basically invisible - it looks like you're just walking over a muddy grassland.
Finally, here's a picture of Sgwd yr Eira:

I will not post the picture of me after I walked behind it, but let me assure you that I was really glad that it was a sunny day and thus my clothes dried rather fast...
no comments yet

Wed, 15. Feb 2012

Building my own spice rack in the FAU fablab Created: 15.02.2012 23:01
Last modified: 08.04.2012 18:00

A Fablab is currently being established at Friedrich-Alexander-University (the site is in German; see Wikipedia for an explanation of fablabs in general). Although they're not fully equipped yet, they recently got their laser cutter/engraver. Naturally, such a "toy for big boys" was something I desperately wanted to try out - but I needed a project where I could make use of it. Just doing the usual - cutting and engraving a sign out of a piece of acrylic glass and then lighting it with some LEDs - does certainly look cool, but had already been done about a gazillion times in the week since the laser was delivered. I found it too boring.
After a week, I finally had the idea that I needed a spice rack, and while there are certainly better ways to build a spice rack than with parts cut out of acrylic glass, it was not a terribly bad idea.
I measured the place where I wanted to put it, and then designed the rack to exactly fit that spot. I also designed it to hold together without any gluing: While I intended to add glue for extra stability, the basic structure would hold without any of it. 5 millimeter thick acrylic glass was to be used, as that would provide the needed stability and ease construction.
Turning my construction into reality started off with a few technical difficulties:
First of all, I had planned for the rack to be 4 stories and 60 centimeters high. However, the acrylic glass available in the fablab was only 50 centimeters long, permitting only 49 centimeters of height. The top level thus had to be cut off.
The second problem was the software: The laser cutter has a not-so-great Windows driver, and usually "Corel Draw" is used to send data to it. I had created my design with "QCAD" (community edition), which saves files in DXF format. Importing those into Corel Draw was completely unproblematic. However, when sending the design to the cutter, the printer driver created nonsense cuts because it only has two modes for vector sorting. The "inside out" mode (which had to be used because holes had to be made in the parts before they were cut out) gets confused by the outer border of the part, and randomly cuts individual lines of the outer border. Luckily, Corel Draw has a function to combine multiple connected lines into one continuous object. That also causes them to be sent to the printer driver as one object, eliminating the problem. Of course, it would be a very useful feature if the printer driver could do that internally.
After that, cutting the parts went pretty smoothly - but problems continued during assembly: The laser currently seems to be a little misaligned, meaning it does not cut through the material at a 90 degree angle, but more like 80 degrees. Also, the acrylic plates I cut from have a pretty high tolerance themselves: a "5 millimeters" thick plate is in reality somewhere between 4.1 and 5.9 millimeters. Both problems meant that I needed a lot more force than planned and - in one case - a rasp to get the parts together. On the plus side, after finally putting the parts together, they were really stuck, making the construction rock stable.
I still decided to glue the most important parts together as originally planned. I used Dichloromethane for that - it essentially melts the acrylic together and should create almost perfect connections. The problem here was that I had read the German Wikipedia article on Dichloromethane - and as a result, I was very very veeeeery cautious with it. In the end, I used way too little, and my "glue points" did not hold. I decided to skip a second attempt - as mentioned, the construction was stable enough without gluing.
There was one more FAIL worth mentioning: I had misconstructed the mount points for the bars at the front and the back that prevent the spices from falling out. They were 5 mm too high due to a little miscalculation. Instead of lasering the big side parts again with the mount points in the right places, I just made the bars (which I had not lasered yet) 5 mm larger so they would fit. In the files below, the .cdr file has the wrong mount points and the larger bars. In the .dxf file, the error has been corrected, so bars and mountpoints are as intended, but not as I built it.
In the meantime, the spice rack is in full operation. Unfortunately, it is too small, I can't fit all my spices in due to it only having 3 instead of 4 floors. I'm still pondering ways to add the missing one.
Although I had designed a bar for screwing it to a wall, that was never done: The construction is attached to the wall only with a Tesa Power Strip that has been cut into two halves.

As usual, here is a collection of stuff from the project, in the hope that some of it might be useful for others:
[Photos from the construction]
[Corel Draw file - this contains what was actually built]
[QCAD / .dxf file - what was originally planned]
no comments yet

Sat, 26. Nov 2011

Strange coincidences Created: 26.11.2011 13:55
Last modified: 26.11.2011 20:50
At work we have a few external RAID arrays from a major manufacturer, let's call them "HAL", bought as part of a bigger storage system.
From the beginning, they were pretty annoying - aside from their subpar performance, caused by intentional castration of the built-in hardware (so you would have good reasons to buy the even more expensive systems from the same manufacturer), they threw spurious errors all the time. We soon got used to nonsense mails from the system, e.g. telling us parts would be 'overheating' - in a 16 degrees cold cold-aisle, that is - and back to normal temperature a few seconds later. There were also other alerts of equal uselessness, all disappearing again as fast as they showed up. My personal favorite were messages like:

Event occurred: Thu, 11 Aug 2011 01:19:24 CEST
Event Message: Optimal wide port becomes degraded
Component type: Enclosure Component (ESM, GBIC/SFP, Power Supply, or Fan)
Component location: Enclosure 85, Slot 1

which translates into "something somehow somewhere went wrong, but I won't tell you what or where or how, HAHA!".
So, we were used to getting the occasional nonsense alert, with everything going back to normal without any external intervention just seconds later. Until one evening in 2010 (after office hours, of course) all hell broke loose. Over the course of about 40 minutes, we got more than 200 spurious errors from the 6 arrays. Those errors were not spread out equally; instead, one array would report 20-50 errors in exactly the same second, go back to completely normal a second later, and then a few minutes later the next array would act out in exactly the same way. What was worse: in one case, the errors included the "removal" of 8 out of 10 disks in a RAID6 group - which is of course very plausible, because removing 8 disks in exactly the same second is a piece of cake - naturally leading to the failure of that RAID group. Although all those supposedly removed disks were back seconds later, that naturally did not revive the RAID group.
I'm not going to talk about the nightmare with the "support" hotline that followed, although that was a great example of how not to handle support, but instead cut to the end of it: Almost a day later (which is somewhat different from what our service level agreement said!), we were in contact with a seemingly very arrogant support engineer from HAL's storage division, who told us the magic commands we needed to enter to revive the dead RAID group without destroying all data on it.
Of course we also demanded to know what had caused the major outage, but the only thing we got from HAL was that absolutely, clearly, no doubt possible, our power grid was the cause of all evil. It was clear to us that this was nonsense: The server room is powered by an online UPS with tight monitoring of the output lines, and neither the monitoring noticed anything unusual, nor did the other systems in the same rack (on the same outlets!) or in the rest of the server room. And even if there had been something on the power grid (too small for the monitoring to notice), it could not have spread out over 40 minutes and then disappeared in the middle of the night. Nonetheless, HAL was unwilling to consider any other explanation.
So why am I telling stories from 2010? Because a few weeks ago, the exact same thing happened again: Distributed over around 40 minutes, all of the RAID arrays acted out by throwing insane amounts of spurious errors again. And again, one RAID6 group failed because 7 of its 10 disks were "removed" in the same second. Luckily, I remembered the command for reviving them, so the resulting complete system outage only lasted a few hours, because I noticed the problem in the middle of the night.

And then, just out of curiosity, I started to calculate - how long had it been since the last failure? My calculation revealed: 497 days, and a few hours. That certainly rang a bell, but for those who have never heard of the Linux uptime bug, I'll explain: Almost all operating systems internally count the time since they were booted and use it for internal functions, like scheduling things to happen at certain intervals. They do that because it's fast, and doesn't depend on real-world time with all its complications like time zones and daylight saving time. At least in FreeBSD and Linux, this internal counter was incremented by the timer interrupt 100 times per second. As both used 32 bit counters, this timer would overflow after 2^32 1/100ths of a second - which works out to 497 days, 2 hours, 27 minutes and a few seconds. On both old Linux and old FreeBSD systems, this was visible through the "uptime" command, which shows how long the system has been up - when the counter overflowed, the displayed uptime would wrap around and suddenly start from 0 again after 497 days, 2 hours, 27 minutes...
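The magic number is easy to verify with a quick back-of-the-envelope calculation:

```python
# A 32-bit tick counter incremented 100 times per second wraps after
# 2**32 ticks; convert that into days, hours and minutes.
ticks = 2 ** 32
seconds = ticks // 100               # 100 Hz timer interrupt
days, rem = divmod(seconds, 86400)   # 86400 seconds per day
hours, rem = divmod(rem, 3600)
minutes, _ = divmod(rem, 60)
print(f"{days} days, {hours} hours, {minutes} minutes")
# -> 497 days, 2 hours, 27 minutes
```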
Of course, these RAID arrays don't run an old Linux version, but VxWorks - however, a quick Google search tells me that VxWorks does the exact same tick counting, with a programmable tick rate. It also offers functions to handle tick overflows, but that of course requires the programmer to actually use these functions and his brain... So if the tick rate was set to 100 per second, the system in those RAID arrays would exhibit the same behaviour as old Linux/FreeBSD.
Such overflows are also highly likely to cause other complications, because the returned values suddenly are no longer monotonically increasing, and if they are used carelessly, things can go terribly wrong. One popular example is the famous Year 2000 problem, and a similar problem still to come is the Year 2038 problem, when the commonly used Unix timestamp wraps where it is stored as a signed 32-bit counter.
In particular, such an overflow is very, very likely to cause effects like the "disappearing discs" we saw. It is easy to construct how things could go terribly wrong: Suppose you poll the responsiveness of all hard discs regularly and remember the timestamp of the last valid reply. To see whether a disc is still alive, you could calculate (current_timestamp - last_reply_timestamp); if that is more than a few seconds, the disc hasn't replied for a long time and is probably dead. That works fine - until the timestamp wraps: The current timestamp is suddenly slightly above zero, the last reply from the disc has a timestamp close to 2^32, so the difference between the two is close to 2^32 - which could lead you to wrongly conclude that the disc hasn't replied in ages and is dead. The problem would also instantly disappear on the next poll, because then both timestamps would be in the low range again, causing the "dead" disc to be declared alive again.
Thus, declaring such problems occurring after exactly 497 days, 2 hours, ... a coincidence or the result of a fluke in the power grid is about as plausible as claiming that 6 identical computers crashing at 2000-01-01 00:00:00 is just a coincidence. It is far more likely that this is a major firmware bug.

PS: In case you're not convinced yet: I calculated back 497 days from the time of the first failure. And, not surprisingly, I arrived at the day the racks housing these disc arrays were cabled. What a coincidence, huh?

PS2: We're getting these boxes exchanged for unrelated reasons soon. And I sure hope that will happen before the next 497 days are over...
no comments yet

Mon, 17. Oct 2011

Flashing stock firmware with Heimdall Created: 17.10.2011 00:08
Please note: This article simply describes what I did to my Samsung Galaxy S2. It is provided in the hope that it will be helpful for others.
Neither will I assist you in doing the same to your phone, nor will I accept responsibility for damages to your phone while doing what I describe here.
This is mainly a continuation of the previous article. So, recently the first version of CyanogenMod supporting the Galaxy S2 was released. I naturally had to try it out. Unfortunately, I did not like it at all - the design made me want to puke, and the overall impression was just UGH. After about an hour, I desperately wanted my (rooted but otherwise) stock Samsung firmware back.
That turned out to be a tough task.
In theory, it would be very simple: Put the phone into download mode, which is basically a recovery mode where no real system is running, but the phone accepts files to flash - which is probably where the name "download mode" comes from. You could just flash the firmware from there, reboot and be happy with your new firmware. However, the Samsung way of firmware updates does not use that, probably because that would be too simple and not nearly error-prone enough.
Instead, firmware updates are done with KIES while the phone is in contact sync mode. KIES is a wonderful piece of bloatware that makes you wonder whether someone stole the SSD out of your computer, because it takes longer to start than Windows Vista on a cheap netbook. Once it has started up, it's supposed to do everything, including syncing contact info etc., and also flashing firmware. Unfortunately, it's crappy and bug-ridden. Sometimes it doesn't see the phone if you plugged it in before starting it; sometimes you have to do exactly that to make it work. Even when it does see the phone, it seems to be pure luck which options it offers you.
Of course, KIES will not flash the stock firmware back if your phone isn't running the stock firmware. That is fine in itself, though I'm not sure whether it's on purpose or just incompetence, because even with stock firmware, I've had exactly zero successful flashes with KIES so far. Either the "flash update" button is simply missing, or it DOES work and downloads a firmware update for half an hour - only to then tell you that it suddenly doesn't recognize the phone (which it talked to for the last half hour, and from which it backed up all data) anymore.
Luckily, Heimdall can not only be used to flash a rooted kernel, it can also flash a complete stock firmware image - you don't need to rely on strange things like "ODIN" for that. The firmware packages are usually ZIPped TAR files - unpack them (both the ZIP and the TAR within), and you'll get a directory full of files. Then start looking for the corresponding parameters to Heimdall. For example, there might be a file factoryfs.img - the corresponding heimdall parameter is --factoryfs. Most are obvious, some need a bit of thought: Sbl.bin usually is the secondary bootloader, parameter --secondary-boot. The complete commandline for my stock firmware was:

export KR=/tmp/I9100XWKI8_I9100OXXKI2_I9100XXKI4/
heimdall flash --factoryfs $KR/factoryfs.img --cache $KR/cache.img \
  --hidden $KR/hidden.img --modem $KR/modem.bin \
  --kernel $KR/zImage --param $KR/param.lfs \
  --primary-boot $KR/boot.bin --secondary-boot $KR/Sbl.bin

Yes, it's long - but unlike KIES, it worked like a charm. Just put the phone into download mode and flash away.
One question remains: Where to get the stock firmware images?
That's a question I'd love to hear a good answer to myself. Unfortunately, Samsung does not seem to provide them for manual download; KIES automatically downloads them from god knows where. So the only way to get them is through the same forums that provide the root kernels. I'd love to have a more reliable source, so if you know of any - leave a comment.
PS: A little off-topic, but I was thoroughly impressed with Titanium Backup. After going back to stock, it was able to restore everything from the backup I made before. And I mean everything - contacts, call history, received SMS, calendar and alarm clock entries, even Angry Birds and my progress in the game. This is an incredibly useful tool.
1 comment
Thank you so much! Helped me a lot!
luisajeronimo 22.06.2013 18:57


Sat, 06. Aug 2011

Rooting Samsung Galaxy S2 with Heimdall Created: 06.08.2011 17:59
Please note: This article simply describes what I did to root my Samsung Galaxy S2. It is provided in the hope that it will be helpful for others.
Neither will I assist you in doing the same to your phone, nor will I accept responsibility for damages to your phone while doing what I describe here.
For some time now, I have owned a Galaxy S2, and I have been quite happy with it. I didn't really feel the urge to root it - things worked well enough without rooting. However, then I tried to take some pictures of sleeping cats with it - and noticed that there is absolutely no way to turn off the annoying shutter sound the camera makes when it takes a picture without rooting the phone. It even makes that sound when the phone is set to silent. That's the sort of braindead design decision I'd expect from Steve Jobs, but on an otherwise nice phone?!
Anyways, my annoyance with this crap was large enough to decide to root the phone.

Google spits out tons of helpful threads on the topic; however, they all use some leaked software from Samsung named "Odin". That software is of course Windows-only, but I run Linux. And I didn't really like the idea of using rather fishy software of unknown origin that has been copied around for years either.
My search for alternatives turned up Heimdall. It's open source, runs on (probably) almost everything, and there are binaries for Windows, Mac and Debian Linux available. As it was easy enough, I simply decided to build it from source on my Ubuntu 11.04 system. The main system dependency was libusb-1.0-0-dev. Build process:

tar xvzf Benjamin-Dobell-Heimdall-v1.3.0-0-ged9b08e.tar.gz
cd Benjamin-Dobell-Heimdall-ed9b08e/
cd libpit/
./configure
make
cd ../heimdall
./configure
make

That's about it - you should now have a 'heimdall' binary in the current directory.

I then used a rooted kernel - i.e. a kernel that is essentially the stock kernel with minor modifications to include root access - from this cf-root thread on forum.xda-developers.com. I downloaded the modified kernel matching my current device kernel and unpacked it (it's a .zip file containing a .tar containing the zImage for the kernel).
Flashing that kernel to the device is then pretty straightforward:
  • Activate USB debugging on the S2 in settings / applications / development
  • connect the S2 to your computer via USB
  • Turn off the S2 and wait for it to shut down
  • Press the Home Button, Volume Down, and the Power Button at the same time and keep them pressed until the "Downloader" screen appears. It will display a yellow attention sign and a warning that you need to acknowledge.
  • Only if you don't want to run heimdall as root: Find out which bus and device ID the kernel assigned to your phone with lsusb and then chown that device to the user running heimdall, i.e. sudo chown fox /dev/bus/usb/002/022
  • Tell heimdall to flash the modified kernel: ./heimdall flash --kernel /tmp/zImage
That's it. The phone will automatically reboot as soon as heimdall exits. From now on, it will show an attention sign during boot to indicate that the firmware has been tampered with, but see the already mentioned forum thread for tips on how to get rid of that.

PS: Just in case you want to get rid of the annoying camera sound too: After rooting the phone, open a shell (look for Terminal Emulator in the market), and then enter this:

su
cd /data
echo 'ro.camera.sound.forced=0' >> local.prop

This will not directly "disable" the sound, but instead the shutter sound will then be controlled by the "system sounds" volume setting - as it should have been from the beginning. And in particular, if you set your phone to "silent", the camera will be silent as well.
You will probably need to reboot the phone for this setting to take effect.
no comments yet

Wed, 23. Mar 2011

Raised-floor modding, or: Pimp my raised floor Created: 23.03.2011 21:20
Last modified: 25.03.2011 17:00
There is now a hole viewing window in the raised floor of the server room, through which you can see the pipes carrying the cooling water for the new cluster.

What you can hardly make out due to the poor image quality: The hole is illuminated in blue from below.
It somehow reminds me of that old "Rambo" film: "What is that?" - "That is blue light." - "And what does it do?" - "It glows blue."
Update: Thanks to colleague T.R., there is now a picture of the blue lighting in the dark.

no comments yet

Sun, 20. Mar 2011

S2API and the complete lack of documentation Created: 20.03.2011 11:04
While adding support for DVB-S2 to getstream-poempel, I had the joy of having to figure out how S2API works.
S2API is the new API for DVB on Linux, and it is the only way to tune DVB-S2 channels, as the old API (which still exists) cannot do that. It also supports other standards, but DVB-S2 was the first and the one it was developed for, hence the name. It has been in the vanilla Linux kernel since 2.6.28.
S2API is a nice redesign of the API: Instead of having to add new structures for every new DVB standard like with the old API, it uses flexible name-value pairs. This is not a bad idea, and it seems to work well. What doesn't work so well is the fact that there is still no trace of any documentation anywhere. Many applications have at least experimental support for S2API by now, but it seems they all implemented it by copying from each other. I basically had to do the same and found it pretty annoying. The S2API article on the linuxtv.org wiki offers an interesting read on the history of the new API and the "API wars" that led to it, but actually providing info on how to use the new API would be better.
So, here is a writeup of my findings, in the hope that it will be useful to others that face the same problem. I also hope that the linux dvb guys get their act together and provide some useful documentation soon.
The first thing to check is whether the kernel (or rather, the kernel headers you're compiling with) supports the new API. For that, you need to #include <linux/dvb/version.h>, which defines DVB_API_VERSION. As S2API has been given version number 5, you need to check for DVB_API_VERSION >= 5. Please don't overdo the checking: While trying to compile scan-s2, a version of scan modified for S2 support, I could "admire" that it checks for the API version to be _exactly_ 5.0 - which is of course nonsense and makes the compile fail on every current kernel, as they have reached API version 5.1.
Up next is the basic concept of the new API: You give it a list of "commands" that it will execute. Those commands can either change settings or actually do stuff. For example, for tuning you'll usually send some settings, like the frequency, and then send a "tune" command as the last one, which will then tune to the settings you gave before. The commands are given in an array of type struct dtv_property. That struct contains a command ID, and the parameters for the command in a union. Like with the old API, everything needed is defined in <linux/dvb/frontend.h>. An example:

#include <linux/dvb/frontend.h>

struct dtv_property myproperties[] = {
  { .cmd = DTV_DELIVERY_SYSTEM, .u.data = SYS_DVBS2 },  /* Select DVB-S2 */
  { .cmd = DTV_FREQUENCY,       .u.data = frequency },  /* Set frequency */
  { .cmd = DTV_SYMBOL_RATE,     .u.data = srate },      /* Set symbol rate */
  { .cmd = DTV_TUNE }  /* now actually tune to that frequency (no parameters needed) */
};

The last line will start the actual tuning. To get that command array into the kernel, you first need another structure, of type struct dtv_properties, which tells the kernel how long the array is. It is therefore a very simple structure with only two members: a uint32_t num giving the number of commands in the array, and props, a pointer to the array with the commands. You then pass this struct to the kernel with an ioctl, like this:

struct dtv_properties mydtvproperties = {
  .num = 4,              /* The number of commands in the array */
  .props = myproperties  /* Pointer to the array */
};
if (ioctl(myfd, FE_SET_PROPERTY, &mydtvproperties) == -1) {
  perror("FE_SET_PROPERTY failed");
}

The last thing is a mystery I haven't solved yet: Although the new API does seem to offer commands for setting the voltage and the 22 kHz tone, namely DTV_VOLTAGE and DTV_TONE, all programs I looked at still did those two settings through the old API, by calling the extra ioctls FE_SET_VOLTAGE and FE_SET_TONE. On the one hand this isn't surprising, as due to the lack of documentation they had to copy off each other, but on the other hand I'm wondering whether there perhaps is a real reason for this. Perhaps those functions weren't in the very first version of the new API?
Thanks for publishing this, it's really helpful.

Let's hope the DVB API guys get their act together and actually document this stuff.
Mister Fishfinger 03.01.2013 15:57

Thanks! Very useful!
Ferranti 14.11.2014 16:10


EOPage - generated with blosxom