
GEGLs "Mono Mixer" mit Python verwenden

In praise of GEGL, gobject-introspection and Fedora

Background: Red Filter

This week I was finally able to tackle a problem that had been on my mind for quite a while: one of my customers regularly has to extract text from scanned prescriptions of the German statutory health insurance. In principle, a fairly simple OCR task.

However, even the blank form contains some text, e.g. the labels of the individual fields. The doctor's or pharmacy's print may well end up on top of this existing text. To make the OCR's job easier, the blank form's text is printed in shades of red, while doctors and pharmacies print in black.

With a red filter, all predefined text on the form can therefore be removed so that the OCR only sees the actual doctor or pharmacy print. The result is an image that contains only shades of gray. So far this has been done directly by the customer's "hardware"/special scanners, which use well-tuned algorithms for this.

However, as always, there are a few drawbacks:

  • The red filter also discards information, which makes any manual post-processing that may be necessary more difficult.
  • In some cases smaller scanners are used which do not have a red filter.
  • The most important point for me, however, is the inflexibility of the hardware filter: the doctor's and the pharmacy's print can differ considerably, since two different printers are involved. Sometimes one of the prints is barely visible (nearly exhausted ink ribbons/toner cartridges) while the other is strong and clear. Depending on the threshold, the red filter may remove the weak print completely. If the threshold is set too low, however, too much text of the blank form remains. So I would like to apply different thresholds to specific regions of the colored scan (possibly also depending on the OCR results).

Years ago I wrote a primitive filter myself. Most of the time it works "well enough", but it is very slow and hard to parametrize.

GEGL Mono Mixer

GEGL ("Generic Graphics Library") is an image processing library which was originally developed to provide the graphics operations for GIMP. However, GEGL is in no way tied to GIMP and can also be used completely independently of it.

In particular, GEGL provides an operation called Mono Mixer ("monochrome channel mixer"). This operation is exactly the red filter I was looking for.
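If I read the operation reference correctly, the Mono Mixer computes a single gray value per pixel as a weighted sum of the color channels, roughly gray = red * R + green * G + blue * B (subject to rescaling when preserve-luminosity is set). A large red weight with zero green/blue weights therefore pushes red text towards white, where it disappears into the background, while black print stays dark.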

/posts/2019/gegl-colorfilter/colorfilter-result.png

However, GEGL is mostly written in C, while the rest of the OCR pipeline is driven from Python. Fortunately, the GNOME project developed the great gobject-introspection library: it makes it possible to call suitably annotated C code from all kinds of languages such as Python or JavaScript without having to write dedicated C bindings.

The biggest drawback, from my point of view, is that there is only little documentation for GEGL. So perhaps the following section is interesting for others as well :-)

Code: Red Filter with GEGL and Python

#!/usr/bin/env python3
# Licensed under the Creative Commons Zero v1.0 Universal
# https://creativecommons.org/publicdomain/zero/1.0/
# SPDX-License-Identifier: CC0-1.0
import os
import gi
gi.require_version('Gegl', '0.4')
from gi.repository import Gegl
Gegl.init()
graph = Gegl.Node()
gegl_img = graph.create_child('gegl:load')
gegl_img.set_property('path', 'input.jpg')
colorfilter = graph.create_child('gegl:mono-mixer')
colorfilter.set_property('preserve-luminosity', False)
colorfilter.set_property('red', 4)
colorfilter.set_property('green', 0)
colorfilter.set_property('blue', 0)
gegl_img.link(colorfilter)
sink = graph.create_child('gegl:jpg-save')
sink.set_property('path', 'output.jpg')
colorfilter.link(sink)
sink.process()
# not calling "Gegl.exit()" to suppress some warnings, see also
# - GEGL issue 142: "GEGL warning and buffer leak after loading a file"
#     https://gitlab.gnome.org/GNOME/gegl/issues/142
# Gegl.exit()
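As an outlook (this is not part of my production code): the region-specific thresholds mentioned above should fit into the same graph, since GEGL also ships a gegl:threshold operation. The following sketch would replace the colorfilter.link(sink) line above; I took the property name from the GEGL 0.4 operation reference, so treat it as an assumption:

# hypothetical extension: binarize the mono-mixer output
# (assumes GEGL 0.4 provides "gegl:threshold" with a "value" property)
threshold = graph.create_child('gegl:threshold')
threshold.set_property('value', 0.5)  # tune per document (or per region)
colorfilter.link(threshold)
threshold.link(sink)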

Results

The color filter above (with adjusted RGB parameters) gives me excellent results. The new code is also about 4-5x faster and much easier to parametrize, so the recognition rate is better even for weak prints.

On top of that, I want to praise Fedora once more: all required packages are installed and ready to use with a simple dnf install gegl04 python3-gobject-base. Fedora 29 currently (March 2019) ships the fairly recent GEGL version 0.4.12 (released in October 2018). The latest upstream version is 0.4.14, which was released just three weeks ago (on March 01, 2019).


Ascendos – A New Hope?

Last week I came across a new Open Source project which looks very promising to me (and which has gotten almost no media coverage yet). Therefore I decided to blog about it right away: Ascendos.

Ascendos will be a free rebuild of Red Hat Enterprise Linux (RHEL), technically very similar to CentOS.

I started to write a bit of technical background on what is so interesting about RHEL, but it became too long, so you'll find that in a separate blog post.

Why is Ascendos so exciting to me?

So it looks like the time has finally come for a serious CentOS contender. Currently it's all vaporware but I'm hoping for the best.


The Downfall of CentOS

What features made CentOS successful

As you likely know, CentOS is a free rebuild of Red Hat Enterprise Linux (RHEL). I wrote a praise of RHEL which pretty much explains why RHEL is great.

I don't think it'll surprise you that there are a couple of RHEL rebuilds.

These rebuilds ("clones" as they are sometimes named) generally take the source rpms published by Red Hat, remove Red Hat's branding (required by Red Hat legal) and recompile the packages. The end result is usually free to use.

However even "just" recompiling is quite some work even if it can be done by individuals which enough time and skill. CentOS gained a lot of users quickly and always had a couple (about 2-6) core maintainers who did a lot of work. As a consequence the creators of Whitebox Linux and Tao Linux (both were RHEL clones as well) recommended switching to CentOS when they discontinued their work.

CentOS always had the major advantage that it promised the same support period as RHEL and strived for maximum compatibility with RHEL. CentOS did not add additional packages or patches unless it was unavoidable (e.g. adding yum to update a system in CentOS 4 instead of using the Red Hat Network).

And last but not least, CentOS had a community behind it, so the project was present at all major Open Source conferences and fairs, which might have increased its popularity as well.

Chronicles of a Downfall

However, even with all that popularity, problems began to mount. CentOS' downfall did not start with a major catastrophe but was more of a slow but steady process. During that time the CentOS core team members failed to react appropriately, and so things got worse.

Dag Wieers leaving

In June 2009 Dag Wieers left the CentOS project because of his level of frustration with the CentOS development team. This in itself didn't have a big impact on the project. However, it was a clear warning sign. When a highly skilled, well-known contributor leaves a project not for lack of time but because they are not satisfied with the project itself, it means that something is seriously wrong.

Dag is an experienced RPM packager who has been providing additional RPMs for Fedora and CentOS for a long time (he's also one of the guys behind RPM Fusion). From personal experience I can say that he's a nice guy, easy to deal with and for sure someone with good 'social' skills.

Now, while there were no changes in the project, some of the problems Dag was frustrated about became public just a month later…

Disrupted communication within the CentOS core team

CentOS was founded by Lance Davis in 2004. However, Lance stopped contributing a few years later. This resulted in the exceptional situation of an Open Letter to Lance Davis in July 2009.

The public learned that Lance was the sole owner of the CentOS.org domain as well as the CentOS PayPal account. For several years he did not even disclose to other core developers how much money was available, and the actual project ("project" being the people doing marketing and engineering) did not benefit at all from monetary donations.

Lance answered neither emails nor calls, even from the inner circle of CentOS developers. Tim Verhoeven even wrote: "We tried to contact Lance multiple times over a period of more then a year."

Ralph Angenent, another CentOS core member, wrote: "This means that the project depends on one person in too many ways. Add to that a person who doesn’t answer calls, isn't available as meetings, doesn't publish things he promised to do - we have a problem. And this is unacceptable. We as a project have to be more transparent. And this is one of the things blocking this."

As many users mentioned, this is unacceptable and not only a failure of Lance but IMHO also a project failure, because keeping quiet for so long means that problems are just swept under the carpet.

Dag Wieers' comment on the whole story is also an interesting read; he disclosed that Lance's position and absence was one of the things which drove him to leave the CentOS project.

The end of the story was that Lance showed up in an IRC meeting two days later and the issues were resolved. However, at least to me as an outsider, transparency did not really get better…

Release delays and missing security updates

Naturally, RHEL clones can publish their updates only after Red Hat. For CentOS, regular updates are done in a quite timely manner. However, every few months Red Hat releases 'minor updates' (similar to Windows service packs) like 5.1, 5.2, … Many CentOS users pay a lot of attention to the time it takes CentOS to get their release out, as the update also contains security fixes (comparisons with release dates of Scientific Linux are misleading, as the latter project continues to publish security fixes even when the full update is not yet released).

Wikipedia has the complete release history with the slack time relative to RHEL. Generally you can see that the slack time increases over time, both for CentOS 4 and 5, which means CentOS became slower to follow up on Red Hat (the quick release of CentOS 4.9 is likely because Red Hat doesn't change much anymore for RHEL 4.x at this point in the release cycle).

This culminated in a delay of 85 days to release CentOS 5.6 which is almost 3 months without security updates for a big install base! How is that for an enterprise (CentOS) distribution?

Actually, Karanbir Singh mentioned that critical/important security fixes would be published for CentOS 5.5 as updates, but in this case he failed by his own standards:

As you can see in the CentOS archive, there were no updates after the release of RHEL 5.6 (January 13, 2011). There was even an LWN article about that: CentOS 5, RHEL 5.6, and security updates.

Compared to that, the 242 days of delay to publish CentOS 6.0 are not that significant because there were no existing CentOS 6 users waiting for security updates. Though the enormous time tells something about CentOS' capacity.

Lack of Collaboration

Red Hat publishes their source RPMs, which is really nice because it makes rebuilding RHEL possible. However, rebuilding is still not trivial because of Red Hat's packaging errors (missing build requirements, for example) and non-obvious build system requirements. Scientific Linux documented the rebuild problems they encountered. Also, rebuilding from scratch (while keeping compatibility with RHEL) requires following a specific, undocumented build order.

But despite that, core members just deny that there is any "secret sauce" (1, 2, 3, 4, …).

Questions about privately rebuilding RHEL are frowned upon on the CentOS devel mailing list (for example 1), or people are put off with superficial answers (1, 2).

In the end there is a lot of fear of unofficial rebuilds and of giving away their own position:

Red Hat did not tell me how to build it. (...) Why should I tell someone how to build a replacement OS to CentOS.

(Johnny Hughes). Also, FUD tactics are used as excuses not to publish things.

There is one statement which summarizes the philosophy pretty well:

Our goal is not a reproducable system so YOU can build software, it is for US to produce software. If you are looking for a distribution that teaches YOU to build things, get Gentoo or Linux From Scratch.

Uh, wait. I always thought that sharing knowledge and enabling people to do things on their own was at the very heart of the Free Software Movement? Well, this brings us to the next problem…

CentOS is only a community of users

Many people criticizing the CentOS core team for their attitude towards the community don't understand how Karanbir and Johnny see the CentOS 'community'.

Karanbir, January 2011:

what do you think is your association with CentOS ? do you use it ? if so, you are a part of the community already. None ever said it was BUILT by a bunch of random driveby community members. Its built for a community and around a community.

Johnny:

Community because all the QA team, the forum moderators, the graphics team, the mailing lists and the wiki are all members of the community providing help. (…) CentOS is for the community ... it is not BUILT buy the community.

As you can see, even though CentOS is officially the 'Community Enterprise OS', the term 'community' only means a community of mere users.

"My way or the highway" attitude - failure to understand the essence of voluntary contributions

Karanbir Singh, November 2010:

Lots of people will argue that open source works in a way where people do what they want to do, so you cant tell them what needs doing - and they will do what they want, when they want. Its what many imagine is the 'fun' in the open source way. Fortunately, or unfortunately we dont have that luxury. What comes down the pipe needs to be addressed, sometimes its what we want to do - and sometimes its what needs doing because that's the issue on hand. The process we have in place is mostly finite, with a specified origin and a specified delivery expectation. We need to join those dots. And if people don't want to help with that joining-the-dots effort, they are never going to be a part of the process.

So when people imply that there are lots of potential-contributers who would want to get involved and help etc : What fantasy world are they looking at ? I, or one, would like to get in on that action please.

To me this post illustrates a severe lack of understanding of how voluntary contributions work. The first two sentences are mostly correct. However, the conclusions he draws from this ("if people don't want to help with that joining-the-dots effort, they are never going to be a part of the process.") are just shortsighted.

The essence of voluntary work is motivation. Potential contributors will have all kinds of motivations when they first try to help. It is crucial to allow them to work how they want initially. Over time, some will stick with the project and develop a sense of duty, so that they also care about the non-fun stuff. Others will contribute only one small thing, but even the little things add up as long as they are shared.

However, if you set a process in stone and throw some more or less boring tasks without much explanation over the fence (as Karanbir did initially), it's very unlikely you'll attract new contributors. So in a way Karanbir is right: there are not many potential contributors – for his definition of a potential contributor as someone who is willing to do even boring tasks without much questioning.

This ties in "nicely" with an attitude that many people offering help want to do their own/don't care for "the project".

That attitude is absolutely killing. Getting new volunteers into a free software project is usually quite hard. The project members have to work hard (in a social sense, not technically!) to constantly attract new developers. Any obstacles in that process will stop people from contributing very quickly.

Well, so it doesn't come as a surprise that very few attempts are being made to speed up the release process with the help of outsiders (1) and that there are still significant delays in the release process.

At the same time I find it interesting that Dag doubts that QA is the main bottleneck. Which in turn means getting more people involved in actually building CentOS might be a good thing…

Also, CentOS chose a way of working which requires extra work to pass on knowledge about the build process. As Johnny wrote, even in the case of a missing build requirement they don't modify the source RPM.

While I certainly like that CentOS tries not to change the upstream sources, I think this is a clear exaggeration: in the end the missing package is used in the build one way or another. The alternative (adding the missing BuildRequires line which Red Hat should've added) is quicker (no need to change the build root/restart the build process), passes the knowledge on to others (as it is recorded in the source RPM) and looks just like the right approach to the problem. Of course, filing bugs in the Red Hat Bugzilla should always be done.

Another limiting factor is that CentOS is very strict about not publishing anything before they are sure that all branding was removed (Karanbir's statement) unlike Scientific Linux which also publishes betas and the like.

A good example for the lack of understanding is a conversation between Douglas McClendon and Karanbir Singh where Karanbir does not accept any of Douglas' more significant proposals related to feedback and generally dismisses most of his ideas pretty strictly ("if you believe that, you are about as far away from CentOS as can be.", "No, that's a serious waste of time and effort.", "Whats so hard to understand about that ?").

Analysis of the failures

I think the current situation can be attributed to the three following project failures:

  • Showing the secret sauce only to core members: Details of the development process are completely intransparent. How packages are rebuilt and which special handholding is required for which package is not documented, because of an attitude of "this process works, we need help elsewhere". Many people and companies are interested in this (so they are potential contributors) but the project doesn't engage them.
  • No strategic community building: There is no strategic effort to constantly recruit new team members, which is required to keep a project alive in the long run.
  • Lack of trust and fear of uncontrolled work: You have to complete predefined QA tasks or something similar before you can even start thinking about fixing (maybe very small) issues. You can't even work on your own and just submit the results because you lack information from the very beginning unless you are a CentOS team member.

Is this the End of CentOS?

Now, even with all my ranting about the CentOS project, I don't think it is dead already. Certainly some users switched while waiting for CentOS 5.6/6.0 (mostly to Scientific Linux, I think). The bad news got some media coverage, which has some impact on CentOS' image, but the install base is still huge. There are still people attending fairs and conferences for CentOS. Karanbir Singh is still doing package rebuilding.

So the project is as alive as a year ago. Its weaknesses are just more obvious than before.

The most important thing keeping CentOS strong at the moment is that there is no real competitor right now. What is needed is a real community-based, open and transparent RHEL clone which does not add any features (like CentOS, even better if there is no such thing as "CentOS extras").

Even with such a contender, CentOS will remain an important distribution as long as the key people still invest time to maintain it. So currently it's a bit premature to talk about a 'downfall' of CentOS, but there is for sure an eclipse…


Why you should consider RHEL+clones for your Linux machines

If you're not familiar with the Red Hat Linux world, you might ask why you should bother at all looking at this enterprisey Linux. After all, Ubuntu is for sure the most successful desktop Linux distribution, which even has 'long-term' support (LTS releases). Alternatively, Debian might be seen as a stable distribution without too much commercial influence.

Well, Red Hat Enterprise Linux beats these alternatives single-handedly in my use cases:

7 years of security support for every major version (4.x, 5.x, 6.x) for all packages
This is already a significant advantage over Ubuntu's LTS for me, because Ubuntu's LTS does not really offer "long-term" support:

  • Packages from the universe/multiverse repositories don't have any long-term support; sometimes they are more like quickly created packages without a community of packagers maintaining them.
  • Desktop packages are only supported for three years; afterwards you're left alone.
  • And last but not least, it's only 5 years of support even for server packages.

I run a couple of Linux servers and even today (July 2011) I'm running 3-4 CentOS 4 boxes. CentOS 4 was initially released in 2005 and these servers have now been running for more than six years without problems.

Unlike with Debian installs, I did not have to upgrade my installation (with the need to adapt configurations because new major versions are incompatible with the previously installed ones). All I had to do was install the updates coming in through yum.

Red Hat invests a significant amount of time in updating all the old components (RHEL 4 comes with Linux kernel 2.6.9) in order to minimize disruptions. This means I can plan very well when to do a major upgrade of my servers. Currently I'm planning to retire the old CentOS 4 boxes, but I had more than two years for the actual migration, which really helps!

Fedora as a Technology Preview
At the same time I can run Fedora on my desktop or on servers which really need the "latest and greatest" technology. As RHEL is based on (older) Fedora releases, the administration is very much the same. I can create private RPM packages for Fedora and reuse them with minimal effort on RHEL/CentOS.

Fedora EPEL provides all other packages
Fedora EPEL is a Fedora subproject which provides additional high-quality RPM packages for RHEL/CentOS in a transparent, reliable way. It is based on Fedora's standards, policies, and technologies, so there are a lot of good packages which are updated constantly. Usually it's quite easy to jump in and fix things, and with co-maintainers it's not much of a problem if an individual packager no longer has time to update packages.

The high quality of these packages is also demonstrated by the fact that Red Hat often blesses EPEL packages as 'official' and distributes them as part of RHEL.

Red Hat/Fedora really take packaging seriously
It's a small point but comes in handy more often than most people think:

  • Each RPM has a changelog section where each packaging change is listed.
  • Usually packages have only the absolutely required dependencies, with subpackages for specialized functionality which requires additional packages. On the other hand, I found Debian sometimes to be too granular, which makes it hard to install just some meaningful functionality.
  • Package installation/updating can be done completely unattended. Fedora's packaging policy mandates that there is no user interaction. This might be bad for a "desktop installer"-style installation with initial configuration, but it ensures that you can set up systems completely unattended.
  • Services are not restarted automatically after updates. I was burned once on SUSE Enterprise Linux 9 when I updated some packages while waiting for all users to log off so I could start the actual maintenance. Unfortunately, several services were restarted automatically, which caused TCP connections to reset, which in turn caused data loss for some users.

Red Hat and Fedora take Free Software and community seriously
There is very little "secret sauce"; most of the stuff is done in the open. I don't remember Red Hat dictating big changes; new features are introduced through Fedora and people with enough merit can influence these decisions. Most (all?) software from Red Hat is released under a free software license.

Also, Red Hat provides source RPMs publicly and does not make any attempt to prevent free "clones" like CentOS and Scientific Linux: merging all kernel patches into one big patch is not a GPL violation and does not affect the community clones but rather Oracle as a commercial free-rider.

So that's why Red Hat earns more money every year even though they 'just' publish a Linux distribution.


MediaCore – an extensible video platform

In the last five years, web video has become more and more popular. Obviously YouTube has a big market share and provides a lot of functionality for the user. However, many companies, associations and organizations want to host their videos on a dedicated platform. The reasons I hear most often are:

  • Pages should blend in the main company site (styles, integration)
  • Show custom ads to make money
  • More flexibility, the ability to add custom features whenever you like

/posts/2011/mediacore/france.png Thanks to Quentin Theuret for translating this article into French.

About MediaCore

MediaCore is an open source video and podcast hosting software, first published in January 2010. Even the stock distribution package contains a lot of interesting functionality:

  • web videos using Flash or "HTML5", including popular players like Flowplayer and JW Player
  • embed videos from YouTube, Vimeo or Google Video
  • Support for podcasts
  • sophisticated storage backends (e.g. store media in the Amazon S3 cloud)
  • Social media integration (sharing on Facebook, twitter etc)

/posts/2011/mediacore/MediaCore_Website.png

The main development work is done by Simple Station, a Canadian design studio. However development happens transparently with a github repository, a user forum and a developer mailing list.

One thing to keep in mind with MediaCore is that it is currently not a turn-key solution for everyone. Some commonly requested features which are not implemented yet include:

  • automated video encoding (e.g. encoding uploaded videos to a format suitable for web viewing)
  • a user permission system: restrict upload ability, show videos only to some (paying?) users
  • In-video advertising support

Also, installation on very cheap web space without shell access is quite hard, sometimes even impossible. Many of these problems will be solved by plugins after the release of MediaCore 0.9 (mid-March 2011) or will be worked on in later releases. It's just not there yet.

Customization

So far, MediaCore customizations always required changes to the actual source code, which makes updates much harder because you need to re-apply your changes for every new release. With the upcoming 0.9 release, MediaCore supports plugins, so you can do a lot more customization without actually modifying the MediaCore source code. Examples of what plugins can do:

  • Change the overall style in arbitrary ways by overriding templates
  • Add new players
  • Trigger custom code after certain events (e.g. "new video uploaded")

With MediaCore 0.9 you can develop custom changes as plugins so it's way easier to update the stock MediaCore without having to change your code. See the next section for technical details.


MediaCore plugins

Currently (March 06 2011) there is no public documentation on how to write MediaCore plugins, so I decided to share my experience with you. However, this won't/can't be a complete guide for newcomers. I assume that you're familiar with Python basics.

Generally, plugins are developed using Python (backend code), Genshi (templating) and the usual web technologies (HTML, CSS, JavaScript) for the frontend. For the Python side, you might also want to check the homepage of Pylons, which is the underlying framework used by MediaCore. There is also The Definitive Guide to Pylons (online version) by James Gardner.

Structure

First of all, let's see the structure of a typical plugin (you can download this skeleton as well):

/posts/2011/mediacore/mediacore_plugin_structure.png

  • sample is the name of your Python module, so you can change that to any valid Python name.
  • mediacore_plugin.py is an arbitrary name but I like to use that as a central entry point where I import all my code.
  • If your plugin needs static resources like JavaScript files, style sheets or additional images, you need a folder named public below "sample" (or however you named your module). Obviously the static resources can have arbitrary names; I just added a CSS file named sample-style.css.
  • If you want to add new pages to your MediaCore, create a folder templates below "sample" (or however you named your module). Inside you put your Genshi templates; again, my-page.html is just an example.

Every folder of your plugin should contain an (empty) file __init__.py (this is because of how Python's setuptools/zipimport works). In the root folder, you need a setup.py file.

Example: Adding a New Page

To add a new page, you need a controller, a specific exposed method in that controller and a template. Let's look at an example:

# -*- coding: UTF-8 -*-
# File: sample/mediacore_plugin.py
# ------------------------------------------------------------
from mediacore.lib.base import BaseController
from mediacore.lib.decorators import expose
class MyController(BaseController):
    @expose('sample/my_page.html')
    def my_page(self, tag_name, **kwargs):
        # do your backend work here
        # …
        return {'tag_name': tag_name}
# ------------------------------------------------------------
__controller__ = MyController

If you use this example code (and fill in the missing pieces) in your plugin, your new page will be available as http://<mediacore>/sample/my_page?tag_name=Something. Please note that the sample part of the URL depends on your settings in setup.py (see the section about installation below), not on the name of your Python module (however, it makes sense to keep them the same).

In your template you can reference your static resources using ${h.url_for('/sample/public/<filename>')}.

If I remember correctly, every plugin can have only one controller. However a controller can have multiple exposed functions of course.

Example: Adding a New Player

MediaCore supports different types of players, especially HTML5 players (using the <video> tag and Javascript) and Flash players. The two important classes to check are mediacore.lib.players.AbstractHTML5Player and mediacore.lib.players.AbstractFlashPlayer.

I'll show you an example of how to build a custom player based on the FlowPlayer:

# File: sample/player.py
from mediacore.lib.players import FlowPlayer
class MyPlayer(FlowPlayer):
    name = u'my_player'
    display_name = u'Custom Player based on Flowplayer'
    # you could override flashvars() for example to add some player options

Afterwards you need to register your class, so add this to your mediacore_plugin.py:

# …
from mediacore.lib.players import AbstractFlashPlayer
from sample.player import MyPlayer
AbstractFlashPlayer.register(MyPlayer)
# …

If you installed your plugin after you initialized the main MediaCore database (paster setup-app development.ini), you need to register your player via SQL:

insert into players (name, enabled, priority, created_on, modified_on, data)
    select 'my_player', false, coalesce(max(priority), 0) + 1, now(), now(), '{}' from players;

Now your player should be visible in the admin settings ("Players" section) and you just need to activate it. All videos will use that player (if possible).

Installing your Plugin

MediaCore uses setuptools to find plugins. All found plugins are enabled automatically; there is currently no way to disable a plugin.

So that's why your plugin needs a setup.py file which looks roughly like this:

#!/usr/bin/env python
from setuptools import setup, find_packages
setup(
      name='SamplePlugin',
      version='1.0',
      author='Felix Schwarz',
      author_email='info@schwarz.eu',
      license='GPL v3 or later',
      packages=find_packages(),
      entry_points = {
          'mediacore.plugin': [
              'sample = sample.mediacore_plugin',
          ],
      }
)

Besides some meta information, the interesting part is the entry_point 'mediacore.plugin':

  1. The first part ('sample = …') defines the URL prefix under which your new pages/resources are found (http://<mediacore>/sample/…).
  2. Besides that, the line tells MediaCore where to look for the plugin (sample/mediacore_plugin.py). This file is executed when MediaCore loads its plugins, and you may use the __controller__ attribute here to add new pages.

Before you can use your new plugin, you need to install it (don't forget to activate your virtualenv first!):

python setup.py install

For development you'll likely want to use the 'develop' mode instead of 'install' so that you don't have to reinstall your plugin every time you make a code change:
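python setup.py develop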

That's my very rough primer on MediaCore plugins. I left out just one topic: events. That's because I think the API design makes them basically pointless.

I hope you liked this post anyway. In case of errors, please send me an email. Also remember, you can download the basic skeleton.


My Contributions and Adaptations

I contributed several patches to MediaCore, for example:

  • Basic internationalization (i18n) infrastructure and large parts of the German translation
  • Support for Python 2.6
  • Better diagnostics for plugin problems

These patches were created while I modified MediaCore according to the needs of my customers.

Commercial Customization Examples

While I can't share all the commercial customizations I did for customers, I'd like to give two examples of modifications to give you an idea of what can be done:

/posts/2011/mediacore/One_Goal.png

A German NGO launched a campaign during the Soccer World Cup 2010 in South Africa. They wanted a web platform where users (mostly schools) can upload videos and images. This required a German site and the ability to manage images in a MediaCore instance.


/posts/2011/mediacore/carousel_with_ads.png

A commercial portal hosts a lot of videos with MediaCore but lacked the ability to display in-video advertisements using OpenX. I developed a custom MediaCore player which shows a pre-roll advertisement. I also developed a video carousel which can be embedded into other sites (also showing the ads). All of this functionality is contained in plugins, so MediaCore can be updated easily.



Further Questions?

As always, if you have more questions about MediaCore, check out the user forum. If you want to ask about commercial support, remote installation and modifications, just send an email to info@schwarz.eu.


Free Software is no silver bullet for niche markets

Many people think of a solid application which doesn't cause too much trouble when they talk about Free Software. Most Free Software is given away for free, everyone can extend it as they like (given the necessary knowledge), and there is no marketing department which pushes for premature 'final' releases. Administrators often like that Free Software does not require any license management (if you just use it) and that bug reports are openly shared because of the absence of company politics which may try to maintain a zero-defect illusion even if the product is buggy as hell.

But the article's headline is 'Free Software is no silver bullet for niche markets', so you probably already expect that I will give you some counter-examples to the clichés above.

Pipe dreams meet real life

So today's example is Lx-Office (Wikipedia, sorry no English version available). Lx-Office is a web-based ERP and CRM system written in Perl and PHP and distributed under the GNU General Public License. It includes a module for financial accounting in companies. The ERP/accounting module of Lx-Office originated in SQL-Ledger (Wikipedia). After the fork from SQL-Ledger in 2003, Lx-Office was modified to meet German regulatory requirements.

In Germany, the majority of all tax advisors (approximately 80% according to Financial Times Germany) use software produced by a German cooperative named DATEV (Wikipedia). This software is really the heart of a tax advisor's services; different versions support almost everything from financial accounting and tax calculation to payroll accounting.

So you can imagine that for serious accounting software, data exchange with tax advisors is really a must-have. Though DATEV only offers proprietary software (which only runs on MS Windows), they sell an SDK which contains the complete specification of the KNE format ("Postversandformat"). With that format you can share accounting information with your tax advisor (e.g. the routine accounting is done in-house but the tax advisor assembles the balance sheet at the end of the year).

The KNE format is an 'old-style' binary file format with a long track record (if you include its predecessor, the OBE format, which was introduced in the late '80s). When I wrote libkne, a pure Python library to create and parse these binary KNE files, I tested my implementation against several real-world implementations. However, when I used the DATEV export on Lx-Office's demo server, I was very surprised to see that the exported files were not valid according to the specification!

Export bug known for several years

It turned out that the problem is caused by different date string formats. The export module DATEV.pm transforms a string which is presumed to contain a date ('25.12.2008', which is 2008-12-25 in ISO notation) into the KNE binary date format ('251208' - yes, the KNE format is not y2k-safe). As soon as you use a different date format (like '12/25/2008'), the export just puts in the date string '12/25/2008', ignoring the size restriction (6 bytes) and the format specification ('DDMMYY') completely.

What really puzzled me was that this bug has been known for 2 1/2 years and nothing has happened so far. In June 2006 there was a post in the user forum which explained the problem and even spotted the bug in the source code. Only thirty minutes later, another forum reader filed a bug report with a reference to the post. Six months later, a developer requested more information from the submitter (which looks silly to me, as all the necessary information was already given), but no further action was taken until now (December 2008). That means that the DATEV export of Lx-Office has been non-functional for 2 1/2 years unless you use one specific date format!

Looking at the source

As always, we can have a look at the source to reveal the issue:

# Code taken from SL/DATEV.pm (r3482)
# https://lx-office.linet-services.de/svn/trunk/unstable/SL/DATEV.pm
# Licensed under the GPL v2+
sub datetofour {
  $main::lxdebug->enter_sub();
  my ($date, $six) = @_;
  ($day, $month, $year) = split(/\./, $date);
  if ($day =~ /^0/) {
    $day = substr($day, 1, 1);
  }
  if (length($month) < 2) {
    $month = "0" . $month;
  }
  if (length($year) > 2) {
    $year = substr($year, -2, 2);
  }
  if ($six) {
    $date = $day . $month . $year;
  } else {
    $date = $day . $month;
  }
  $main::lxdebug->leave_sub();
  return $date;
}

So the datetofour function gets a date string $date and a boolean flag $six and returns the KNE date string. The problem is that the function assumes a certain date format in the 'split(/\./, $date)' call (line 481 in the original file as of r3482). But this date format can be configured by the user, so the function should query the configuration for the format to use.
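Just to sketch the shape of the fix in Python (the real fix would be in Perl and would have to use Lx-Office's actual configuration API; the format table below is made up):

# hypothetical sketch only: parse using the user's configured format,
# then render the fixed-width KNE representation
from datetime import datetime

DATE_FORMATS = {
    'dd.mm.yyyy': '%d.%m.%Y',   # e.g. '25.12.2008'
    'mm/dd/yyyy': '%m/%d/%Y',   # e.g. '12/25/2008'
}

def datetofour(date_string, configured_format, six=True):
    parsed = datetime.strptime(date_string, DATE_FORMATS[configured_format])
    return parsed.strftime('%d%m%y' if six else '%d%m')

assert datetofour('12/25/2008', 'mm/dd/yyyy') == '251208'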

Conclusions

I presented one example of a Free Software application which has a serious known bug that was not fixed for more than 2 1/2 years (though fixing it is probably only a matter of one or two lines). The project itself is backed by two small companies and the software is used by quite a lot of people (SourceForge counted 4468 downloads for Lx-Office ERP beta1 between 2008-08-12 and 2008-12-25). The project follows a traditional 'open source' development approach (public bug tracker and source control repository, community support through web forums) and uses a well-known license (GPL 2+). So this is clearly an example of a 'real' Free Software project.

Nevertheless, the project failed to fix a simple bug, so users cannot use functionality which is quite important in many enterprise use cases, considering the extensive German regulatory policies. Obviously this functionality is not important enough for most users of Lx-Office. Probably (speculation!) many of them are in the 'low budget' segment which does not regularly exchange financial data with their tax advisors.

Admittedly, financial accounting is really a niche use case (highly dependent on the country you're operating in; laws and policies change on a regular basis), so the user and developer numbers are quite small (compared to other software like Linux, Mozilla Firefox or OpenOffice). But it still gives evidence for my initial thesis that Free Software is not necessarily a solution for you. While Free Software in very common use cases can excel over its proprietary competitors (e.g. the Apache HTTP Server), you may find that you should evaluate Free Software in niche markets very carefully.

Brief remark: 'Batteries included' is the right approach for class libraries

While I described what code caused the bug on a technical level, the real (technical) problem for me lies below that: in Perl 5 there is no built-in data type to represent dates. So the Lx-Office developers had to fall back on passing dates around as strings. If Perl shipped an included module for date representation and manipulation, the bug would probably never have occurred, because things like date formatting would only influence the visual presentation.

So I think that on the class library level, Python's approach of batteries included is the right one. A decent(tm) programming language needs to ship modules for date manipulation (even a less-than-optimal API like Python's is better than a completely missing one). Just having add-on modules for that is not sufficient, because not all developers use them and it's hard to use multiple independently developed third-party modules with each other, because they will probably use incompatible date representations...
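To make this concrete, here is a minimal Python sketch: with a real date type, the user-visible format and the export format are just two independent renderings of the same value, so a configuration change cannot corrupt the export.

from datetime import date

# one value, no inherent formatting
christmas = date(2008, 12, 25)

# user-facing renderings, driven by configuration
print(christmas.strftime('%d.%m.%Y'))   # 25.12.2008
print(christmas.strftime('%m/%d/%Y'))   # 12/25/2008

# the KNE rendering ('DDMMYY', 6 bytes) is fixed, independent of the above
assert christmas.strftime('%d%m%y') == '251208'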


How to make input validation really complicated

Thanks to Vera Djuraskovic there is a Serbo-Croatian translation of this article available.

In every web application you need to validate your input data very carefully. Data validation is a very common task, so it's not surprising that there are several validation libraries in Python (like formencode) which include validators for common tasks. However, Trac does not integrate any of these libraries, so every plugin developer has to write their own validation code.

Now look at how complicated you can make checking that a string contains only the characters a-z, 0-9, hyphens ('-') and underscores ('_'):

# Only alphanumeric characters (and [-_]) allowed for custom fieldname
# Note: This is not pretty, but it works... Anyone have an easier way of checking ???
f_name, f_type = customfield[Key.NAME], customfield[Key.TYPE]
match = re.search("[a-z0-9-_]+", f_name)
length_matched_name = 0
if match != None:
    match_span = match.span()
    length_matched_name = match_span[1] - match_span[0]
namelen = len(f_name)
if (length_matched_name != namelen):
    raise TracError("Only alphanumeric characters allowed for custom field name (a-z or 0-9 or -_).")

Please note how deep the author dug into Python's re library to find the span() method. So he first looks for an acceptable substring, computes the position of this substring, derives the length of the substring from that, and checks if the length of the substring equals the length of the whole string.

At least the author had some doubts about whether his solution was the most elegant one (see the second line of the snippet above). So a simpler method of checking could be:

# Only alphanumeric characters (and [-_]) allowed for custom fieldname
f_name, f_type = customfield[Key.NAME], customfield[Key.TYPE]
match = re.search("([a-z0-9-_]+)", f_name)
if (match == None) or (match.group(1) != f_name):
    raise TracError("Only alphanumeric characters allowed for custom field name (a-z or 0-9 or -_).")

So now we got rid of all the index arithmetic. But you can still make it much simpler than that by properly anchoring the regular expression:

# Only alphanumeric characters (and [-_]) allowed for custom fieldname
f_name, f_type = customfield[Key.NAME], customfield[Key.TYPE]
if re.search('^[a-z0-9-_]+$', f_name) == None:
    raise TracError("Only alphanumeric characters allowed for custom field name (a-z or 0-9 or -_).")

Of course there is also re.match, but I try to avoid it due to personal issues with the method - I produced some bugs when using re.match previously (note that it only anchors at the start of the string, so you still need the trailing $).
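And if you'd rather avoid regular expressions entirely, a plain set comparison does the job as well (my own sketch, not taken from the plugin):

# validate without regular expressions: compare against the allowed character set
import string

ALLOWED_CHARS = set(string.ascii_lowercase + string.digits + '-_')

def is_valid_fieldname(name):
    return bool(name) and set(name) <= ALLOWED_CHARS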

You wonder which software ships this mess? This code is part of a plugin which helps you manage custom fields for your Trac tickets (CustomFieldAdminPlugin). The overall code quality of the module is also quite poor, so if you'd like to spend some time learning refactoring, this is a good place to start.


TurboMail 3.0 Alpha1 released

For quite a while no new versions of TurboMail were released, and indeed TurboMail 2 just worked well for many users. But now there is an alpha release of the upcoming TurboMail 3.0, which is nearly a complete rewrite of TurboMail 2. With the new code structure we were able to add some nice features:

  • No strong coupling to TurboGears anymore (use TurboMail in command line scripts if you like!)
  • Different delivery methods like SMTP, local mailbox etc.
  • Different delivery strategies (queuing and immediate delivery)

To get more people involved, we decided to release an alpha version. That means the most important functionality is already in, but some things are missing right now. Furthermore, there are some small known bugs (sending pre-formatted messages won't work), but we can work on that later. Besides that, I have been using this alpha (with some custom patches) on production servers for several months and it works really great so far.

Download

Documentation

I tried to build comprehensive documentation for every feature. This is still a work in progress, but at least some docs are there:

If you have questions, patches or just want to rant about the alpha, please use the mailing list. The roadmap page shows the missing tasks. I'm currently writing a Python MTA to get real SMTP tests in again.


Migrating data from MS Access to MySQL easily

MySQL AB: If we are too slow, your bug must be invalid

Most people will know the MySQL database. MySQL AB (recently bought by Sun) is the company behind this open source database and pays most of the developers working on the code.

Some years ago MySQL AB released a really nice tool which they named MySQL Migration Toolkit. It's a Java GUI application which can be used to copy data from other databases (like Microsoft Access) to a MySQL database. The tool even generates a Lua script file so that you can automate the migration process afterwards.

In a recent project I was hired to build a replacement for a custom Access application. Part of the work was to create a new database schema. Of course, the old data had to be migrated so that it wouldn't be lost. The chosen production database was PostgreSQL due to its solid track record when it comes to SQL features like transactions, referential integrity and all this boring stuff.

I figured out that it would be much easier for me to first convert the data from the old database to an open source database and to do the structure conversion in a second step with custom tools.

This was when the Migration Toolkit came into play: it copied the data from Access to MySQL, and I wrote some scripts to get that data into Postgres. It worked really well after a very short time and everyone was happy. Sure, there were some initial problems (e.g. the Migration Toolkit works only with Java 1.5, not 1.6), but these were solved quickly.

Some weeks later I discovered that some data in my new database differed from the Access data: in Access the value was 0.075, MySQL stored 0.08. This happened for all columns which were specified as "Single" with automatic precision in MS Access. The MySQL bug tracker revealed that others had the same problem: Access singles are converted to MySQL 7,2 double.

Unfortunately, this bug was marked as 'invalid' when I found it two years after its creation (because the MySQL developer investigating it did not read the bug description correctly, so he was not able to reproduce it). Bugs like this are far from unusual, and so I was quite confident that I could jump in and help fix it. Unfortunately, the bug was closed, and therefore I was sure that no MySQL developer would ever look at it again. And their bug tracker won't let a normal user re-open a closed bug.

I pinged another MySQL developer directly (Susanne Ebrecht) and she reopened the bug :-) I followed her advice and tried to reproduce the problem with newer versions of MS Access. Clearly, the bug was still there...

Unfortunately, no MySQL developer had enough time to look at the report again and so an automatic process closed the bug again:

No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".

Maybe I'm just too stupid, but I was not able to re-open it. Maybe only the original reporter can. But the main problem is the crappy configuration of the MySQL bug tracker, which closes bugs marked as 'need more info' after 30 days, which is quite short. Of course you need some sort of 'timeout' in a bug tracker, because otherwise you get swamped with useless bug reports no one is able to reproduce, so they stay open forever.

Other bug trackers implement this better: if a bug has the NEEDINFO status in Bugzilla, there is a checkbox "I am providing the requested information for this bug." and afterwards the status is set back to normal.

The problem with MySQL was that I had provided all the necessary information and the bug was closed only because MySQL AB worked too slowly!

This is an example of how not to handle bug reports! If you don't have enough time to triage all bugs within a short period of time, then don't set the timeout to 30 days!

Which left me alone with the rounding problem... Luckily, I was able to use a kind of "dirty hack" to work around the whole issue: the problem is that the generated MySQL database schemas are too imprecise. The rounding problem does not occur until the real data is copied, which happens at a later stage.

So I extended the Lua migration script to call a Python script of mine. This script just changes all columns with the type 'double(7,2)' to 'DECIMAL(7,4)'. And this fixed the rounding errors for me. Of course it would be nicer if a proper fix were included in the Migration Toolkit itself, but this seems to be out of reach...

Maybe my workaround can save you some trouble or time, so I put my dirty hack online. Help yourself!

My Lua adaptation:

(...)
  -- create database objects online
  print("Create database objects.")
  grtM.callFunction(
    grtV.toLua(targetConn.modules.TransformationModule), "executeSqlStatements",
    {targetConn, "global::/migration/targetCatalog", "global::/migration/creationLog"}
  )
  grt.exitOnError("The SQL create script cannot be executed.")
end
-- added this line to execute my Python script
os.execute("C:\\Python24\\python.exe \"c:\\migration\\fix_access_single_migration.py\"")
-- ------------------------------------------------------------------
-- checkpoint 5
-- Bulk data transfer
(...)

And this is my Python script (fix_access_single_migration.py):

#!/usr/bin/env python
# License: MIT license
# Copyright (c) 2008 Felix Schwarz <felix schwarz AT oss schwarz eu>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# 
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
# 
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
import MySQLdb
hostname = 'localhost'
user = 'root'
password = ''
db = ''
def get_all_tables(cursor):
    tables = []
    cursor.execute('show tables')
    rows = cursor.fetchall()
    for row in rows:
        tables.append(row[0])
    return tables
def find_double_fields(cursor, tables):
    problematic_fields = []
    for table in tables:
        cursor.execute(u'show fields from `%s`' % table)
        rows = cursor.fetchall()
        for row in rows:
            col_name = row[0]
            col_type = row[1]
            if col_type == u"double(7,2)":
                problematic_fields.append({"table": table, "col_name": col_name})
    return problematic_fields
def fix_fields(cursor, problems):
    for problem in problems:
        table = problem["table"]
        col_name = problem["col_name"]
        sql_template = u'ALTER TABLE `%s` CHANGE COLUMN `%s` `%s` DECIMAL(7,4)'
        cursor.execute(sql_template % (table, col_name, col_name))
conn = MySQLdb.connect(host=hostname, user=user,
                       passwd=password, db=db,
                       use_unicode=True, charset="cp1250")
conn.begin()
cursor = conn.cursor()
tables = get_all_tables(cursor)
problems = find_double_fields(cursor, tables)
fix_fields(cursor, problems)
conn.commit()
conn.close()