How to be a responsible web developer/web admin


With the GDPR coming into force, I stumbled upon several topics and issues people were facing while trying to make their websites compliant with the new requirements. This led me to write this little tutorial on how to make your website more secure and less prone to user tracking.
Not all of the topics are GDPR-related, as this is NOT another “how to make your website GDPR compliant” article. Rather, it is a collection of things you, as a web developer or administrator of, say, a wordpress site, should be aware of. Basically it’s a “how to not fuck around with the security of your users”.

Google Fonts

A lot of people use “Google Fonts” on their websites, as do most wordpress themes. That is not a bad idea per se. The problem is that you cannot be sure every visitor has the font you are using installed. So if you set Arial as your font and someone doesn’t have it installed on their computer, the text is not rendered with it. To be precise, every web browser provides a default font it falls back to in such cases, so the website is simply displayed with a fallback font instead of the one you wanted. To ensure everyone sees the same font, you can use Google Fonts, where the fonts are loaded directly from Google.

The Problem: With every visit, the IP address of the user is transmitted to Google. So in effect Google can uniquely track every user across every site that uses Google Fonts. This is bad, and you shouldn’t be part of that problem.

The Solution: If you are a web developer, please consider shipping the font as a TTF file yourself, as described here.
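Self-hosting boils down to shipping the font file with your site and declaring it in CSS. A minimal sketch; the font name and file path are made-up examples:

```css
/* Hypothetical example: Roboto shipped as /fonts/roboto.ttf alongside the site */
@font-face {
  font-family: "Roboto";
  src: url("/fonts/roboto.ttf") format("truetype");
  font-weight: 400;
  font-style: normal;
}

body {
  /* no request to Google: the font is served from your own server */
  font-family: "Roboto", sans-serif;
}
```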
If you use wordpress, Google Fonts are very likely loaded by your theme. To verify this, run the following Linux command in your wordpress root folder:

grep -ir "fonts.googleapis" .

If you get any results, Google Fonts are used by one of your themes. The usual way to load them under wordpress is via the following PHP function:

wp_enqueue_style( 'Gfonts', 'https://fonts.googleapis.com/css?family=Roboto:300,300italic,regular,italic,500,700', array(), null );

The values may differ in your case, but the fonts.googleapis URL is the important part. To disable the call, just put a # in front of that line in every PHP file found by the grep command above. You can verify you caught every line by running

grep -ir "fonts.googleapis" . | grep -iv "#"

If you get no results, you have successfully removed Google Fonts from your website. Please remember to check whether your site still looks good afterwards.
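If a theme has many such lines, the comment-out step can be scripted. A sketch under the assumption that each enqueue call fits on a single line; keep the .bak copies until you have checked the site:

```shell
# Run from your wordpress root. Makes a .bak copy of each affected file,
# then comments out every line that references fonts.googleapis.
# Assumes each enqueue call fits on one line -- review the changes!
grep -rl "fonts.googleapis" --include='*.php' . | while read -r f; do
  cp "$f" "$f.bak"
  sed -i '/fonts\.googleapis/ s/^/#/' "$f"
done
```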


The Referer Header

When you click a link, your browser typically sends the HTTP referer header to the webserver hosting the destination page. This header contains the full URL of the page you came from, which lets sites see where their traffic comes from. The header is also sent when external resources (such as images, fonts, JS and CSS) are loaded.

The Problem: if you link to an external site, that site can track the movement of your users as soon as they follow a link to it. Combined with cookies, this is one of the most effective tracking mechanisms out there. In fact, you should not picture this as tracking by a single page, but by a tracking network consisting of hundreds of thousands of pages, where it is quite likely that the same user will visit another site belonging to the network. Also think of facebook buttons on a website: as soon as a logged-in facebook user visits a site with a facebook button (which, by its nature, refers to facebook), facebook knows that this user visited that exact page.

The Solution: A great invention called “Referrer Policy” gives you the possibility to prevent the transfer of such data to other sites. It covers not just clicked links but also other referrers, such as linked images.
You should have a look at the different policies. However, I recommend no-referrer, which disables the referrer header completely. To enable it, add the following meta tag to the header.php of your wordpress theme or to your normal HTML head:

<meta name='referrer' content='no-referrer'>

Also you can use this wordpress plugin.
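If you prefer to enforce the policy server-side instead of in the markup, the same effect can be achieved with an HTTP response header from Apache (requires mod_headers, just like the .htaccess examples in the next section):

```apache
Header set Referrer-Policy "no-referrer"
```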

HTTP Headers

This part focuses on the security of your website. We want to prevent XSS, MIME sniffing of the content type, and framing of the website. We use the following header options for this purpose:

X-XSS-Protection – tells browsers to block reflected XSS attacks
X-Frame-Options – controls whether the site may be embedded in a frame (clickjacking protection)
X-Content-Type-Options – keeps the browser from MIME-sniffing the content type away from the declared one
Strict-Transport-Security – tells the browser to contact the site only via https (HSTS)

Here is a complete .htaccess file with all the Headers set:

Header set X-XSS-Protection "1; mode=block"
Header always append X-Frame-Options SAMEORIGIN
Header set X-Content-Type-Options nosniff
Header set Strict-Transport-Security "max-age=15768000"

Just place the .htaccess file in your webroot or append the code above to any existing .htaccess file. Please note that you need to enable the apache module mod_headers for this to work.
You can ensure the changes worked via curl:

$ curl -I URL
HTTP/1.1 200 OK
Date: Thu, 24 May 2018 19:05:14 GMT
Server: Apache/2.2.15 (CentOS)
X-Powered-By: PHP/7.2.0
X-Robots-Tag: noindex
X-Content-Type-Options: nosniff
Access-Control-Expose-Headers: X-WP-Total, X-WP-TotalPages
Access-Control-Allow-Headers: Authorization, Content-Type
Allow: GET
Strict-Transport-Security: max-age=15768000
X-Frame-Options: DENY
X-Xss-Protection: 1; mode=block
Connection: close
Content-Type: application/json; charset=UTF-8
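Checks like this can also be scripted instead of read by eye. A small sketch that greps a saved curl response for a header; the response below is a made-up example, on a live site you would capture it with curl:

```shell
# check_header RESPONSE NAME -> succeeds if header NAME is present.
check_header() {
  printf '%s\n' "$1" | grep -qi "^$2:"
}

# Example with a captured response; for a real site you would use
#   headers=$(curl -sI https://your-site.example)
headers='HTTP/1.1 200 OK
Strict-Transport-Security: max-age=15768000
X-Content-Type-Options: nosniff'

check_header "$headers" "X-Frame-Options" || echo "X-Frame-Options MISSING"
```

With the sample response above, the check reports the missing X-Frame-Options header.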


HTTPS

I shouldn’t need to say much about SSL and the need for an encrypted website. Since letsencrypt launched, there is no excuse for not having a valid SSL certificate. What you should do is enforce the use of https even when someone accesses the website unencrypted. This can be done in the .htaccess file:

RewriteEngine on
RewriteCond %{HTTPS} !=on
RewriteCond %{ENV:HTTPS} !=on
RewriteRule .* https://%{SERVER_NAME}%{REQUEST_URI} [R=301,L]

While doing this, you should also ensure that you only link to resources such as images via https.
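To hunt down hard-coded http:// resources, a simple grep over the webroot helps as a first pass. It only catches literal src/href attributes; dynamically built URLs still need a manual check:

```shell
# List hard-coded http:// resource links in PHP and HTML files.
grep -rniE '(src|href)="http://' --include='*.php' --include='*.html' . \
  || echo "no insecure links found"
```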

Run the configure.ac File


Just a short note for everyone who expects a configure file in a piece of software they want to build, but finds only a configure.ac file. That file is the input for the so-called autoconf command, which generates the configure script. Just run autoconf to generate it.

Please mind that you need a few packages installed for this to succeed: typically autoconf itself, plus tools such as automake, libtool and pkg-config, depending on the project.

If one of those packages is missing, you might end up with an error like: error: possibly undefined macro

Sometimes the configure process also needs auxiliary files and macros to be pulled in. To do so, run autoreconf --install

NetworkManager is not running


Yesterday my NetworkManager suddenly stopped working. After a reboot, the tooltip “NetworkManager is not running” appeared on my nm-applet. When I tried to start the service, I got the following message:

$ sudo systemctl start NetworkManager.service
Job for NetworkManager.service failed because the control process exited with error code.
See "systemctl status NetworkManager.service" and "journalctl -xe" for details.

As suggested I had a look at

$ journalctl -xe
-- Unit NetworkManager.service has finished shutting down.
Mar 20 08:27:38 hugo systemd[1]: NetworkManager.service: Start request repeated too quickly.
Mar 20 08:27:38 hugo systemd[1]: NetworkManager.service: Failed with result 'exit-code'.
Mar 20 08:27:38 hugo systemd[1]: Failed to start Network Manager.
-- Subject: Unit NetworkManager.service has failed

And this is where my problems started. The message is clear: somehow the service was started too often within a short time frame. But how?

This is how: if you have a look at /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service you’ll find an entry like the following (on my system it was the Restart= line; the exact value may differ):

Restart=on-failure

This entry is the reason for the multiple startups of NetworkManager in a short time. To get to the actual error message, we temporarily disable it by commenting it out:

#Restart=on-failure

Then we need to reload the daemon:

sudo systemctl daemon-reload

and now we can have a look at the original error message:

$ sudo systemctl start NetworkManager.service
Job for NetworkManager.service failed because the control process exited with error code.
See "systemctl status NetworkManager.service" and "journalctl -xe" for details.
$ journalctl -xe
-- Unit NetworkManager.service has begun starting up.
Mar 20 08:27:51 hugo NetworkManager[16335]: /usr/bin/NetworkManager: error while loading shared libraries: cannot open shared object file: No such file or d>
Mar 20 08:27:51 hugo systemd[1]: NetworkManager.service: Main process exited, code=exited, status=127/n/a
Mar 20 08:27:51 hugo systemd[1]: NetworkManager.service: Failed with result 'exit-code'.
Mar 20 08:27:51 hugo systemd[1]: Failed to start Network Manager.

So in my case a library was missing, which I could fix quickly with a symlink:

$ cd /lib
$ sudo ln -s
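A side note: instead of editing /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service directly, a drop-in override achieves the same and survives package updates (assuming a systemd version that has systemctl edit; Restart=no shown here as an example override):

```shell
sudo systemctl edit NetworkManager.service
# in the editor that opens, add:
#   [Service]
#   Restart=no
# then save and run:
sudo systemctl daemon-reload
```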

Remove Subfolders from your Dovecot Inbox


I run Dovecot as my mail server and had the problem that I wasn’t able to remove a subfolder of my Inbox. Thunderbird just reported that the action could not be performed. So I decided to remove the folder myself, and this is how you do it:

1. Go to your maildir. In my case it’s /var/vmail; if you have a different configuration, have a look at your configuration file
2. Navigate to your mail server and the account you want to change
3. Find the file subscriptions

This file lists every (sub)folder your mail client fetches, so even the Trash and Sent folders appear here. To unsubscribe from a folder, just remove its entry from the file. Then restart your mail client, because it only reads the subscriptions at startup.
The benefit of this approach is that you don’t lose the messages in the subfolder: you can re-subscribe at any time by simply adding the folder name again.

Hint: make sure to check the file permissions of subscriptions afterwards; it needs to belong to user and group vmail.
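The manual edit can also be scripted. A sketch with made-up values; the real subscriptions file would live under your maildir as described above:

```shell
# Hypothetical example values; on a real server the file would be something
# like /var/vmail/example.org/alice/subscriptions
SUBS=$(mktemp)                      # stand-in for the real subscriptions file
printf 'INBOX.keep\nINBOX.oldstuff\nTrash\n' > "$SUBS"

FOLDER="INBOX.oldstuff"             # the folder to unsubscribe from
sed -i "\|^${FOLDER}\$|d" "$SUBS"   # delete exactly that line

cat "$SUBS"
# on the real file, restore the expected ownership afterwards:
# chown vmail:vmail "$SUBS"
```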

Xorg (or rather lightdm) does not start


Yesterday evening I connected an HDMI cable to my netbook, but the TV on the other end was not detected. I suspected a problem with my drivers (I had never used the HDMI port before) and tried several things to get the driver working.
In the end it turned out the HDMI cable was simply broken. Very annoying.
Even more annoying: this morning no X server came up at boot. I still had full access to the TTY consoles, and the regular syslog contained nothing helpful. So I tried to start lightdm by hand, which only produced the error message

** (lightdm:289): WARNING **: Error getting user list from org.freedesktop.Accounts: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.Accounts was not provided by any .service files

Some research suggested it might have to do with the greeter. So I uninstalled lightdm-gtk-greeter and set up lightdm-webkit-greeter instead. No difference.

I finally found the actual problem in the Xorg.log:

(EE) Failed to load module "intel" (module does not exist, 0)
(EE) No drivers available.

Fatal server error:
no screens found

Now it was clear: the greeter could not be at fault, since Xorg already fails while loading the display component. I hadn’t changed anything about Xorg myself, but suspected that one of the packages I had installed to fix the supposed HDMI problem had modified something here. I adjusted the xorg.conf as follows:

Section "Device"
Identifier "card0"
#Driver "intel"
Option "Backlight" "intel_backlight"
BusID "PCI:0:2:0"
EndSection

And behold: after that the device booted just fine! Xorg hardly needs any configuration these days; driver autodetection usually does its job, but it is bypassed when a driver is specified explicitly in the config file.

Of course I then wanted to know which service had made that entry. After some research it turned out: it was the Display Settings dialog of xfce!
It offers a small checkbox at the bottom: “Configure new displays when connected”. If you tick it, the entry in xorg.conf apparently gets adjusted accordingly, and in my case that kept the X server from starting.


Error message: package-query: requires pacman<4.3


When I tried to run a system update today via

yaourt -Syyu

I got the following error message:

error: failed to prepare transaction (could not satisfy dependencies)
:: package-query: requires pacman<4.3

The same happened with an update via pacman.
A check via

pacman -Qs pacman

told me that pacman-4.2.1 was installed, so the condition <4.3 should actually have been met. A quick search revealed that the current pacman version had just been moved from "testing" to "core". So the problem wasn't the pacman version but the dependencies of package-query, which is why I simply updated that package explicitly:

yaourt -S package-query

and afterwards a normal update worked flawlessly again.

I was too hasty, though. On the next update attempt,

package-query: error while loading shared libraries: cannot open shared object file: No such file or directory

appears, which makes sense: package-query now has to be reinstalled so that it is compiled against the new pacman. The best way is the package build from the AUR:

makepkg -sri

After that, the current package-query version is installed.

If you have the antergos repo enabled, you get the current version via a regular update.

Thanks for the hints that helped complete this guide.

Amavis blocking Postfix


A quick tip: if you ever stop receiving mail and your log files contain something like

connect to[]:10024: Connection refused)

the spam filter amavis is the culprit. A quick restart of postfix and amavis should help:

service amavis restart
service postfix restart

No mail is lost in the process; it is just delivered with a slight delay.
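Before restarting, you can quickly check whether anything is listening on amavis' port at all (10024, taken from the log line above):

```shell
# Show listening TCP sockets on port 10024; on a healthy mail host
# amavis should appear here.
ss -tln | grep 10024 || echo "amavis is not listening"
```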

Local WordPress installation – password forgotten


Another small tip: if you ever forget the password for a user of your local WordPress installation, just switch to the wordpress mysql database and run the following:

update wp_users set user_pass=md5("password") where user_login="user";

This sets the password password for the user user. After a successful login you can change it again in the dashboard. The old password is history after that.

Key problem on Arch


I just had the problem that, on Arch, a PGP key for a package could not be fetched during installation, so the installation failed.

My attempt to fetch the displayed key manually via

pacman-key --recv-key key

failed with this error message:

gpg: connecting dirmngr at ‘/root/.gnupg/S.dirmngr’ failed: IPC connect call failed
gpg: keyserver receive failed: No dirmngr

==> ERROR: Remote key not fetched correctly from keyserver.

The cause was apparently that, for some reason, the .gnupg folder and the ldapserver config contained in it were missing from the root directory.

But that’s quickly fixed, and with it the error:

mkdir  /root/.gnupg/
touch  /root/.gnupg/dirmngr_ldapservers.conf