
Posts

Entgendern nach Phettberg

Dear Lesys, I would like to write inclusively in German as well, without every sentence screaming identity-politics culture war. My preference would be a grammatical neuter for personal nouns that neither sounds too bureaucratic ("target persons of the state educational mandate") nor is semantically imprecise, like nominalized participles ("learners" or "those attending school"). So far, this proposal (link) by Germanisty Thomas Kronschläger for a new class of gender-neutral personal nouns appeals to me the most: from the stem of an existing noun plus a new ending in -y for the singular and -ys for the plural, a new generic neuter is formed (e.g. "das Schüly", "die Schülys"). This new Y-form is also somewhat reminiscent of the diminutive in Swiss German, which many Germans apparently find cute. And this "jöh" factor (the Swiss German "aww") could perhaps also help the neutral Y-form in the German-speaking world to ...

Career Development for Senior Engineers on the technical ladder

As part of the yearly performance review process, we are supposed to describe what we do and how we add value to the organisation. I am a software engineer by training and once prided myself on being a pretty decent programmer. At this point in my career, I am in a fairly senior position on the technical ladder at a large, innovation-driven tech company. This means that I now write a lot less code than most people in my team. The dual-ladder career system allows for formal career advancement of employees in technical roles without necessarily having to change into management roles - but it still expects similar levels of strategic impact. So what do I actually do? While moving through the levels of seniority, my focus has shifted from writing code to reviewing code, and from writing design documents to commenting on other people's designs. I still vividly remember the advice of a more senior colleague that the key to getting promoted to the next level would be becoming comfo...

Email to Diaspora* Posting Bot

What I still miss the most after moving from G+ to Diaspora* for my casual public social-network posting is a well-integrated mobile app for posting on the go. The main use-case for me is posting photos on the go, which I now mostly take on my cellphone and minimally process with Google Photos. One of the problems with the mobile app for Diaspora* (Dandelion in the case of Android) is that the size limit for photo uploads is quite small compared to the resolution of today's cellphone cameras. There is also not much point in uploading high-resolution images for purely on-screen consumption to an infrastructure managed by volunteers on a shoestring budget. I also liked the ability to geo-tag mobile posts by explicitly selecting a nearby landmark, to obfuscate the current location a bit. For a few weeks now, I have been sharing my account with a G+ archive bot that is uploading recycled posts from the takeout archive (see here for the first part of the series ...
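Since the exact upload limit varies by pod and app, here is a minimal sketch of pre-shrinking a photo before upload, assuming the Pillow library; the MAX_DIM and QUALITY values are illustrative guesses, not limits published by Dandelion or any particular pod:

```python
# Downscale a cellphone photo for on-screen consumption before upload.
# MAX_DIM and QUALITY are hypothetical values chosen for illustration.
from PIL import Image

MAX_DIM = 1600   # longest edge in pixels; plenty for on-screen viewing
QUALITY = 85     # JPEG quality setting; a common size/quality compromise

def shrink_for_upload(src_path, dst_path):
    img = Image.open(src_path).convert('RGB')
    img.thumbnail((MAX_DIM, MAX_DIM))   # resizes in place, keeps aspect ratio
    img.save(dst_path, 'JPEG', quality=QUALITY)
```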

Extracting location information from Photos

Photos exported from digital cameras often contain metadata in Exif format (Exchangeable Image File Format). For images taken with cellphone cameras, this info typically also includes GPS location information about where the photo was taken. Inspired by the previous post on mapping GPS lat/lon coordinates from Google+ location data to a rough description of the location, we could also use the location encoded in the photo itself. We again use the reverse geocoding service from OpenStreetMap to find the names of the country and locality that contain the GPS coordinates. For the purpose of public posting, reducing the accuracy of the GPS location to the granularity of a city, town, or village provides some increased confidentiality about where the picture was taken, compared to the potentially meter-level accuracy of raw GPS data, which generally allows pinpointing the location down to a building and street address. Fractional numbers are repre...
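As a rough sketch of how this could look in Python, assuming the Pillow library for Exif parsing and the public Nominatim endpoint (subject to its usage policy); the helper names here are made up for this example:

```python
# Extract GPS coordinates from a photo's Exif data and reverse geocode
# them to a coarse locality with OpenStreetMap's Nominatim service.
import requests
from PIL import Image
from PIL.ExifTags import GPSTAGS

GPS_IFD = 0x8825  # Exif pointer tag for the GPS information directory

def to_degrees(dms, ref):
    """Convert Exif (degrees, minutes, seconds) rationals to a signed float."""
    deg = float(dms[0]) + float(dms[1]) / 60.0 + float(dms[2]) / 3600.0
    return -deg if ref in ('S', 'W') else deg

def gps_from_photo(path):
    exif = Image.open(path).getexif()
    gps = {GPSTAGS.get(k, k): v for k, v in exif.get_ifd(GPS_IFD).items()}
    if 'GPSLatitude' not in gps:
        return None  # photo carries no location information
    return (to_degrees(gps['GPSLatitude'], gps.get('GPSLatitudeRef', 'N')),
            to_degrees(gps['GPSLongitude'], gps.get('GPSLongitudeRef', 'E')))

def coarse_location(lat, lon):
    """Reverse geocode to city/town/village granularity via Nominatim."""
    resp = requests.get('https://nominatim.openstreetmap.org/reverse',
                        params={'lat': lat, 'lon': lon, 'format': 'json',
                                'zoom': 10},  # roughly city-level detail
                        headers={'User-Agent': 'gplus-migration-example'})
    addr = resp.json().get('address', {})
    locality = addr.get('city') or addr.get('town') or addr.get('village')
    return locality, addr.get('country')
```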

The Fallacy of distributed = good

I have recently been looking for an alternative social media platform and started using Diaspora* via the diasporing.ch pod. Not unlike the cryptocurrency community, the proponents of the various platforms in the Fediverse seem to rather uncritically advocate the distributed nature of these platforms as an inherently positive property, in particular when it comes to privacy and data protection. I tend to agree with Yuval Harari, who argues in "Sapiens" that empires, or scaled, centralized forms of organization, are one of Homo Sapiens' significant cultural accomplishments. A majority of humans throughout history have lived as part of some sort of empire. Empires can provide prosperity and ensure lasting peace and stability - like the Pax Romana or, in my generation, the Pax Americana. We often have a love/hate relationship with empires - even many protesters who are busy burning American flags during the day secretly hope that their children will some day get into Har...

Google+ Migration - Part VIII: Export to Diaspora*

<- Part VII: Conversion & Staging The last stage of the process is to finally export the converted posts to Diaspora*, the chosen target system. As we want these posts to appear slowly and close to their original post-date anniversaries, this process is going to be drawn out over at least one year. While we could do this by hand, it should ideally be done by some automated process. For this to work, we need some kind of server-type machine that is up, running, and connected to the Internet frequently enough during a whole year. The resource requirements are quite small, except for storing the staged data, which for some users could easily run to multiple gigabytes, depending mostly on the number of posts with images. Today it is quite easy to get small & cheap virtual server instances from any cloud provider; for example, the micro-sized Compute Engine instances on Google Cloud should even be part of the free tier. I also still have a few of the small, low-power Raspber...
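A minimal sketch of such an automated process, meant to be run once a day from cron on the server; it assumes a hypothetical staging layout of one JSON file per post with 'scheduled' and 'text' fields, which is not necessarily the format used elsewhere in this series:

```python
# Daily export cycle: post any staged items whose scheduled date is today.
import datetime
import glob
import json

from diaspy.connection import Connection  # third-party diaspy client
from diaspy.streams import Stream

def post_due_items(staging_dir, pod, username, password):
    connection = Connection(pod=pod, username=username, password=password)
    connection.login()
    stream = Stream(connection)
    today = datetime.date.today().isoformat()
    for path in sorted(glob.glob(staging_dir + '/*.json')):
        with open(path) as f:
            item = json.load(f)
        if item['scheduled'] == today:  # due today: publish and move on
            stream.post(item['text'])
```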

Google+ Migration - Part VII: Conversion & Staging

<- Part VI: Location, Location, Location  Part VIII: Export to Diaspora* -> We are now ready to put all the pieces together for exporting to Diaspora*, the new target platform. If we had some sort of "Minitrue" permissions to rewrite history on the target system, the imported posts could appear to have always been there since their original G+ posting date. However, since we only have regular user permissions, the only choice is to post them as new posts at some future point in time. The most straightforward way to upload the archive would be to re-post in chronological order as quickly as possible without causing overload. But if the new account is not only used for archive purposes, we may want to maximize the relevance of the archive posts in the new stream. In this case, a better way would be to post each archive post on the anniversary of its original post-date, creating some sort of "this day in history" series. This would require that the ...
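The anniversary scheduling itself is easy to compute. A small sketch, with Feb 29 posts arbitrarily shifted to Mar 1 in non-leap years (other policies are equally possible):

```python
# Map an original G+ post date to its next anniversary, for the
# "this day in history" style of re-posting described above.
import datetime

def next_anniversary(original, today=None):
    today = today or datetime.date.today()
    for year in (today.year, today.year + 1):
        try:
            candidate = original.replace(year=year)
        except ValueError:                    # Feb 29 in a non-leap year
            candidate = datetime.date(year, 3, 1)
        if candidate >= today:
            return candidate

# A post from 2012-07-15 is re-posted on the next upcoming July 15.
print(next_anniversary(datetime.date(2012, 7, 15)))
```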

Google+ Migration - Part VI: Location, Location, Location!

<- Part V: Image Attachments  Part VII: Conversion & Staging -> Before we focus on putting all the pieces together, here is a small, optional excursion into how to make use of the location information contained in G+ posts. We should consider carefully if and how we want to include geo-location information, as there might be privacy and safety implications. For sensitive locations, it can make sense to choose the point of a nearby landmark or to add some random noise to the location coordinates. Many of my public photo-sharing posts contain a location near where the photos were taken. Diaspora* posts can contain a location tag as well, but it does not seem to be very informative, and the diaspy API currently does not support adding a location to a post. Instead, we can process the location information contained in the post takeout JSON files and transform it into something we can use to format the new posts. In particular, we want to include a location link to the corre...
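For the noise and link-formatting part, here is a small sketch; the jitter of about 0.01 degrees (very roughly a kilometre) is an illustrative choice, not a value from this series:

```python
# Coarsen coordinates with random noise and format an OpenStreetMap
# link that can be embedded in the text of the new post.
import random

def obfuscated_osm_link(lat, lon, jitter=0.01):
    lat += random.uniform(-jitter, jitter)
    lon += random.uniform(-jitter, jitter)
    return ('https://www.openstreetmap.org/?mlat=%.3f&mlon=%.3f'
            '#map=12/%.3f/%.3f' % (lat, lon, lat, lon))
```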

Google+ Migration - Part V: Image Attachments

<- Part IV: Visibility Scope & Filtering  Part VI: Location, Location, Location -> Google+ has always been rather good at dealing with photos - the photo functions were built on the foundation of Picasa and later spun out as Google Photos. It is not surprising that the platform was popular with photographers and that many posts contain photos. In the takeout archive, photo and media file attachments to posts are rather challenging to handle. In addition to the .json files containing each of the posts, the Takeout/Google+ Stream/Posts directory also includes two files for each image attached to a post. The basename is the originally uploaded filename, with a .jpg extension for the image file itself and a .jpg.metadata.csv extension for some additional information about the image. If we originally attached an image cat.jpg to a post, there should now be a cat.jpg and a cat.jpg.metadata.csv file in the post directory. However, if over the years we have been unimaginative in naming files...
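Here is a sketch of pairing each image with its metadata companion under these naming conventions; the one-header-row-plus-one-value-row structure of the .metadata.csv files is an assumption for this example:

```python
# Walk the Posts directory and pair every image with its metadata file.
import csv
import glob
import os

POSTS_DIR = 'Takeout/Google+ Stream/Posts'

def image_attachments(posts_dir=POSTS_DIR):
    for image in glob.glob(os.path.join(posts_dir, '*.jpg')):
        metadata = {}
        meta_path = image + '.metadata.csv'
        if os.path.exists(meta_path):
            with open(meta_path) as f:
                rows = list(csv.reader(f))
            if len(rows) >= 2:            # assumed: header row + value row
                metadata = dict(zip(rows[0], rows[1]))
        yield image, metadata
```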

Google+ Migration - Part IV: Visibility Scope & Filtering

<- Part III: Content Transformation  Part V: Image Attachments -> Circles, and with them the ability to share different content with different sets of people, were one of the big differentiators of Google+ over other platforms at the time, which typically had a fixed sharing model and visibility scope. Circles were based on the observation that most people in real life interact with several "social circles" and often would not want these circles to mix. The idea of Google+ was that it should be possible to manage all these different circles under a single online identity (which should also match the "real name" identity of our government's civil registry). It turns out that while the observation of disjoint social circles was correct, most users prefer to use different platforms and online identities to make sure their circles don't inadvertently mix. Google+ tried hard to make sharing scopes obvious and unsurprising, but the model remained complex, ...

Google+ Migration - Part III: Content Transformation

<- Part II: Understanding the Takeout Archive  Part IV: Visibility Scope & Filtering -> After we have had a look at the structure of the takeout archive, we can build some scripts to translate the content of the JSON post description into a format suitable for import into the target system, which in our case is Diaspora*. The following script is a proof-of-concept conversion of a single post file from the takeout archive into a text string suitable for upload to a Diaspora* server using the diaspy API. Images are more challenging and will be handled separately in a later episode. There is also no verification of whether the original post had public visibility and should be re-posted publicly. The main focus is on the parse_post and format_post methods. The purpose of the parse_post method is to extract the desired information from the JSON representation of a post, while the format_post method uses this data to format the input text ...
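As a hedged outline of what such a pair of methods can look like (the 'content', 'url', and 'creationTime' field names are one reading of the takeout JSON; the actual script in this post handles more fields and edge cases):

```python
# Proof-of-concept: convert one takeout post JSON file into text for diaspy.
import json

def parse_post(path):
    """Extract the fields we want to keep from one takeout JSON file."""
    with open(path) as f:
        post = json.load(f)
    return {'date': post.get('creationTime', ''),
            'text': post.get('content', ''),
            'url': post.get('url', '')}

def format_post(data):
    """Render the extracted fields as the text body of a Diaspora* post."""
    return '%s\n\nOriginally posted on Google+ (%s) on %s' % (
        data['text'], data['url'], data['date'])
```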

Google+ Migration - Part II: Understanding the Takeout Archive

<- Part I: Takeout  Part III: Content Transformation -> Once the takeout archive has been successfully generated, we can download and extract it to our local disks. At that point we should find a new directory called Takeout, with the Google+ posts located at the following directory location: Takeout/Google+ Stream/Posts. This Posts directory contains 3 types of files:
- a file containing the data for each post in JSON format
- media files of images or videos uploaded and attached to posts, for example in JPG format
- metadata files for each media file in CSV format, with an additional extension of .metadata.csv
The filenames are generated as part of the takeout archive generation process with the following conventions: the post filenames are structured as a date in YYYYMMDD format followed by a snippet of the post text, or the word "Post" if there is no text. The media filenames seem to be close to the original names of the files when they we...
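Based only on these conventions, a short sketch that sorts the directory contents into the three file types:

```python
# Classify the files in the takeout Posts directory by naming convention.
import os

POSTS_DIR = 'Takeout/Google+ Stream/Posts'

posts, metadata, media = [], [], []
for name in sorted(os.listdir(POSTS_DIR)):
    if name.endswith('.json'):
        posts.append(name)          # per-post data in JSON format
    elif name.endswith('.metadata.csv'):
        metadata.append(name)       # per-media-file metadata in CSV format
    else:
        media.append(name)          # attached images/videos, e.g. JPG
print('%d posts, %d media files' % (len(posts), len(media)))
```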