Moe's Useful Things

A collection of findings, hints and what not.

Everybody is talking about Lambda. Let’s talk Azure Functions. And test drive it! — 26. May 2016


Serverless computation is a hot topic at the moment. What AWS did with Lambda was nothing short of a paradigm change – introducing the promise of never having to think about the infrastructure of your applications again. Pay only for what you use. Autoscaling, CloudFormation, load balancers, routing in and out, and so on are not your concern. What many don't understand is that just because you run something on a cloud platform does not mean it will scale – scale is only the result of an appropriate architecture, sound architecture principles and, of course, technical skill and acumen.

Obviously serverless computation is in its early days, and while that holds true for AWS Lambda it is even more so the case for the runners-up. You might not be aware of it, but AWS Lambda is the first but not the only serverless (cloud) function option out there. In fact Azure and Google have their own flavors of Lambda, namely Azure Functions and Google Cloud Functions.

Think about it, just running your code and doing what needs to be done in order to make the world a better place or earn your rent, that’s pretty tempting isn’t it? And the fact it’s inherently built with being event driven in mind just adds to the appeal: on all the different platforms the serverless functions can be triggered by events stemming from storage, database, queues and more.

And it is tempting, whatever you may think about the current state of affairs. Pretty much all cloud function flavors have their weak spots and limitations, like comparably clunky usage – compared to the more mature services on these platforms. It starts with code control and goes on to monitoring and timeout/RAM limitations. That aside, serverless, event-driven computation is a phenomenon to be taken seriously.

I am a fan of diversity and choice, so while I love and embrace Lambda I have to say Azure Functions made a good entrance. And thus I will spend some words on it. This is not about microservices, by the way – yes, serverless functions can be used for that purpose, at least in theory, but no, not every cloud function can be considered a microservice. Another topic – not for this post though.

You can use NodeJS, C# and some other languages are coming up including Python. (Not Java at the moment – April 2016.)

Similar to AWS, you can choose from a set of templates to start with that feature pre-defined (event) bindings. Their number is steadily growing; there are plenty of experimental binding templates as well as the ones you would expect, like bindings to Blob Storage events or Service Bus.


There is, similar to AWS, an online editor you can use to write your code right from the portal. The interface is easy on the eyes and a lot of things can probably be coded right in the browser. However, in real production environments you will reach its limitations quickly, starting with managing module imports, which you cannot do there. For that reason you would rather use e.g. Git to push your code for an Azure Function. That is supported right away, so you can use a private Git repository hosted on Azure.

You should of course know the basic conventions around the file structure, including the various configuration files and the expected inputs; check out this file for details:

Azure Functions is meant for rather short-lived, on-demand computing in response to events (see the limitations below for details). Azure Service Fabric, on the other hand, is for long-running jobs, “management” and “state”, as Matt Snider put it. You can compare Service Fabric to AWS Simple Workflow Service (SWF) in that respect.

Azure API Management + Azure Functions
Of course you can combine the two (similar to AWS API Gateway + Lambda) and benefit from sophisticated API management functionality; how that could work is explained in this video, around minute 7 or so. Speaking of which, Azure API Management supports Swagger/RAML descriptions, so if you have one for a new or existing API (that you want to replace) it will come in handy.

Event-driven design with Azure Functions
Azure Functions can be bound to different services as depicted here:
Neatly enough, you can combine Functions with Logic Apps, forming a powerful combo – like the supreme version of IFTTT.


Timeouts: At the moment (May 2016) there is no timeout for Azure Functions; however, one will be introduced, similar to what you have for AWS Lambda. The difference, though, if you want to believe that resource, is that you will be able to define the timeout duration yourself. That's great for long-running processes! And something you do not have with Lambda today.

Another obvious (possible) limitation is the amount of RAM you assign. Right now the limit is 128 MB.

And there is a limitation based on how frequently you use Azure Functions: if your function has to be initialized / has not been used for a while, you will experience a hefty warm-up penalty – in my tests with the HTTP-triggered function shown in this post, up to 30 seconds! This behavior will probably improve with the maturity of Azure Functions – more on that at the end of this post. However, for rarely used apps that require quick responses this might be a deal breaker, unless you keep polling your function. Resulting in more resource usage and costs, of course.

Okay, that's plenty of high-level talk; let's get our hands dirty and test-drive an Azure Function, shall we?

Getting started with Azure Functions:

Start from the Azure Portal dashboard: create a new Function using any of the existing templates. Go to the App Service (advanced) settings. Configure continuous deployment and pick, for example, a private Git repository provisioned by Azure.

From there, all it takes is cloning the Git repository.

Quick notes on that part as well to get you started quickly:

Create a folder you want to work in.

Change into the folder and type:

git init

Let's say you want to clone the private Git repository created for any Azure App Service, for example an Azure Function app.

Go to the “Publishing” section of your App Service and there to “Deployment credentials”. Deployment credentials usually apply to both FTP and Git.

That is where you define the deployment user and password – you will need these credentials to clone anything.

Then just go

git clone <private GIT url> 

Pushing back your changes then works via a simple git push. More useful Git examples here. An Azure Function URL looks like this:

https://<Your Function App><Your Function Name>?code=<your access code>&name=blubb

In that case with an additional parameter name that can be used by the Function as input.

Some cURL examples for illustration:

curl -G https://<Your Function App><Your Function Name>?code=<your access code>

That's the HTTP GET version; you can use POST as well, for which the query parameter needs to be passed as data (-d) in the cURL command:

curl https://<Your Function App><Your Function Name>?code=<your access code> -d name=<some name>

Check out invocation logs:

Working with the Function

So you have the source code and everything set up, but how to get started? Well, a very simple Azure Function looks like this:

module.exports = function (context, req) {

    context.log('Node.js HTTP trigger function processed a request. RequestUri=%s', req.originalUrl);

    // Let's set up Loggly (the token and credentials below are placeholders)
    var loggly = require('loggly');
    var client = loggly.createClient({
        token: "eb7f49f0-pe8e-4bc8-azz0-baaf6e797b3b",
        subdomain: "whatever",
        auth: {
            username: "fake",
            password: "pwd"
        },
        // Optional: tags to send with EVERY log message
        tags: ['helloworld-azurefunction']
    });
    // Done with the Loggly setup

    if ( || (req.body && {
        client.log('we received a valid input, parameter name was ' + ( ||, function (err, result) {
            if (err) context.log('Logging with loggly failed: ' + err);
            // Do something once you've logged
        });
        context.res = {
            // status: 200, /* Defaults to 200 */
            body: "Helloooo " + ( ||
        };
    } else {
        context.res = {
            status: 400,
            body: "Please pass a name on the query string or in the request body"
        };
    }
    context.done(); // signal completion to the Functions runtime
};

I am using Loggly here to have one consolidated place for the logging output, since the current monitoring tools around Azure Functions – as of mid-May 2016 – are not yet full-blown and useful. If you have worked with CloudWatch on AWS you might have felt the same urge to use a tool like Loggly, as CloudWatch is not the greatest tool for log inspection either.



The Azure Function application structure reference says you can include a package.json, yet you cannot expect Azure to download the package contents for you.

It is exactly like with AWS Lambda: all the module directories need to be installed and uploaded already.
For package management that means you have to do the following:

Create a package.json file in your Azure Function root; it can really be as simple as this:

{
  "name": "AzureFunctionFun",
  "version": "1.0.0",
  "dependencies": {
    "loggly": "1.1.x"
  }
}

Then issue

npm install

Now you have to add all the files to the Git repository, overriding .gitignore entries:

git add node_modules -f 
git commit -am "something important"
git push

The push might take a little while depending on the number of modules you use.


That is pretty much it. Find some more considerations and links in the appendix. Again, you will quickly notice things are not quite mature – most vividly apparent in the whole monitoring piece. With that in mind, you might want to consider carefully whether to use Azure Functions for mission-critical applications just yet. After all, the ability to compute is only one of many aspects.

Should you keep an eye on Azure Functions? I would say yes. On serverless computation and how it evolves in general? That is a big, fat, unconditional YES!

So long!



Appendix: Some more considerations


Functions come with an authorization code that is part of the parameters passed; however, you might need more fine-grained access control. Why not employ Auth0?

Question I raised on access management:


The monitoring functionality is as of yet (April 2016) very limited. The log output can be directed to some Blob storage, and that should work exactly like for any other App Service. Recommendation: think about a consolidated logging strategy, using a service like Loggly or your own logging mechanism. Pretty much like with AWS CloudWatch, the logging functionalities are limited.

NodeJS examples

Azure Storage binding example (using Storage BLOBs)

Azure Functions MSDN Forum


A case for embracing static content — 3. May 2016


A case for serving static rather than dynamic content is easy to make: static content is always easier to cache, serve and load-balance. Today's world dictates JavaScript-powered frontends and loose coupling with backend systems via APIs. Even the most dynamic websites feature plenty of static content – think of online newspapers, stock exchanges, retail websites and whatever else. Nothing is more inefficient than having dynamic server tiers like app servers process static content, even with reverse proxy caching in place. And nothing is less cost-effective: cheap storage for static content like S3 comes in bulk, and in the case of AWS there is at least one very straightforward reference architecture for serving static content from S3 as the origin to anywhere on earth via CloudFront. No performance compromise and great cost efficiency.

So yes, static content is far from meaning static websites; it makes a lot of sense, can actually help to serve a lot more customers simultaneously and improves the customer experience.

That pretty much explains the interest in bringing in tools like Lambda to help with the challenge of maintaining and distributing static content – in general one of the big (E)CMS challenges you have to master if you want to take full advantage of serving static content while remaining dynamic as far as applying changes goes.

This tutorial here talks about an architecture for triggering automated static content generation and distribution and one element of it is a Lambda that would “trigger a Travis re-build” (as explained as well in the tutorial description):

The architecture could be different, of course – the entire static content generation could be triggered by a Lambda which kicks off an SWF job, or you could push all content items into a Kinesis stream and have Lambdas process each item separately (in the order they were pushed into the stream) – to name just two possible scenarios. As soon as Lambda restrictions like the max. 300 seconds runtime and the RAM limitations are gone, you could probably do everything using Lambda and skip all the other services and tools altogether.

AWS comes with useful tools for the job; however, the platform is by far not the only option to look at when it comes to managing static content in a sophisticated fashion. In fact some services take away a lot of the heavy lifting and, even if you do not plan to use them, are quite inspirational.

Case in point:

You can deploy static content with a Git push on Aerobatic; all you need is a Bitbucket account. It seems like the perfect solution if you have to deal with plenty of websites with static content and different domains. (That's almost every website out there, as static content makes up modern web frontends.)
Aerobatic website:

The offering from Aerobatic comes with automated SSL certificate management:

More than that, Aerobatic provides automated builds for static content. You can e.g. utilize the Wintersmith framework to do things like these in an automated fashion for every website push:
- Apply Markdown
- Bundle JavaScript and CSS with Browserify, write reusable styles with LESS, SASS or Stylus, include reusable web components
- Obfuscate your scripts

An alternative to Aerobatic with some fancy features as well: +

Starting from $9 per month for one domain – at least that's the price as of April 2016. They take care of SSL renewals and the like – things you can manage on your own, certainly. Do you want to, though?

Run e-mail campaigns with SendGrid, the easy way — 30. January 2016


You want to create a recurring e-mail campaign, let’s say for a newsletter promoting your great services and ideas. People need to sign up for it and that sign-up site should be easy to maintain and ideally be only a matter of configuration and no added coding.

As for the back end you are okay to perform some magic – control the data flows.

Alright, if all of this is true for you, then here is how you can make it happen using free tiers all the way to get started – and you are free to move to paid tiers for all services used once you have gained some momentum.

The two services we'll combine to accomplish what I outlined above are SendGrid – which now provides marketing campaigns – and CognitoForms, a service that allows you to create surveys or the landing page we are going to put to use.

The tools you need are something like Postman to configure the HTTP calls you are about to fire off, and some way to run a simple HTTP endpoint online – PHP will do; I personally used NodeJS – and that is pretty much it. Oh yeah, and accounts for both SendGrid and CognitoForms, of course.


::: SendGrid is what will get your message out of the door

The APIs in question for SendGrid:

Campaigns can be run using (recipient) lists, and those lists can be created via the SendGrid API.

When you create a new list, you'll get an ID for that list back – let's say 40500. The list will then be available via the lists endpoint with that ID appended. (You see, REST API in action.)
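As a small sketch of that list-creation call (the list name is made up; host and path follow the v3 Marketing Campaigns contactdb documentation, and the API key is a placeholder):

```javascript
// Build the request options + body for creating a new recipient list (sketch).
function createListRequest(listName, apiKey) {
  return {
    options: {
      hostname: '',
      path: '/v3/contactdb/lists',
      method: 'POST',
      headers: {
        Authorization: 'Bearer ' + apiKey,
        'Content-Type': 'application/json'
      }
    },
    body: JSON.stringify({ name: listName })
  };
}

// e.g. pass createListRequest('newsletter', process.env.SENDGRID_API_KEY)
// into https.request(...) and write the body before calling end().
```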

SendGrid does not have any out-of-the-box widgets that could be embedded, nor any kind of landing page generator. So for the frontend – since we do not want to deal with that, remember – we have that great CognitoForms option available to us:

CognitoForms is a good way to host the form for capturing new subscribers. You can attach a webhook to every data submit, and CognitoForms will send a JSON payload every time somebody submits the form. Get some details on how that works here:

So all you need to do is implement some kind of HTTP endpoint – in any technology you want, hosted wherever you want as long as it is accessible via the internet – that parses that JSON. And, of course, it has to deal with the SendGrid APIs, because that is, after all, the service that will send out our sweet marketing e-mails.

::: Your webhook does need to do some SendGrid magic
So, you will need to create a new contact as explained here:

And then add the contact to an existing list, see:

Okay, where do we stand? We have a landing page (Cognito Form) for collecting the subscriber data and using the Contact API from SendGrid we can turn that data into registered subscribers we can enlighten with great e-mail content going onward.

Now what about managing the actual send-outs?

In fact, what we aspire to do is fire campaigns at regular intervals. How would we do that with SendGrid? Well, from the documentation and online resources I could gather as of early 2016, it does not look like you can duplicate existing campaigns via the API, nor can you use templates to ease campaign creation. So you are left with creating a campaign via the Campaign API every single time before send-out:

You need to manage some campaign permissions, and you can do that via the SendGrid web interface or use the API to assign the right permissions:
Namely, those permissions are:
- marketing_campaigns.create – perform POST operations on marketing campaign APIs
- marketing_campaigns.delete – perform DELETE operations on marketing campaign APIs
- – perform GET operations on marketing campaign APIs
- marketing_campaigns.update – perform PUT/PATCH operations on marketing campaign APIs

Use Basic authorization to get the new Marketing API key, as using an existing key for the other APIs will not suffice.

More on permissions:

Once you have the Marketing API permissions, you have everything you need to create campaigns and assign an existing subscriber list (by its ID). Using the Campaigns API, you then get that new campaign out of the door by POSTing an HTTP call towards:<your campaign ID>/schedules/now

::: Almost there

What's left to do? In fact, only some trigger for sending those campaigns. It could be some kind of cron job; there are plenty of services out there that will let you configure one. That job would then hit another endpoint of yours – not the one I advised you to create at the beginning to capture the subscriber data! – and it would do nothing else but create that new campaign as explained before and fire a POST against the Campaigns API to get it going.

You need some actual code? You want some diagrams because this long text was really tiring? Well, then tell me! I might do it. I hope this little tutorial – more of an explanatory how-to – was informative anyway.

In case you want to see a reference implementation in action, why not visit my little sign-up page here and sign up for daily Dilbert comic strips:

So you know it works. Unless I hit the free tier's quotas for SendGrid or CognitoForms – in that case I am truly sorry, blame it on the spammers out there, alright?

Kudos to #SendGrid and #CognitoForms for their great services!

Installing Chrome on CentOS — 5. December 2015


Cutting to the chase: I tested this on CentOS 6.2 and it works fine there.

Step #1, as suggested here:

Create a file called /etc/yum.repos.d/google-chrome.repo and add the following lines to it.
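For reference, the repository definition usually looks like this (64-bit variant; a sketch based on the standard Google Linux repository instructions, so double-check against the current ones):

```
[google-chrome]
name=google-chrome
baseurl=
enabled=1
gpgcheck=1
gpgkey=
```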


Then all it would take is

yum install google-chrome-stable

This will fail with a message like:

Error: Package: google-chrome-stable-28.0.1500.45-205727.x86_64 (google-chrome)

Step #2:

Install that library and fix the dependency:

rpm -ivh libstdc++47-4.7.0-16.1.x86_64.rpm

Step #3:

Now re-try installing Google Chrome:

yum install google-chrome-stable


Virtual Machines with VMWare on Retina displays – how to get over the mess — 8. November 2015


You have every reason in the world to love the Retina display of your MacBook. How unsatisfying is it, though, to see your Linux VM GUIs unable to adapt to the high resolution, leaving you with tiny icons and just one big mess.

Well, that was the reality I faced with VMware Fusion until recently. Getting past this impediment is not that hard after all.

Go to your installed applications and select “Get Info” in the context menu (right click) of VMware Fusion.

There you will notice a well-hidden tick box saying “Open in Low Resolution”.

Select the “Open in Low Resolution” tick box

Now it’s just a matter of making sure the Virtual Machine setting is configured to not make use of the Retina resolution, too.

The “use full resolution” box needs to be ticked off

With these settings your Linux machine should present you a nice, crisp UI without any further trouble – at least I tested it with CentOS 6.2, and I assume it will work equally fine with Ubuntu.

I just wonder why these kinds of things are not easily found in the VMware documentation – as if there were a lack of users with Retina displays. By the way, I imagine the exact same tip could be useful on other high-resolution devices, so if you run into similar trouble, whatever you use, just give it a shot.

The simple way of sharing your pictures and videos – without internet access —


Maybe this is somewhat outdated – in fact I created the simple little tool shared here in 2010 or so – but back then I found myself stuck with tools and services that either required online access, like, or some kind of server within the (corporate) network to share pictures and videos. That turned out to be a really annoying limitation, and the actual use case at hand was sharing pictures and videos from some internal panel discussion or the like.

However, if your corporate network is shut tightly to the outside, what do you do? Furthermore, not all these services would easily let you share images AND videos, and if so, which formats would they accept? In many instances I found myself confronted with services and tools way too heavy for the little job at hand.

So I just went ahead and applied a little bit of .NET magic, simply because I knew there would always be some Windows machine to run the tool on – and the tool itself generates static HTML code and transforms videos into FLV, making it possible to play everything in your browser without problems.

Would I do all of this differently today? Yes. For starters I would probably use Java, to stay interoperable – and if a company is really still sealed off from the internet, believing that is what it takes to be safe, then leave that company. Another interesting option today could be Spring Boot or any similar framework with a lightweight embedded Java container. Oh yeah, and flexible templates would have made sense, too.

Anyway, maybe somebody finds this hastily built – but working! – tool useful:

All you need to do is unpack it and run the AlbumGenerator exe in the cmd shell to get some info on how to use the tool, and off you go. No specific requirements at all; it will work with the most basic .NET Framework version installed.

If you like my tips and tricks and the rest of it, why not spare me a tweet?

Install a Java Container in a container environment and put it ONLINE- with access to your container. For free. — 9. August 2015


So you like dealing with the nitty-gritty of the system environment, setting ports and running startup scripts and all the stuff other services thankfully take off our hands – yet at times you still have no choice but to play sysadmin, just because the software you have to use is “too special”? Or maybe because you happen to be a control freak?

Well, well. You should have a look at Fire up a fresh Ubuntu machine with them for free and knock yourself out. Let me explain how you would e.g. install Tomcat on such a fresh container image:

First step is creating a blank Ubuntu container on Nitrous.

Then go to the “Port” settings and add port 8080.

Remember the name of your system environment in the Container tab under “slug” – I will call it xxx from here on.

So you know, once you go into the IDE, the folder “code” is located in /nitrous/code

In the IDE, let us apply some magic by getting hold of the Java JDK first:

sudo add-apt-repository ppa:webupd8team/java

sudo apt-get update

sudo apt-get install oracle-java8-installer

(Accept Terms and Conditions by typing in “YES”)

Find out about the installation folder by issuing:

sudo update-alternatives --config java

You should have only one displayed; copy the location over and create a JAVA_HOME environment variable:

sudo nano /etc/environment

JAVA_HOME="/usr/lib/jvm/java-8-oracle" (this is just an example; your Java home directory may look different – see the alternatives you checked before!)

Leave the nano editor with CTRL+X and confirm you want to save the file.

Reload the env. variable with:

source /etc/environment 

Now let’s get Tomcat installed:

cd ~ (to go to your home directory)

Now check out for a version of Tomcat 8 you want to get, and download exactly that, e.g.:


Let's install it in /opt/tomcat:

sudo mkdir /opt/tomcat

And extract the compressed file right into that new folder:

sudo tar xvf apache-tomcat-8*tar.gz -C /opt/tomcat --strip-components=1

Now apply some new permission set so we do not run into “cannot modify bla-bla-bla” issues:

sudo chmod -R 777 /opt/tomcat

cd /opt/tomcat/bin

Fire it up:
sudo ./


Is it running? Let us see:
curl http://localhost:8080


In case you have added that 8080 port as I advised you to do at the beginning of this tutorial, you are now able to access the server via:

That should give you some nice HTML back. Job done. Explore more options and tricks on your own! If you are serious about deploying something with Nitrous you will not get around a paid plan with them, so no illusions there, alright? (I think that is fair enough if you really enjoy what they offer.)

So long!

Securely get rid of your USB drives and sticks — 21. June 2015


Short of extreme measures to make your drives and sticks unrecoverable before you dump them – or, hopefully, give them away for some good purpose, like donating to schools – there is a very straightforward method that few people know about.

Assuming your drive is connected to a Linux machine (be it via a VM or physical), do this:

First, find out which device the drive is registered as.
Then use that device, like /dev/sdb1, to shred it:

sudo shred -v /dev/sdb1

Read this page for more details on how to use shred.
Do note, though, that it will be time-consuming. Works fine on Ubuntu; no other tools required.


How to import an calendar — 20. May 2015


Let's say you want to import an diary into your Apple Calendar app. There is an easy way to do it, even though it is not really obvious.

Log into your account

Share your calendar – copy the URL with the ics suffix.

Now replace the webcals prefix with https – and subscribe to the URL you end up with.

Done – easy, right?

E-Mail Parsing with the Google Cloud Platform, the charmingly easy way — 10. May 2015


Ever wanted to easily capture an incoming e-mail and do something great with it? I am not talking about surfing through grandma's “Help, I deleted the internet” messages. Programmatically parsing an incoming e-mail and automatically acting on it – that's the topic.

Now, it happens that if you develop in Java on the Google Cloud Platform, all you need is a) a servlet and b) some servlet configs.

Just create a new “Appengine” powered Eclipse project. (You’ll need the Eclipse Plugin to create such a project.)

Edit the web.xml:
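The relevant mapping follows the standard App Engine inbound mail setup – a sketch (the servlet class name matches the servlet shown below; adjust the package to yours):

```
<!-- Route incoming mail (App Engine POSTs it to /_ah/mail/<address>) to our servlet. -->
<servlet>
  <servlet-name>mailhandler</servlet-name>
  <servlet-class>moes.mailheaven.server.MailHandlerServlet</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>mailhandler</servlet-name>
  <url-pattern>/_ah/mail/*</url-pattern>
</servlet-mapping>
<!-- Restrict /_ah/mail/* so only App Engine itself / admins can call it. -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>mail</web-resource-name>
    <url-pattern>/_ah/mail/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>admin</role-name>
  </auth-constraint>
</security-constraint>
```

Also remember to enable the mail inbound service in appengine-web.xml.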


Now create that servlet:

package moes.mailheaven.server;

import;
import;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;

import javax.mail.BodyPart;
import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.Multipart;
import javax.mail.internet.MimeMessage;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class MailHandlerServlet extends HttpServlet {

    public void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        try {
            MimeMessage message = MimeUtil.createMimeMessage(req);
            if (processMessage(message)) {
                System.out.println("Incoming email handled");
            } else {
                System.err.println("Failed to handle incoming email");
            }
        } catch (MessagingException e) {
            System.err.println("MessagingException: " + e);
        }
    }

    private boolean processMessage(MimeMessage message) {
        String date = getMessageDate(message);
        String from = "unknown";
        try {
            from = message.getFrom()[0].toString();
            Object content = MimeUtil.getContent(message);
            if (message.getContentType().startsWith("text/plain")
                    || message.getContentType().startsWith("text/html")) {
                processMail(message.getSubject(), from, date, (String) content);
                return true;
            } else if (content instanceof Multipart) {
                Multipart mp = (Multipart) content;
                for (int i = 0; i < mp.getCount(); i++) {
                    if (handlePart(message.getSubject(), from, date, mp.getBodyPart(i))) {
                        return true;
                    }
                }
                return false;
            } else {
                System.err.println("Unable to process message content - unknown content type");
            }
        } catch (IOException e) {
            System.err.println("Exception handling incoming email " + e);
        } catch (MessagingException e) {
            System.err.println("Exception handling incoming email " + e);
        } catch (Exception e) {
            System.err.println("Exception handling incoming email " + e);
        }
        return false;
    }

    /**
     * Knock yourself out! And visit my blog @
     */
    private String processMail(String subject, String from, String date, String content)
            throws UnsupportedEncodingException {
        String response = "";
        // TODO: Do something with that processed e-mail.
        return response;
    }

    private boolean handlePart(String subject, String from, String date, BodyPart part)
            throws MessagingException, IOException {
        if (part.getContentType().startsWith("text/plain")
                || part.getContentType().startsWith("text/html")) {
            processMail(subject, from, date, (String) part.getContent());
            return true;
        } else {
            if (part.getContent() instanceof Multipart) {
                Multipart mp = (Multipart) part.getContent();
                System.err.println("Handling a multipart sub-message with " + mp.getCount() + " sub-parts");
                for (int i = 0; i < mp.getCount(); i++) {
                    if (handlePart(subject, from, date, mp.getBodyPart(i))) {
                        return true;
                    }
                }
            }
            System.err.println("No text or HTML part in the multipart mime sub-message");
            return false;
        }
    }

    private String getMessageDate(Message message) {
        Date when = null;
        try {
            when = message.getReceivedDate();
            if (when == null) {
                when = message.getSentDate();
            }
            if (when == null) {
                return null;
            }
        } catch (MessagingException e) {
            System.err.println("Cannot get message date: " + e);
            return null;
        }
        DateFormat format = new SimpleDateFormat("EEE, d MMM yyyy HH:mm:ss");
        return format.format(when);
    }
}


The code could surely be leaner, but hey – be my guest. It should contain nothing unfamiliar if you have ever laid hands on e-mails in conjunction with Java. And no, I am not going to explain general stuff like what MIME is – if you are serious about doing anything with e-mails, please use the power of Google for enlightenment.

Anyway, once that is deployed – easy enough with the Google Cloud Platform Eclipse plugin, and someone should give me some money for saying that – you are up and done. (To sound a bit less like a GCP evangelist: know that, as for everything you do on GCP, quotas apply. At least there is a permanent free quota, and it is good enough for testing stuff. You hear that, Microsoft? Free quota! Permanently! Just wanted to AZURE you catch my drift. Ha.)

BACK to our little e-mail parsing service.

Send an e-mail to

and you can see your work in action. The Cloud Platform logging service is useful, so at least for debugging I'd keep the logging quite verbose; you can still change log levels later. So yes, good old, good old. BTW, I would recommend looking into a centralized logging service like Loggly and logging everything there, from your mobile app to your backend app and so on. But that's just me.

For your convenience, I added some Cloud-Platform-proof helper classes around Base64, HTTP GET/POST and MimeUtil – the stuff you need to have the servlet above compile.

You can get it here. Kudos to the authors; you will find their names in the respective classes.

I hope that's useful – now do something good with e-mails. There is more than enough nonsense being done with e-mails already, that is for sure.