Moe's Useful Things

A collection of findings, hints and what not.

Master your Drupal install — 31. October 2017


Drupal is one of the most interesting and capable CMSes out there, with plenty of developers working on it and a rich history. In this day and age, getting started with the CMS by provisioning your own databases and webservers looks like a big waste of time, right?

True, and it's not needed at all. One of the easiest ways to get started with Drupal, or even with a certain "flavour" of Drupal like Burda Media's Thunder (= Drupal with ready-made customizations), is Acquia Cloud.

  • First, get an Acquia Cloud account; you can start for free
  • Then get the latest Thunder distribution URL (-> copy the .gz file URL)
  • Create a free application in the Acquia Cloud console, go to Manage, select the Dev environment, install from URL, wait
  • Don't select any of the custom features and just run through the installation

And you have it.

So that would be the PaaS approach, with limited control over the Drupal installation. If you want SaaS, get a subscription with Acquia Cloud and never think about any of the systems under the hood again.

If you prefer having full control of the installation and would rather go IaaS, there are some interesting options to look at.

The supersimple AWS AMI route

An AMI is of course by far not enough to get you going; it is rather a building block to play with. Have fun building your own Lego castle if you want, with the help of Elastic Beanstalk.

A proper reference architecture at your fingertips

One of the best reference architectures comes from AWS, for a highly scalable Drupal installation:

Drupal and Azure

Even though I pointed to AWS above, getting Drupal to run in a scalable and resilient fashion on Azure is perfectly possible too. This article even makes the case that Drupal and Azure are a match made in heaven.

No matter whether you buy into that or not, here is a great reference architecture:

Mani Bindra provided an ARM template for a scalable Drupal deployment on Azure:


What to do with a PAAS that gets missing? Go FAAS (serverless functions), what else! — 29. October 2017


So this story is an easy one. I had written a simple NodeJS + Express API and some Angular page to create a small fun app I call "YouTube Guesser". It's all about trying to guess how many hits a random YouTube video received, and it uses the YouTube API to that end. Formerly it was all hosted on OpenShift v2, which just ran out of support, so my API went offline. It's just a fun coding exercise and the loss is no big deal.

However, I thought to myself: why not use this as an excuse to replace the backend with serverless functions? I had been investigating Azure Functions for a while, as you can see from other blog posts of mine, so I went for it.

Put aside the usual debugging hassle with serverless functions, which certainly applies to Azure Functions as well (although it recently improved); there are some other things to be aware of.

To start with, the structure of the code is significantly different, simply because you start coding right "inside a function" (hence the whole notion of "serverless functions", of course). That is not how the code I started from was structured, so I had to carefully copy and replace code. Still no big deal, but running the whole thing revealed one big downside: there are no "global objects".

That is not a big deal if your code does not depend on it. If you use frameworks like Mongoose to connect to a MongoDB, however, more than likely there will be some global objects. In my case the code broke and I had to put in a conditional check like this:

try {
  // reuse the model if it was already compiled in this process
  youTubeVideo = mongoose.model('YouTubeVideo');
} catch (error) {
  youTubeVideo = mongoose.model('YouTubeVideo', youTubeVideoSchema);
}
Only then would it all work again.

The next big catch is that serverless functions come, given the way they are provisioned, with widely unpredictable response times. In the case of Azure Functions that is certainly true, and a cold function call (= a function not called frequently) means a hefty startup penalty. If you depend on snappy responses, a clear recommendation is to keep the Function warm by pinging it every couple of minutes. Azure can cover you on that one, or you use a tool outside of Azure.

Anyway, the advantages are vast. Cost-wise Azure Functions are very attractive, with 1 million calls free regardless of your subscription tier, and the team behind Functions has done a great job of supporting a whole lot of programming languages. Deployments are easy: use the Git repository that comes "with" your Function project, or GitHub, or some other options if those are not good enough for you. Integration with Application Insights is an appealing option, and of course you can (and for professional usage scenarios probably should) consider hooking up your Functions with API Management.

Could this have worked with AWS Lambda and AWS API Gateway? Certainly. Google Cloud Functions? I guess so; I have not tried it yet and it is still not generally available anyway (October 2017). The principles behind serverless functions are similar enough either way, so porting that piece of Node code to another platform is not painful.

I do not feel the urge to do that, however; the backend was successfully replaced. Want to guess the view counts of some YouTube videos? Be my guest and try it now. (Only the web page is in a Google Cloud bucket; all data is fed in via Azure Functions reading from a MongoDB that does not sit on Azure. Yeah, it is an interesting setup, I know. I should have migrated the DB into CosmosDB, but then again: point proven, other things waiting, so – nah.)

Masking website origins with the help of Azure Function Proxy — 25. July 2017


Azure Function Proxy (AZP) was designed to help you rewrite your multitude of API URLs in a uniform fashion, but it can serve a rather unorthodox use case just as well.

Imagine you have two web resources you would like to get under one URL, with two path extensions mapped to these web resources. As we live in a modern, mobile-first world, these web resources happen to be in AMP format.

So these two resources are a) one page from my pet blog as well as b) one from the Guardian. What I did with the help of AZP: create a new Proxy endpoint with a route template and map the content origins to sub-paths of my Function endpoint. (Sounds a lot more complex than it really is, see the screen caps below.)
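Under the hood such a proxy boils down to a few lines of proxies.json. A sketch of what that could look like, with placeholder route and backend URLs (check the field names against the current Functions Proxies documentation):

```json
{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "guardianAmp": {
      "matchCondition": { "methods": ["GET"], "route": "/news" },
      "backendUri": "https://amp.theguardian.com/some-article/"
    },
    "blogAmp": {
      "matchCondition": { "methods": ["GET"], "route": "/blog" },
      "backendUri": "https://example-pet-blog.com/some-post/amp/"
    }
  }
}
```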


And presto, your AZP will expose those resources to any path element you wish.

The pulled-through web content from the Guardian AMP page.
The same for my pet blog page!

There are some caveats, however.

There are two to mention. The origin URLs need to be fully descriptive, in the sense that dependencies like JavaScript files must be referenced via fully qualified URLs, not only relative ones. Also, the proxied URL templates appear to require a trailing slash; the URL would not work without one. Maybe there is a trick for that, and if so, please let me know. Enjoy!

Show page + download file simultaneously —


This might not be the most complex topic on earth, but I never said that my aim is to support only the tech gurus out there with my open-sourced ideas, so here we go.

Imagine you want to show a "thank you" page and at the same time initiate a download. Let's say it's a whitepaper your visitor is interested in.

There is an easy way of getting it done with a little bit of JavaScript, all ready for you to play around with on JSFiddle:

Have fun, and please do consider that this approach will not work on mobile. There you should rather point to a download destination the user has to actively visit. That is best practice anyway, and you will see in my little example that there is a download link to click on, too.
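For reference, a minimal sketch of the idea (not the linked fiddle itself): build a hidden link to the file and click it programmatically once the "thank you" page has loaded. The whitepaper path is a placeholder.

```javascript
// Build a link to the file and click it programmatically on page load.
// The whitepaper path below is a hypothetical placeholder.
function startDownload(doc, fileUrl) {
  const link = doc.createElement('a');
  link.href = fileUrl;
  link.download = '';        // hint: download instead of navigating
  doc.body.appendChild(link);
  link.click();
  link.remove();
  return link.href;
}

// In the browser, kick it off once the page has rendered:
if (typeof document !== 'undefined') {
  document.addEventListener('DOMContentLoaded', () => {
    startDownload(document, '/downloads/whitepaper.pdf'); // hypothetical path
  });
}
```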

A scaling site is nice, now make it AMP — 19. July 2017


This guy here describes how he made a website built for scale with some basic components and an architecture dedicated to static content, using Hexo for static content creation and a combination of AWS S3 and AWS CloudFront for hosting.

I can confirm that static content put into the right architecture is just crazy fast (and easy to scale out); check out my little playground site log2talk. I am using Hexo there as well, together with a theme called "Icarus". In this case the hosting works via CloudFlare (DNS only) + Firebase Hosting (for everything else). Firebase Hosting uses Google's CDN and is a top choice for this purpose.

When I put up the page I told myself it's great how all of this works; now I want to AMPlify my already mobile-friendly site. In case you have not heard of AMP (Accelerated Mobile Pages), it's Google's way of making the mobile experience lightning fast.

So how to do that with my existing Hexo + Icarus themed site?

First, get this plugin for Hexo:

Useful in this context is this article explaining how the server would know about an AMP version of your page:

Deviating from the tutorial provided in the npmjs package description, I had to take the following steps instead.

a) Fix “could not generate content.json”: 

I was able to fix that annoying error like this:

npm install hexo-generator-json-content@1 --save (which uses version 1 only and downloads it updating package.json as well)

b) Changing head.ejs – but somewhere else!

Then edit the file "head.ejs" in the Icarus theme directory, like it says in the npm how-to – only that this file is not under (..)/_partial but under themes/icarus/layout/common!

In head.ejs I pasted this EJS snippet right before the first <link rel> starts:

<% if (is_post() && config.generator_amp){ %> 

  <link rel="amphtml" href="<%= config.url %>/<%= page.path %>amp/index.html"> 

<% } %>

(The config.url in your page _config.yml file needs to be correctly set, otherwise the URL will be malformed!)

Run the test server and check whether the content is valid AMP via this site:

Your AMP URLs would follow the pattern <config.url>/<page path>/amp/index.html, and each such URL is an actual AMP page.

There is one big caveat for me and my pet site log2talk, though: even though there are clear links to the AMP versions for every article on my site, Google would not index the AMP versions. Why that is I have no idea; they all happily pass the AMP validator, so that cannot be the reason.

Have fun!

DISCLAIMER: It might be that I will not extend the log2talk domain registration, so the site may be gone by the time you read this while I am still linking to it. Don't get distracted too much by that; Hexo is still an interesting tool to look at. Just not the perfect choice for all occasions by all means.

The appeal of static content website generators like Hexo — 1. June 2017


Static website generators are a new way of managing your website. A tad nerdy, a tad complex, and certainly with a very unique touch. Here is a little site I created using Hexo (more on Hexo later):

So what is it about? We love our dynamic websites, but ironically the best-performing, most scalable sites you can have consist of "static" assets: HTML, CSS, JavaScript, and of course all the media you are using. Is that the reason why static content generators are popular enough to justify the existence of a website that features the most popular ones, a list that keeps growing and is far longer than you would expect anyway?

Partially, maybe. The actual reason, however, is that "traditional" CMSes, as powerful as they are (and there is a reason why WordPress powers some 27% of all websites!), are not friendly to programmatic access. Just try to roll out a whole new website as part of one deployment script, possibly worked on by any number of people simultaneously, residing in a code repository like GitHub. That is not the traditional way of managing a website, where usually several people are involved in publishing just about anything; let alone creating a new flavor of an existing site by issuing terminal commands. Since most development teams nowadays work that way, that kind of approach translated to managing a website certainly has its appeal.

To the developer community, that is. If you are running a site for the sake of focusing on the content, then that is the first thing to realize: you will only gain convenience and speed from static site generators if you have development acumen. Otherwise there is a steep learning curve ahead, certainly not comparable to how easily you get going with e.g. WordPress.

So I tried it myself and created a Hexo-generated site that is built and managed by Netlify, a platform that supports the building of static websites like no other I have seen. You have your Git repository, choose a content generator, specify the parameters you want or just keep the defaults, and the rest is just about getting the website skeleton pushed to GitHub. Netlify takes over from there and gets it all deployed, distributed via its own CDN. So yes, it performs well. Hexo is written in NodeJS and works perfectly fine in my favorite IDE, Cloud9, so you can get started without ever leaving your browser and also run the "development server" there to check what you have done before you push any changes. The thing to get used to is that you have to clean and re-build the static files every now and then for bigger changes in order to see them reflected. (Reminds me of debugging Java code.)

Hexo comes with plenty of nice features and a rich set of themes to choose from. Clone a theme into your site folder (and of course those themes sit on GitHub, like pretty much everything else you will be working with) and you are almost done. There are some configurations you can apply to your basic site and the theme, and probably you will have to, yet it's not necessary to get started.

Another thing to get used to is that your articles will have to be written in Markdown. Did I mention that static content generators are geared towards developers?

Now you wonder: what to do about the dynamic content your site requires? Form captures and what not. Don't you need servers for that? Well, no. The new-school way of thinking about that is to go serverless and rely on the capabilities of the clouds. Here is a nice article about how to create your serverless file uploads. And then of course there are SaaS solutions for these kinds of tasks that take everything away for some fee, if you want to go down the super lazy lane.

So that is all nice and dandy, yet what is the bottom line? Should you go static site generator?

For me, I have to say it's fun, partly because of the additional burden of "mastering" Hexo (slightly exaggerating). It's certainly not for everyone, and if your website is not part of ongoing builds then the big question really is what the reason would be to manage your site that way. Once you have set everything up, you will however enjoy a lot of nice features like asset optimizations (minifying etc.), easy management of SSL certificates right out of your terminal, and things alike. Is that cool? I don't think so. All in all, there is no reason to consider static site generators the new way to go. It is just an alternative way of going. And to be frank, it is not even a new way: pre-rendering static assets is a technique that some CMSes applied years back, and there are plenty of libraries, e.g. for Spring Boot, that can do it. Well! The choice is yours.

Little LogicApp study: using it to ingest Uptimerobot data — 28. April 2017


Uptimerobot is a fantastic and affordable solution for checking any number of webpages for availability and for the types of issues that occurred, if any (based on HTTP status headers).

Sometimes you want the gathered data in your own systems, however – let's say for your own data analytics, combining outage and error information about your websites with other available data. Uptimerobot features a functionality to send multiple types of notifications in parallel: you can receive emails and at the same time push the error data to any web endpoint you control. The technical term for that endpoint is "webhook". The data payload pushed in "your" direction from Uptimerobot looks like this, stripped of headers:

{
  "queries": {
    "monitorID": "778784728",
    "monitorURL": "(...)",
    "monitorFriendlyName": "My shaky website",
    "alertType": "2",
    "alertTypeFriendlyName": "Up",
    "alertDetails": "HTTP 200 - OK",
    "monitorAlertContacts": "(...)",
    "alertDateTime": "1492651585"
  }
}

Now, that webhook Uptimerobot pushes the payload to can be perfectly implemented using Azure Logic Apps, which comes with multiple benefits like:

  • Detailed monitoring
  • A plethora of integrations ready to use with Logic Apps, all visually aided by the Logic App Designer
  • Everything can be done from the Azure Portal without any need to leave it

In fact, by the time I wrote this I only had a (rather simple, cheaper flavour of) Chromebook available and had no issues building an integration ad hoc.

So what I did was capture the Uptimerobot error/outage data and pump it into a DocumentDB instance on Azure for further transformation later on. (Not covered in this tutorial; maybe more on that later.) This could of course have worked with Azure SQL Database as well; in fact, the support for SQL Server in Logic Apps is excellent.

How would that all work?

1) Create a Logic App workflow and design a new "HTTP" flow step. You can define the schema of the payload there by providing sample payloads received by the HTTP step. Your Logic App webhook will receive POST and GET calls, and the Logic App blade will show you the URL. I opted for processing POST requests going forward.

2) Finish the flow by having the output point to a Document DB instance.

3) All of this is extremely straightforward. What I then did was craft the JSON data to be put into the DocumentDB database using Logic App expressions. That's some sort of meta language; the downside is that you have to get familiar with it, the upside is that it's powerful.


So with a couple of functions we can include easily some useful extra-information.


"body": {
  "id": "@guid()",
  "time_utc": "@utcnow()",
  "uptimedata": "@triggerOutputs()['queries']"
}

Create a new id and assign a GUID to it, capture the current timestamp in another variable, and for the rest pass everything from the webhook trigger into one variable, filtering out everything but the "queries" node, since that is what I am interested in.

4) Save the Logic App flow and set it active.

5) In Uptimerobot, specify to send HTTP POST notifications to exactly the URL mentioned in 1).

And that is about it. New data will keep flowing into my DocumentDB instance, and for every single captured webhook call I can check what data flowed in and how it got transformed.


You could use the same flow to dispatch any type of event data and e.g. chain it all up with other Logic App components that screen for new Twitter posts or anything alike. So there are quite some complex workflows you could build up there, comparably easily.

Yet not everything is dandy about Logic Apps, as the service is still maturing – the Logic App code viewer and editor will remain one of your best friends when working with it. I read the ambition is to change that and include e.g. IntelliSense (more than is already available).

On the other hand, this little how-to is only scratching the surface, of course – you could add a lot more security, pre-validation and more by hooking in, for example, Azure Logic App proxies. With Azure Monitor you could add a great deal of governance to the whole data ingest.

All in all, this little and admittedly simple use case was again a very pleasant experience with few hurdles in my way. Logic Apps remains promising, and it certainly is very useful already. I'll keep watching it for sure.

Supercharge your marketing automation with IFTTT — 29. January 2017


Sometimes the most obvious ways of getting some automation behind your marketing or lead management activities fall out of sight, so here is a quick reminder:

IFTTT can help.

There is an article dedicated to this topic and I recommend the read:

The number of integrations available is amazing. Let's say you are an actor and you need to market yourself and your company; then putting plenty of content out there regularly is key, right? Why not make sure your digital drumroll is automatically catered for? Think of a recipe like this one:

The fact that IFTTT is as easy to use as it gets certainly helps. You can however do complex things using Maker applets, e.g. calling your own APIs. And you can have multiple applets executed in response to events.

And that’s that – just give it a try and judge yourself. I came to love it.

Using Azure Media Services for everyday video content delivery — 4. November 2016


What’s Azure Media Services?

If you want the detailed explanation, here you go:

The short version: it's a set of services geared towards scalable video transcoding and publishing. If you are familiar with AWS, think of something along the lines of Elastic Transcoder. You will not need these types of services for making videos available online once in a while; to that end, a simple file upload and exposing the file online does the job. However, things change if you have to manage a lot of videos and video formats with constraints like Digital Rights Management (DRM) and on top need to cater for great scale and resilience. That is where Media Services kicks in. Ultimately, it does a lot of processing for you and makes your video content available via so-called Streaming Endpoints.

In Microsoft Azure Media Services, a Streaming Endpoint represents a streaming service that can deliver content directly to a client player application, or to a Content Delivery Network (CDN) for further distribution. Media Services also provides seamless Azure CDN integration. The outbound stream from a Streaming Endpoint service can be a live stream or a video on demand Asset in your Media Services account.

Scaling out media delivery via Media Services works by scaling the Streaming Endpoints and/or using a CDN. At the moment that would have to be Verizon, though; for other CDN providers you have to apply some manual configuration.

How to put it all to practical use

In the next paragraphs I will give you a practical, technology- and programming-language-independent implementation guide. If you are a NodeJS developer this guide will be particularly interesting, given that at the moment this article was written (1st of November 2016) there was no official Microsoft SDK for Media Services.

A couple of things on taxonomy and some preparation

So let us assume a simple video lifecycle: upload video, en-/transcode video, make that video available for streaming. Things could be more sophisticated when thinking about DRM, and I will shortly touch on that later, but for now let us keep it all clean and simple.


A key thing for uploading: the Media Services REST API does not handle the actual upload. That is done through the Storage REST API.

So all in all, the Media Services API and the Storage API, together with the more general ACS (Access Control Service) API, are the main APIs we are looking at.

An asset is a container for multiple types or sets of objects in Media Services, including video, audio, images, thumbnail collections, text tracks, and closed caption files.

You need permissions for both the Media Service and the Storage Service it uses.

Here is a Postman collection that will make working with the API endpoints a lot easier, kindly contributed by John: get it now.

A recipe for using the Media Services

First things first: obtain an access token from ACS (the Access Control Service employed by Azure).

Then find out about the closest Media Endpoint API, using the header x-ms-version: 2.11.

NOTE: You should initially connect to the Media Services root URI, and if you get a 301 redirect back in response, you should make subsequent calls to the new URI. In addition, do not use any auto-redirect/follow logic in your requests; HTTP verbs and request bodies will not be forwarded to the new URI.

Next, you would create an asset first. (It’s metadata.) That works via {{Media Endpoint API}}/Assets.

A call to /Assets will create a new Blob service container, making use of the Storage Account associated with the Media Service. The response contains the container URI in a "Uri" property.

Then you upload the physical video file; that happens via the Storage API, the Blob service to be more precise.

Alternatively use the Azure CLI or one of the SDKs, like the NodeJS storage SDK that deals with BLOBs.

(NOTE: Storing media files might or might not be as straightforward as just dropping a file into Azure Blob Storage. You might have to encode the file, think of DRM – here is a C#-coded Azure Function that demonstrates how this would work. Not relevant for content without DRM, yet good to be aware of.)

The AssetFile entity – metadata again, like the Asset – represents a video or audio file that is stored in a blob container. An asset file is always associated with an asset, and an asset may contain one or many AssetFiles. The Media Services Encoder task fails if an asset file object is not associated with a digital file in a blob container.

That is why you have to “merge” the existing BLOB previously uploaded with the Asset metadata using the {{Media Endpoint API}}/Files API.

For that you will need the asset id that was returned as id in your initial /Assets call and looks like this:

"Id": "nb:cid:UUID:8264c73e-6ba4-4c68-ba9c-62caf49e84a9"

Once this is done it’s time for encoding.

For this, create a job via the /Jobs HTTP POST endpoint and specify some default media processor to get started, like the one that goes with the id nb:mpid:UUID:ff4df607-d419-42f0-bc17-a481b1331e56. This is the "H264 Multiple Bitrate 720p" configuration and results in neat MP4s stored right in the BLOB container, where they will be ready for publishing.
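A hedged sketch of the /Jobs request body. Property names (InputMediaAssets, Tasks, TaskBody) follow the classic Media Services REST API; double-check them against the current reference. assetUri points at the Asset created earlier, mediaProcessorId is e.g. the H264 one mentioned above.

```javascript
// Build the JSON body for the encoding job. Field names per the classic
// Media Services REST API; verify against the current reference before use.
function buildEncodeJobBody(assetUri, mediaProcessorId) {
  return {
    Name: 'EncodeToMultibitrateMP4',
    InputMediaAssets: [{ __metadata: { uri: assetUri } }],
    Tasks: [{
      Configuration: 'H264 Multiple Bitrate 720p',
      MediaProcessorId: mediaProcessorId,
      TaskBody: '<?xml version="1.0" encoding="utf-8"?>' +
        '<taskBody><inputAsset>JobInputAsset(0)</inputAsset>' +
        '<outputAsset>JobOutputAsset(0)</outputAsset></taskBody>'
    }]
  };
}
```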

Get more out-of-the-box media processors via /MediaProcessors.

Check the state of the job via the GET /Jobs endpoint. Status 3 indicates processing is done. (More details here.)

Alternatively you can establish a webhook using /NotificationEndPoints.
Last but not least you need to publish it all. For this you will have to do two additional things.

First, you will need a so-called Access Policy, which then allows you to create a so-called Locator. (The Access Policy can be shared, so you will need to create it only once.)

Both operations, when going via the REST API, require you to deliver a whole bunch of HTTP headers. The Postman collection provided earlier will tell you in detail which ones.

In both cases you need to use the Media Endpoint API you found out about at the beginning of the sequence described here, not the initial root URI!

For the Access Policy you will get an ID like this:

"Id": "nb:pid:UUID:853c765f-04ca-46c3-a519-26ac2c817f4a"

Use this in conjunction with the Locator. The Locator finally determines when the video is published.

Note that there is a choice of Progressive vs. Streaming locators, reflected by the "Type" parameter in the call for creating the Locator: 1 = Progressive, 2 = On Demand Streaming.
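For orientation, a hedged sketch of the two request bodies involved in publishing; property names follow the classic Media Services REST API, so verify them before relying on this.

```javascript
// Request bodies for the publishing steps (classic Media Services REST API).
function buildAccessPolicyBody(name, durationMinutes) {
  return {
    Name: name,
    DurationInMinutes: String(durationMinutes),
    Permissions: 1 // 1 = Read, which is all a playback locator needs
  };
}

function buildLocatorBody(accessPolicyId, assetId, type) {
  return {
    AccessPolicyId: accessPolicyId, // "nb:pid:UUID:..." from the step above
    AssetId: assetId,               // "nb:cid:UUID:..." from the /Assets call
    Type: type                      // 1 = Progressive, 2 = On Demand Streaming
  };
}
```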

How to test the streaming endpoints

In Azure you are given a couple of streaming endpoints with every published video, simply because of the different supported asset formats.

GET {{ApiEndpoint}}/Locators('<locatorid>') returns a property named Path that you can use to construct the publishing URL.

Like so: <Path to streaming endpoint>/<name of the encoded asset file without suffix>.ism/manifest

The .ism/manifest suffix is for streaming, whereas the "Progressive Streaming" (= downloading) URLs usually end in .mp4.
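Gluing the Path property and the asset file name together is simple enough to sketch (the endpoint and file names in the usage example are made up):

```javascript
// Combine the Locator's Path with the asset file name (suffix stripped)
// and the ".ism/manifest" ending described above.
function streamingManifestUrl(locatorPath, assetFileName) {
  const base = assetFileName.replace(/\.[^.\/]+$/, ''); // strip e.g. ".mp4"
  return locatorPath.replace(/\/+$/, '') + '/' + base + '.ism/manifest';
}

// Usage: streamingManifestUrl('https://example.streaming.net/abc/', 'movie.mp4')
```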

Test both types of endpoints in a browser by copying the full URL into the resource text box of the Azure Media Player test page:

Just make sure to cut off the "http:" part at the beginning; the streaming source URL starts with "//".

If you want to set up the Azure Media Player yourself, well, that is easy enough, check out this JSFiddle for inspiration. (I might have taken the published streaming endpoint down by the time you read this and no video is displayed therefore – sorry for that.)


Doing all of this with Java:

Another NB:

Azure Media Services come with these limitations: 

Most of them are soft limitations, though, and can be lifted on request.


Chatbots for Facebook Messenger at a glance — 4. October 2016


Whether you are in favor of chatbots or not, in many ways they can be game changers. That is a bigger topic in its own right; in this very post I will just spend some words on observations made and how to utilize Facebook Messenger chatbots. So first things first: of course the official API documentation is the first great entry point. With version 1.2, some very interesting features around payments were introduced, and since the whole platform is in flux a lot more will certainly come soon.

So basically you need a Facebook page, and you wire it up with a custom endpoint that implements the webhooks Facebook will (extensively) use. All very well explained here. After the page is wired up, that webhook of yours will receive events like this:

{"sender":{"id":"925740457553273"},"recipient":{"id":"1387842364565702"},"timestamp":1472298240493,"message":{"mid":"mid.1472298240485:5db7a3720a27f7b785","seq":61,"text":"bla bla bla toller chat"}}
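A sketch of digesting such events in the webhook handler. The Messenger callback wraps them in batches under entry[].messaging[]; extractMessages is a hypothetical helper of my own, not part of any SDK.

```javascript
// Pull (senderId, text) pairs out of a Messenger webhook POST body.
function extractMessages(body) {
  const out = [];
  (body.entry || []).forEach((entry) => {
    (entry.messaging || []).forEach((ev) => {
      if (ev.message && ev.message.text) {
        out.push({ senderId: ev.sender.id, text: ev.message.text });
      }
    });
  });
  return out;
}
```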

The so-called "Page Access Token" is the number one token you will need to do just about anything with your bot, and it is directly tied to your Facebook page. Try the Graph explorer to easily get access to User Access Tokens, Page Access Tokens and more.

The tokens you acquire are directly linked to permissions you have asked the user for, of course. The type of tokens determine the type of action you are able to perform.

As a general rule of thumb, User Access Tokens are required for pulling rich information from the user's timeline, and you have to explicitly gain the permissions from the user beforehand, meaning the user has to be prompted with a dialog asking for all the permissions you need. People do not like to give extensive permissions, so be very careful with that.

Page Access Tokens, on the other hand, originate from your Facebook page and attached app and can deliver you insights about users without prompting them first for permissions. In fact you can use these tokens to start conversations – however, only if the user initiated the conversation, and of course there are Facebook guidelines around what type of content can be messaged, in particular for push (chat) messaging.


Nice, but I want to know about who is talking with my bot

Well, you can get some user data just using a Page Access Token, via a Graph API call of the form <USER_ID>?access_token=PAGE_ACCESS_TOKEN.

The catch here is, you have to use the recipient id gained when the conversation started and apply a valid Page Access Token as the access_token parameter. Only in that combination will it work.
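A hedged sketch of that lookup URL against the Graph API (v2.6 was current in late 2016; adjust the version segment for today). The helper only builds the URL, it does not call it.

```javascript
// Build the user profile lookup URL for the Messenger profile fields.
function userProfileUrl(userId, pageAccessToken) {
  return 'https://graph.facebook.com/v2.6/' + encodeURIComponent(userId) +
    '?fields=first_name,last_name,profile_pic,locale,timezone,gender' +
    '&access_token=' + encodeURIComponent(pageAccessToken);
}
```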

The returned data is limited and looks like this:

{
  "first_name": "Zorro",
  "last_name": "Montana",
  "profile_pic": "",
  "locale": "en_US",
  "timezone": 2,
  "gender": "male"
}

Clearly, the downside there is that you do not get as much data as you could via the Graph User API.

So in short, you will have to work with landing pages / deep links in order to interact with the Graph User API (and/or other resources at hand) if you want to learn more about your chatbot consumers.

How to manage the conversation

In fact, the given chat samples on their own will not help you much in having a halfway decent interaction with your consumers. The reason is that your bot needs to make sense of the language, i.e. the intent of the messages thrown at it, and then act based on that intent. Services like Microsoft's LUIS help to translate user input into intents you can work with.


A typical LUIS link looks like this: the id is the id of your app instance, the subscription-key is tied to your account, and the query is the URL-encoded payload containing the "user query". That is the question you want to post against LUIS.

For the sample query the response would look like this, using pre-built sets of intents:


{
  "query": "where can I get pizza?",
  "intents": [
    {
      "intent": "builtin.intent.places.find_place"
    }
  ],
  "entities": [
    {
      "entity": "pizza",
      "type": "builtin.places.product"
    }
  ]
}

The tedious part here is that you need to train the system and specify the entities to differentiate between; however, LUIS is well done and makes it fairly easy to deal with that part. Keep in mind LUIS is very young and plenty of changes are expected, so be prepared for code changes.
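Acting on a response like the one above then boils down to picking the top intent; a tiny helper (assuming the intents array comes ordered by score, as the API returns it):

```javascript
// Pick the top-ranked intent from a LUIS response, or 'none' if there is none.
function topIntent(luisResponse) {
  const intents = luisResponse.intents || [];
  return intents.length > 0 ? intents[0].intent : 'none';
}
```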

And the chatbot is still not responsive (to everybody)!

When you deploy a chatbot for the first time, it will only respond to your own inquiries. To make it work for the rest of the world you have to submit it for review. To be more precise, you need to submit its capabilities for review.

In particular for the pages_messaging_subscriptions application permission this is kind of intense: you have to provide a screencast of why and how you want to message people proactively after the first conversation. Oh, and your users need to have chatted with your bot within the last 24 hours before you deliver a message, otherwise your messages will no longer be considered. Facebook wants to make very sure you do not annoy anybody, and you have to keep these constraints in mind, while new ones might pop up anytime.

Important: You can find out about the status of the pending review via the developer console. After the application permissions were granted, you still have to set your app to "public" via the Developer console -> App review. Only then will it be possible for others to chat with your bot at all.

Have fun creating your own chatbots. What I would love to see are some well-made chatbot (text) adventures! Go for it!