Moe's Useful Things

A collection of findings, hints and what not.

Masking website origins with the help of Azure Function Proxy — 25. July 2017

Azure Function Proxy (AZP) was designed to help you rewrite a multitude of API URLs in a uniform fashion, but it can serve a rather unorthodox use case just as well.

Imagine you have two web resources you would like to expose under one URL, with two path extensions mapped to these web resources. As we live in a modern, mobile-first world, these web resources happen to be in AMP format.

So these two resources are a) one page from my pet blog log2talk.com and b) one from digiday.com. What I did with the help of AZP: create a new proxy endpoint with a route template and map the content origins to sub-paths of my Function endpoint. (Sounds a lot more complex than it really is, see the screen caps below.)

And presto, your AZP will expose those resources under any path element you wish.
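
For reference, such a proxy definition boils down to a proxies.json along these lines (the route names and origin URLs below are placeholders, not my exact setup):

{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "amp-page-one": {
      "matchCondition": {
        "methods": [ "GET" ],
        "route": "/one/{*restOfPath}"
      },
      "backendUri": "https://example-blog.com/some/amp/page/{restOfPath}"
    },
    "amp-page-two": {
      "matchCondition": {
        "methods": [ "GET" ],
        "route": "/two/{*restOfPath}"
      },
      "backendUri": "https://example-publisher.com/another/amp/page/{restOfPath}"
    }
  }
}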

The pulled-through web content from the Guardian AMP page.
The same for log2talk.com!

There are some caveats, however.

The origin pages need to be fully self-contained in the sense that dependent resources, e.g. JavaScript files, are referenced via fully qualified URLs and not only relative ones; and the proxied URL templates appear to require a trailing slash, the URL would not work without it. Maybe there is a trick for that, if so please let me know. Enjoy!

Show page + download file simultaneously —

This might not be the most complex topic on earth but I never said that my aim is to support only the tech gurus out there with my open sourced ideas, so here we go.

Imagine you want to show a “thank you page” and at the same time initiate a download. Let’s say a whitepaper your visitor is interested in.

There is an easy way of getting it done with a little bit of JavaScript, all ready to play around with for you on JSFiddle:

https://jsfiddle.net/blorp/v04zt6w8/2/
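
In case the fiddle ever disappears, the gist of it is roughly this minimal sketch, with the file URL and element ID made up:

// Runs once the thank-you page has loaded.
window.addEventListener('DOMContentLoaded', function () {
  // Point a hidden iframe at the file; the browser starts the download
  // without navigating away from the thank-you page.
  var frame = document.createElement('iframe');
  frame.style.display = 'none';
  frame.src = '/downloads/whitepaper.pdf';
  document.body.appendChild(frame);

  // Keep a visible fallback link for browsers (and mobile devices)
  // where the automatic download does not kick in.
  document.getElementById('fallback-download-link').href = frame.src;
});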

Have fun, and please do consider that this approach will not work on mobile. There you should rather point to a download destination the user has to actively visit. That is best practice anyway, and you will see in my little example that there is a download link to click on, too.

A scaling site is nice, now make it AMP — 19. July 2017

This guy here describes how he built a website for scale with some basic components and an architecture dedicated to static content, using Hexo for static content generation and a combination of AWS S3 and AWS CloudFront for hosting.

I can confirm that static content put into the right architecture is just crazy fast (and easy to scale out), check out my little playground site log2talk.com. And I am using Hexo there as well, together with a theme called “Icarus”. In this case the hosting works via CloudFlare (DNS only) + Firebase Hosting (for everything else). Firebase Hosting uses Google’s CDN and is a top choice for this purpose.

When I put up the page I told myself: it’s great how all of this works, now I want to AMPlify my already mobile-friendly site. In case you have not heard about it, AMP (Accelerated Mobile Pages) is Google’s way of making the mobile experience lightning fast.

So how to do that with my existing Hexo + Icarus themed site?

First, get this plugin for Hexo:

https://www.npmjs.com/package/hexo-generator-amp

Useful in this context is this article explaining how the server would know about an AMP version of your page:

https://stackoverflow.com/questions/37103814/how-does-the-server-know-when-to-serve-an-amp-page

Deviating from the tutorial provided in the npmjs package description, I had to take the following steps instead.

a) Fix “could not generate content.json”: 

I was able to fix that annoying error like this:

npm install hexo-generator-json-content@1 --save (which pins the package to version 1 and updates package.json as well)

b) Changing head.ejs – but somewhere else!

Then edit the file “head.ejs” in the Icarus theme directory, like it says in the npm how-to – only that this file is not under (..)/_partial but under themes/icarus/layout/common!

In head.ejs I pasted this EJS snippet right before the first <link rel> starts:

<% if (is_post() && config.generator_amp){ %>
  <link rel="amphtml" href="<%= config.url %>/<%= page.path %>amp/index.html">
<% } %>

(The config.url in your page _config.yml file needs to be correctly set, otherwise the URL will be malformed!)
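
With config.url set to https://log2talk.com, for instance, the rendered tag for a post living under 2017/07/12/society/ comes out as:

<link rel="amphtml" href="https://log2talk.com/2017/07/12/society/amp/index.html">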

Run the test server and check whether the content is valid AMP via this site:

https://validator.ampproject.org/

Your AMP URLs would then look like this:

http://<site>/2017/06/25/life/amp/

Like this for example, an actual AMP page: https://log2talk.com/2017/07/12/society/amp

There is one big caveat for me and my pet site log2talk, though: even though there are clear links to the AMP versions for every article on my site, Google would not index them. Why that is I have no idea – they all happily pass the AMP validator, so that cannot be the reason.

Have fun!

DISCLAIMER: It might be that I am not going to extend the log2talk domain registration, and therefore the site may be gone by the time you read this while I am still linking to it. Don’t get distracted too much by that; Hexo is still an interesting tool to look at. Just not the perfect choice for all occasions.

The appeal of static content website generators like Hexo — 1. June 2017

Static website generators are a new way of managing your website. A tad nerdy, a tad complex. Certainly with a very unique touch. Here is a little site I created using Hexo – more on Hexo later: log2talk.com.

So what is it about? We love our dynamic websites, but ironically the best performing, most scalable sites you can have consist of “static” assets: HTML, CSS, JavaScript – and of course all the media you are using. Is that the reason why static content generators are popular enough to justify the existence of a website dedicated to listing the most popular ones, a list that keeps growing and is far longer than you would expect?

Partially, maybe. The actual reason however is that “traditional” CMSs, as powerful as they are – and there is a reason why WordPress accounts for 27% of all website CMSs! – are not friendly to programmatic access. Just try to roll out a whole new website as part of one deployment script, worked on by any number of people simultaneously and residing in a code repository like GitHub. That is not the traditional way of managing a website – there are usually several people involved in publishing just about anything, let alone creating a new flavor of an existing site by issuing terminal commands. Since most development teams nowadays work that way, translating that approach to managing a website certainly has its appeal.

To the developer community, that is. If you are running a site for the sake of focusing on the content, then that is the first thing to realize: you will only gain convenience and speed from static site generators if you have development acumen. Otherwise there is a steep learning curve ahead, certainly not comparable to how easily you get going with e.g. WordPress.

So I tried it myself and created a Hexo-generated site that is built and managed by Netlify, a platform that supports the build of static websites like no other I have seen. You have to have your Git repository, choose a content generator, specify the parameters you want or just keep the defaults – and the rest is just about getting the website skeleton pushed to GitHub. Netlify takes over from there and gets it all deployed and distributed via its own CDN. So yes, it performs well. Hexo is written in NodeJS and works perfectly fine in my favorite IDE Cloud9, so you can get started without ever leaving your browser and run the “development server” there as well, to check what you have done before you push any changes. The thing to get used to is that you have to clean and re-build the static files every now and then for bigger changes in order to see them reflected. (Reminds me of debugging Java code.)
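
That clean-and-rebuild cycle boils down to a few Hexo commands, run from the site root:

# wipe the generated files and the cache
hexo clean
# re-generate the static site into the public/ folder
hexo generate
# serve it locally (http://localhost:4000 by default) to review before pushing
hexo server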

Hexo comes with plenty of nice features and a rich set of themes to choose from. Clone a theme into your site folder – and of course those themes sit in GitHub like pretty much anything else you will be working with – and you are almost done. There are some configurations you can (and probably will have to) do for your basic site and the theme, yet it’s not necessary in order to get started.

Another thing to get used to is that your articles will have to be written in Markdown. Did I mention that static content generators are geared towards developers?

Now you wonder, what to do about the dynamic content that your site requires? Form captures and what not. Don’t you need servers for that? Well, no. The new-school way of thinking about that is to go serverless and rely on the capabilities of the clouds. Here is a nice article about how to create your serverless file uploads. And then of course there are SaaS solutions for these kinds of tasks that take everything away for a fee, if you want to go down the super-lazy lane.

So that is all nice and dandy, yet what is the bottom line? Should you go static site generator?

For me I have to say it’s fun, partly because of the additional burden of “mastering” Hexo – slightly exaggerating. It’s certainly not for everyone, and if your website is not part of ongoing builds then the big question really is why you would want to manage your site that way. Once you have set everything up you will however enjoy a lot of nice features like asset optimization (minification etc.), easy management of SSL certificates right out of your terminal and the like. Is that cool? I don’t think so. All in all there is no reason to consider serverless static generators the new way to go. It is just an alternative way of going. And to be frank, it is not even a new way: pre-rendering static assets is a technique some CMSs applied years back, and there are plenty of libraries, e.g. for Spring Boot, that can do it. Well! The choice is yours.

Little LogicApp study: using it to ingest Uptimerobot data — 28. April 2017

Uptimerobot is a fantastic and affordable solution for checking any number of webpages for availability and the types of issues that occurred, if any (based on HTTP status codes).

Sometimes you want the gathered data in your own systems however – say for your own data analytics, combining outage and error information about your websites with other data available. Uptimerobot can send multiple types of notifications in parallel: you can receive emails and at the same time have the error data pushed to any web endpoint you control. The technical term for that endpoint is “webhook”. The data payload pushed in your direction from Uptimerobot looks like this, stripped of headers:

 "queries": { 

        "monitorID": "778784728", 

        "monitorURL": "(...)", 

        "monitorFriendlyName": "My shaky website", 

        "alertType": "2", 

        "alertTypeFriendlyName": "Up", 

        "alertDetails": "HTTP 200 - OK", 

        "monitorAlertContacts": "(...)", 

        "alertDateTime": "1492651585" 

    }

Now, the webhook Uptimerobot pushes the payload to can be perfectly well implemented using Azure LogicApps, which comes with multiple benefits like:

  • Detailed Monitoring
  • Plethora of integrations ready to use with LogicApps, all visually aided by the LogicApp Designer
  • All can be done from the Azure Portal without any need to leave it

In fact by the time I wrote this I only had a (rather simple, cheaper flavour of) Chromebook available and found no issues building an integration ad-hoc.

So what I did was capture the Uptimerobot error/outage data and pump it into a DocumentDB instance on Azure for further transformation later on. (Not covered in this tutorial. Maybe more on that later.) This could have worked with Azure SQL Database as well of course; in fact the support for SQL Server in LogicApps is excellent.

How would that all work?

1) Create a LogicApp workflow and add a new “HTTP” trigger step. You can define the schema of the payload there by providing a sample payload as received by the HTTP step. Your LogicApp webhook can receive POST and GET calls, and the LogicApp blade will show you the URL. I opted for processing POST requests going forward.

2) Finish the flow by having the output point to a Document DB instance.

3) All of this is extremely straightforward; what I did next was craft the JSON data to be put into the DocumentDB database using Logic App expressions. That is some sort of meta language – the downside is that you have to get familiar with it, the upside is that it’s powerful.

So with a couple of functions we can easily include some useful extra information.

{
    "body": {
        "id": "@guid()",
        "time_utc": "@utcnow()",
        "uptimedata": "@triggerOutputs()['queries']"
    }
}

Create a new id and assign a GUID to it, capture the current timestamp in another field, and for the rest pass the input from the webhook trigger into one field, filtering out everything but the “queries” node, since that is what I am interested in.

4) Save the flow and set the LogicApp to active.

5) In Uptimerobot, specify that HTTP POST notifications should be sent to exactly the URL mentioned in step 1.
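
To verify the flow before Uptimerobot fires a real alert, you can simulate a call yourself. Uptimerobot appends the alert data as query string parameters, which is why the expression above reads triggerOutputs()['queries']. A rough sketch with a placeholder trigger URL:

curl -X POST "https://<your-logic-app-trigger-url>&monitorID=778784728&monitorFriendlyName=My%20shaky%20website&alertType=2&alertTypeFriendlyName=Up&alertDetails=HTTP%20200%20-%20OK&alertDateTime=1492651585"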

And that is about it. New data will keep flowing into my Document DB instance. I can check for every single webhook call captured what data was flowing in and how it got transformed.

You could use the same flow to dispatch any type of event data and e.g. chain it up with other LogicApp components that screen for new Twitter posts or anything alike. So there are quite complex workflows you could build up there, comparatively easily.

Yet not everything is dandy about LogicApps, as the service is still maturing – the LogicApp code viewer and editor will remain one of your best friends when working with it. I read the ambition is to change that and include e.g. IntelliSense (beyond what is already available).

On the other hand, this little “how to” is only scratching the surface of course – you could add a lot more security, pre-validation and more by hooking in for example Azure Logic App proxies. With Azure Monitor you could add a great deal of governance to the whole data ingest.

All in all, this little and of course simple use case was again a very pleasant experience with little hurdles in my way. LogicApps remains promising. It certainly is very useful already. I’ll keep watching it for sure.

Supercharge your marketing automation with IFTTT — 29. January 2017

Sometimes the most obvious ways of getting some automation behind your marketing or lead management activities get out of sight, so here is a quick reminder:

IFTTT can help.

There is an article dedicated to this topic and I recommend the read:

http://www.seerinteractive.com/blog/ifttt-recipes-for-marketers/

The number of integrations available is amazing. Let’s say you are an actor and you need to market yourself and your company – then putting plenty of content out there regularly is key, right? Why not make sure your digital drumroll is automatically catered for? Think of a recipe like this one:

https://ifttt.com/applets/103249p-tweet-your-instagrams-as-native-photos-on-twitter

The fact that IFTTT is as easy to use as it gets certainly helps. You can however do more complex things using Maker applets that call e.g. your own APIs. And you can have multiple applets executed in response to events.
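
For the Maker channel, triggering one of your own applets from anywhere is a single HTTP call (the event name and key below are placeholders):

curl -X POST "https://maker.ifttt.com/trigger/new_lead/with/key/<your-maker-key>" \
     -H "Content-Type: application/json" \
     -d '{"value1": "Jane Doe", "value2": "jane@example.com"}'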

And that’s that – just give it a try and judge yourself. I came to love it.

Using Azure Media Services for everyday video content delivery — 4. November 2016

What’s Azure Media Services?

If you want the detailed explanation, here you go: https://azure.microsoft.com/en-us/documentation/articles/media-services-overview/

The short version: it’s a set of services geared towards scalable video transcoding and publishing. If you are familiar with AWS, think of something along the lines of Elastic Transcoder. You will not need these types of services for making videos available online once in a while; to that end, a simple file upload and exposing the file online does the job. However, things change if you have to manage a lot of videos and video formats with constraints like Digital Rights Management (DRM) and on top need to cater for great scale and resilience. That is where Media Services kicks in. Ultimately, it does a lot of processing for you and makes your video content available via so-called Streaming Endpoints.

In Microsoft Azure Media Services, a Streaming Endpoint represents a streaming service that can deliver content directly to a client player application, or to a Content Delivery Network (CDN) for further distribution. Media Services also provides seamless Azure CDN integration. The outbound stream from a Streaming Endpoint service can be a live stream or a video on demand Asset in your Media Services account.

Scaling out media delivery via Media Services works by scaling the Streaming Endpoints and/or using a CDN. At the moment that would have to be Verizon, though. For other CDN providers you have to apply some manual configuration.

How to put it all to practical use

In the next paragraphs I will give you a practical, technology/programming language independent implementation guide – if you are a NodeJS developer this guide will be particularly interesting, given that at the time this article was written (1st of November 2016) there was no official Microsoft SDK for Media Services.

Couple of things on taxonomy and some preparation

So let us assume a simple video lifecycle: upload video, en-/transcode video, make that video available for streaming. Things could be more sophisticated when thinking about DRM, and I will briefly touch on that later, but for now let us keep it all clean and simple.

UPLOAD-ENCODE-PUBLISH

Key thing for uploading – the Media Services REST API does not handle the actual Upload. That is done through the Storage REST API.

So all in all, the Media Services API and the Storage API, together with the more general ACS (Access Control Service) API, are the main APIs we are looking at.

An asset is a container for multiple types or sets of objects in Media Services, including video, audio, images, thumbnail collections, text tracks, and closed caption files.

You need permissions for both the Media Service and the Storage Service it uses.

Here is a Postman collection that will make working with the API endpoints a lot easier, kindly contributed by John: get it now.

A recipe for using the Media Services

First things first, obtain an access token from ACS (Access Control Services employed by Azure).

https://azure.microsoft.com/en-us/documentation/articles/media-services-rest-connect-programmatically/

Find out about the closest Media Endpoint API by calling https://media.windows.net/ with the header x-ms-version: 2.11.

That would be something like: https://wamsamsclus001rest-hs.cloudapp.net/api/

NOTE: The root URI for Media Services is https://media.windows.net/. You should initially connect to this URI, and if you get a 301 redirect back in response, you should make subsequent calls to the new URI. In addition, do not use any auto-redirect/follow logic in your requests. HTTP verbs and request bodies will not be forwarded to the new URI.

Next, you would create an asset first. (It’s metadata.) That works via {{Media Endpoint API}}/Assets.
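
As a rough sketch, the request looks like this (the exact header set is listed in the Postman collection mentioned above, and the asset name is arbitrary):

POST {{Media Endpoint API}}/Assets
Authorization: Bearer <ACS access token>
x-ms-version: 2.11
DataServiceVersion: 3.0
MaxDataServiceVersion: 3.0
Content-Type: application/json
Accept: application/json

{ "Name": "my-first-video" }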

A call to /Assets will create a new Blob service container, making use of the Storage Account associated with the Media Service. The container URI looks like this in the response: "Uri": "https://mediabla.blob.core.windows.net/asset-8264c73e-6ba4-4c68-ba9c-62caf49e84a9"

Then you upload the physical video file; that happens via the Storage API, the Blob service to be more precise.

Alternatively use the Azure CLI or one of the SDKs, like the NodeJS storage SDK that deals with BLOBs.
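
A minimal upload sketch with the NodeJS storage SDK could look like this (storage credentials, container name and file path are placeholders; the container is the asset-... one returned by the /Assets call):

// npm install azure-storage
var azure = require('azure-storage');

// Credentials of the storage account associated with the Media Service.
var blobService = azure.createBlobService('<storage account name>', '<storage account key>');

// Upload the video into the blob container the /Assets call created.
blobService.createBlockBlobFromLocalFile(
  'asset-8264c73e-6ba4-4c68-ba9c-62caf49e84a9', // container from the asset "Uri"
  'xyz.mov',                                    // blob (file) name
  '/path/to/xyz.mov',                           // local file to upload
  function (error, result) {
    if (error) {
      console.error('Upload failed:', error);
    } else {
      console.log('Uploaded blob:', result.name);
    }
  }
);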

(NOTE: Storing media files might not be as straightforward as just dropping a file into Azure Blob Storage. You might have to encode the file, think of DRM – here is a C# coded Azure Function that demonstrates how this would work. Not relevant for content without DRM, yet good to be aware of.)

The AssetFile entity – metadata again, like the Asset – represents a video or audio file that is stored in a blob container. An asset file is always associated with an asset, and an asset may contain one or many AssetFiles. The Media Services Encoder task fails if an asset file object is not associated with a digital file in a blob container.

That is why you have to “merge” the existing BLOB previously uploaded with the Asset metadata using the {{Media Endpoint API}}/Files API.

For that you will need the asset id that was returned as id in your initial /Assets call and looks like this:

"Id": "nb:cid:UUID:8264c73e-6ba4-4c68-ba9c-62caf49e84a9"

Once this is done it’s time for encoding.

For this, create a /Jobs job (HTTP POST endpoint) and specify some default media processor to get started like the one that goes with the id: nb:mpid:UUID:ff4df607-d419-42f0-bc17-a481b1331e56. This is the H264 Multiple Bitrate 720p configuration and results in neat MP4s you can store right into the BLOB container where they will be ready for publishing.
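
The request body for such a job looks roughly like the following. Treat the exact property names as an assumption from my notes and double-check against the Postman collection; the asset id is the one returned by the /Assets call:

POST {{Media Endpoint API}}/Jobs

{
  "Name": "EncodeToMp4",
  "InputMediaAssets": [
    { "__metadata": { "uri": "{{Media Endpoint API}}/Assets('nb:cid:UUID:8264c73e-6ba4-4c68-ba9c-62caf49e84a9')" } }
  ],
  "Tasks": [
    {
      "Configuration": "H264 Multiple Bitrate 720p",
      "MediaProcessorId": "nb:mpid:UUID:ff4df607-d419-42f0-bc17-a481b1331e56",
      "TaskBody": "<?xml version=\"1.0\" encoding=\"utf-8\"?><taskBody><inputAsset>JobInputAsset(0)</inputAsset><outputAsset>JobOutputAsset(0)</outputAsset></taskBody>"
    }
  ]
}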

Get more out-of-the-box media processors via /MediaProcessors.

Check the state of the job via the GET /Jobs endpoint. Status 3 indicates processing is done. (More details here.)

Alternatively you can establish a webhook using /NotificationEndPoints.

Last but not least you need to publish it all. For this you will have to do two additional things.

Going forward, you will need a so-called Access Policy. (It can be shared/reused, so you will need this only once.)

Hence, generate an Access Policy which will allow you to create a so-called Locator.

Both operations, for going via the REST API, will require you to deliver a whole bunch of HTTP headers. The Postman collection provided earlier will tell you in detail which ones.

In both cases you need to use the Media Endpoint API you found out about in the beginning of the sequence described here, not media.windows.net!

For the Access Policy you will get an ID like this:

"Id": "nb:pid:UUID:853c765f-04ca-46c3-a519-26ac2c817f4a"

Use this in conjunction with the Locator. The Locator finally determines when the video is published.

Note that there is a choice of Progressive vs. Streaming locators, reflected by the “Type” parameter in the call for creating the Locator: 1 = Progressive, 2 = On Demand Streaming.
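
Again as a sketch from my notes (verify the property names against the Postman collection): the Access Policy carries a name, a duration and read permission, and the Locator then ties that policy to the asset:

POST {{Media Endpoint API}}/AccessPolicies
{ "Name": "StreamingPolicy", "DurationInMinutes": "525600", "Permissions": 1 }

POST {{Media Endpoint API}}/Locators
{
  "AccessPolicyId": "nb:pid:UUID:853c765f-04ca-46c3-a519-26ac2c817f4a",
  "AssetId": "nb:cid:UUID:8264c73e-6ba4-4c68-ba9c-62caf49e84a9",
  "Type": 2
}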

How to test the streaming endpoints

In Azure you are given a couple of Streaming endpoints with every published video, simply because of different supported asset formats.

GET {{ApiEndpoint}}/Locators(‘<locatorid>’) returns a property named Path that you can use to construct the publishing URL.

Like so: <Path to streaming endpoint>/<name of the encoded asset file without suffix>.ism/manifest

Let’s say the movie you encoded was named xyz.mov, then the URI should be something like this:

//moesmediamadness.streaming.mediaservices.windows.net/bd2e7934-4028-4bc0-a128-7fb5d922c21f/xyz.ism/manifest

The .ism/manifest URLs are for streaming, whereas “Progressive Streaming” (= downloading) ones usually end in .mp4.

Test both types of endpoints in a browser by copying the full URL into the resource text box here:

https://ampdemo.azureedge.net/ 

Just make sure to cut out the “http:” part at the beginning, the streaming source URL starts with “//”.

If you want to set up the Azure Media Player yourself, well, that is easy enough, check out this JSFiddle for inspiration. (I might have taken the published streaming endpoint down by the time you read this and no video is displayed therefore – sorry for that.)

NB: 

Doing all of this with Java: https://azure.microsoft.com/en-us/documentation/articles/media-services-java-how-to-use/

Another NB:

Azure Media Services come with these limitations:

https://azure.microsoft.com/en-us/documentation/articles/media-services-quotas-and-limitations/ 

Most of them are soft limitations though and can be lifted on request.

 

Chatbots for Facebook Messenger at a glance — 4. October 2016

Whether or not you are in favor of chatbots, in many ways they can be game changers. That is a bigger topic in its own right; in this very post I will just spend some words on observations made and how to utilize Facebook Messenger chatbots. So first things first, of course the official API documentation is the first great entry point. With version 1.2 some very interesting features around payments were introduced, and since the whole platform is in flux a lot more will certainly come soon.

So basically you need a Facebook page and wire it up with a custom endpoint that features webhooks Facebook will (extensively) use. All very well explained here. After the page is wired up that webhook of yours will receive events like this:

{"sender":{"id":"925740457553273"},"recipient":{"id":"1387842364565702"},"timestamp":1472298240493,"message":{"mid":"mid.1472298240485:5db7a3720a27f7b785","seq":61,"text":"bla bla bla toller chat"}}

The so-called “Page Access Token” is the number one token you will need to do just about anything with your bot, and it is directly tied to your Facebook page. Try the Graph Explorer to easily get access to User Access Tokens, Page Access Tokens and more.

The tokens you acquire are directly linked to the permissions you have asked the user for, of course. The type of token determines the type of action you are able to perform.

General rule of thumb: User Access Tokens are required for pulling rich information from the user’s timeline, and you have to explicitly gain the permissions from the user beforehand, meaning the user has to be prompted with a dialog asking for all the permissions you need. People do not like to give extensive permissions, so be very careful with that.

Page Access Tokens on the other hand originate from your Facebook page and the attached app and can deliver you insights about users without prompting them first for permissions. In fact you can use these tokens to start conversations – however only if the user initiated the conversation, and of course there are Facebook guidelines around what type of content can be messaged, in particular for push (chat) messaging.
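
Replying (or pushing within the allowed window) goes through the Send API. A small sketch using the request module, where the recipient id is the sender id from the webhook event:

// npm install request
var request = require('request');

function sendTextMessage(recipientId, text) {
  request({
    url: 'https://graph.facebook.com/v2.6/me/messages',
    qs: { access_token: process.env.PAGE_ACCESS_TOKEN },
    method: 'POST',
    json: {
      recipient: { id: recipientId },
      message: { text: text }
    }
  }, function (error, response, body) {
    if (error || (body && body.error)) {
      console.error('Send API error:', error || body.error);
    }
  });
}

// e.g. sendTextMessage('925740457553273', 'bla bla bla toller chat');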

 

Nice, but I want to know who is talking with my bot

Well, you can get some user data just using a page access token:

https://graph.facebook.com/v2.6/<USER_ID>?access_token=PAGE_ACCESS_TOKEN.

The catch here is, you have to use the recipient id gained when the conversation started and apply a valid Page Access Token as the access_token parameter. Only in that combination will it work.

The returned data is limited and looks like this:

{ 
  "first_name": "Zorro", 
  "last_name": "Montana", 
  "profile_pic": "https://scontent.xx.fbcdn.net/v/t1.0-1/p200x200/13606582_10154237760705900_4094680234132927774_n.jpg?oh=d2fc650700705c015a63e601e645984&oe=5842946C", 
  "locale": "en_US", 
  "timezone": 2, 
  "gender": "male"
}

Clearly, the downside there is that you do not get as much data as you could get via the Graph User API.

So in short, you will have to work with landing pages / deep links in order to interact with the Graph User API (and/or other resources at hands) in order to learn more about your chat bot consumers.

How to manage the conversation

In fact the given chat samples on their own will not help you much in having a halfway decent interaction with your consumers. The reason for that is that your bot needs to make sense of the language and the intent of messages thrown at it, and then act based on that intent. Services like Microsoft’s luis.ai help to translate user input into intents you can work with.

A typical luis.ai link looks like this:

https://api.projectoxford.ai/luis/v1/application?id=abc&subscription-key=def&q=where%20can%20I%20get%20pizza%3F

Id is the id of your luis.ai app instance, subscription-key is tied to your outlook.com account, and the query is the URL-encoded payload containing the “user query”, that is the question you want to pose to luis.ai.

For the sample query the response would look like this, using pre-built sets of intents:

{ 

  "query": "where can I get pizza?", 
  "intents": [ 
    { 
      "intent": "builtin.intent.places.find_place" 
    } 
  ], 
  "entities": [ 
    { 
      "entity": "pizza", 
      "type": "builtin.places.product" 
    } 
  ] 

}
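
Gluing that into the bot is then one HTTP call plus picking the top intent. A rough sketch, with the app id and key as placeholders and the response shape as shown above:

// npm install request
var request = require('request');

function detectIntent(userText, callback) {
  request({
    url: 'https://api.projectoxford.ai/luis/v1/application',
    qs: {
      id: '<your-luis-app-id>',
      'subscription-key': '<your-subscription-key>',
      q: userText // the query string gets URL-encoded for you
    },
    json: true
  }, function (error, response, body) {
    if (error || !body || !body.intents || !body.intents.length) {
      return callback(error || new Error('no intent detected'), null);
    }
    // e.g. "builtin.intent.places.find_place" for "where can I get pizza?"
    callback(null, body.intents[0].intent);
  });
}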

The tedious part here is that you need to train the system and specify the entities to differentiate between; however, luis.ai is well done and makes it fairly easy to deal with that part. Keep in mind luis.ai is very young and plenty of changes are expected, so be prepared for code changes.

And the chatbot is still not responsive (to everybody)!

When you utilize a chatbot for the first time it will only respond to your own inquiries. To make it work for the rest of the world you have to “submit it for review”. To be more precise, you need to submit its capabilities for review.

In particular for the pages_messaging_subscriptions application permission this is kind of intense – you have to provide a screencast of why and how you want to message people proactively after the first conversation. Oh, and your users need to have interacted with your bot within the last 24 hours before you deliver a message, otherwise your messages will no longer be accepted. Facebook wants to make very sure you do not annoy anybody, and you have to keep these constraints in mind, while new ones might pop up anytime.

Important: You can find out about the status of the pending review via the developer console. After the application permissions were granted you still have to set your app to “public” via the Developer console -> App Review. Only then will it be possible for others to chat with your bot at all.

Have fun creating your own chatbots, and what I would love to see are some well-made chatbot (text) adventures! Go for it!

 

Database as a Service – with CRUD APIs on top — 22. August 2016

Everybody loves databases, and non-relational (document-oriented or key-value) databases are immensely popular these days. Easy to use and powerful for the most common usage scenarios, MongoDB, Cassandra and the like thrive for good reason. Managing these on your own can be a headache nevertheless, and then there is still the necessity to deal with API management. The good news is, there are some nice services out there that will take care of all your data persistence needs and come with nice features like geospatial or graph search, “auto” creation of an HTML5 front-end and more. (The bad news is that you have to be willing to give up control over the persistence nitty-gritty and the APIs on top if you want to take advantage. Let’s focus on the bright side for now, shall we.)

 

Restdb.io

The nice thing about restdb.io is the extra-functionality that comes with it, quote:

Collections (tables), fields and relations are declared without any coding. restdb.io automatically provides data forms, navigation, search and a (CORS) REST API. 

Add / render HTML/javascript pages with your own functionality. (Example)

So not only do you get support with APIs and rendering (dynamic) HTML based on your data, you can also tweak and tune the output to create full applications based on these functions if you want.

All in all, restdb.io is a good companion if you want to store data (temporarily) on the go and focus on developing a web or mobile front-end, simply because setting up data models and making them operational is as straightforward as it can be.
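
For illustration, talking to a collection is plain REST with an API key header. The database name, collection and key below are made up, so check the restdb.io docs for the exact details of your instance:

# create a record in a hypothetical "leads" collection
curl -X POST "https://mydb-a1b2.restdb.io/rest/leads" \
     -H "Content-Type: application/json" \
     -H "x-apikey: <your-api-key>" \
     -d '{"name": "Jane Doe", "email": "jane@example.com"}'

# read it back
curl "https://mydb-a1b2.restdb.io/rest/leads" -H "x-apikey: <your-api-key>"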

Note that the number of objects you can store with the free tier is highly limited, however more than enough for small side projects or proof of concepts: https://restdb.io/pricing/

 

Are these services the right utility for each and every solution? Certainly not. Nevertheless, they can save you a lot of time and effort, and that could very well justify the price. On the other hand, if speed is not your main concern or you are already well prepared to deal with all the aspects these tools cover, then, well, keep them in mind for your next hackathon participation.

Compare serverless computation pricing the easy way — 6. August 2016