Channel: Badoo Tech Blog

Fixed Headers in Mobile Webapps


When building mobile web apps there is often a desire to try and make it look and feel as “native” as possible. Whether it is the styling of components, the use of transitions, or just general speed and performance, actually achieving these things can often be much more difficult than it first seems. This article explains some of the challenges we faced when trying to implement one of these “native” features - a fixed header.

Normally you would expect fixed headers to work by setting their CSS to position: fixed;, which works in most cases except when you need to type something into a form element. Almost all mobile browsers push your page up to make room for the keyboard and keep your text element on screen, pushing your fixed header out of the way. This is a bad user experience, because headers in mobile applications are the entry points for most user interactions.

When developing the chat page for our Hot or Not application for iOS we ran into the same problem. Here is an example of what it looks like:

Fail

Demo

Make sure you are viewing the demo page in a mobile browser to actually see it work

Before we start to fix this problem there are two things we need to do first.

Hide the browser address bar

If a user is visiting your website from a mobile browser, you can hide the address bar in certain cases to give yourself more screen space. There are plenty of good solutions on the internet that will help you do this. For the sake of our demo we will use the snippet below.

var hideAddressbar = function() {
    document.body.style.minHeight = '1000px';
    window.scrollTo(0, 0);
    document.body.style.minHeight = window.innerHeight + 'px';
};

window.addEventListener('load', function() {
    hideAddressbar();
}, false);
window.addEventListener('touchstart', hideAddressbar);

Remove the user interaction delay on mobile browsers

In short, mobile browsers have a noticeable lag (~300 milliseconds) between when you tap on something and an action being taken for that tap. That’s because the browser is waiting to see if you wanted to do a double tap. This wouldn’t be an issue if mobile browsers respected the user-scalable and device-width properties better. Chromium has a ticket and a patch for it already.

In the meantime we have to fix this ourselves, because if you let the browser delay you for that long, it’s already too late and your page will have begun its scroll animation.

To fix the delay I recommend using FastClick; however, be aware that there is a bug in the library which makes it fail sometimes on input elements. There is a ticket for that here.

As well as removing the delay for click events, FastClick also speeds up focus events on form fields. The snippet below is a very simplified version of what’s going on inside FastClick.

document.querySelector('input').addEventListener('touchend', function() {
    this.focus();
});

Now, to prevent the page from scrolling, we listen to the window.onscroll event and set the scroll position back to 0 every time it fires, preventing the browser from moving the page.

var preventScrolling = function() {
    window.onscroll = function() {
        // prevent further scrolling
        document.body.scrollTop = 0;
    };
    // set the scroll to 0 initially
    document.body.scrollTop = 0;
};

var allowScrolling = function() {
    window.onscroll = null;
};

window.addEventListener('focus', preventScrolling, true);
window.addEventListener('blur', allowScrolling, true);

So on the focus of an input element we prevent any kind of page scrolling, and re-enable it when the user has finished typing. Here is how it looks now:

Fail again

Demo

Well, that wasn’t very helpful, was it? The keyboard completely hides our input when it comes up, because we didn’t let the page scroll. To fix that problem we simply have to measure the amount by which our page scrolled when the input came into focus. That tells us the height of the keyboard, so we can move our input element into view manually.

var focusHeight = 0;

var preventScrolling = function() {
    // Android devices don't deploy their keyboards immediately,
    // so we check every 100ms until they do
    if (document.body.scrollTop < 1) {
        setTimeout(function() {
            preventScrolling();
        }, 100);
        return;
    }
    focusHeight = document.body.scrollTop;
    window.onscroll = function() {
        document.body.scrollTop = 0;
    };
    document.body.scrollTop = 0;
    // move the input into the view
    input.style.marginBottom = focusHeight + 'px';
};

// Allow page scrolling
var allowScrolling = function() {
    window.onscroll = null;
    input.style.marginBottom = '0px';
};

document.body.addEventListener('focus', preventScrolling, true);
document.body.addEventListener('blur', allowScrolling, true);

Let’s see how that looks now:

Great Success

Demo

Great success! We now have a fixed header.


Scaling beyond one developer


Yesterday I gave a talk to the London iPhone Developer Group (#lidg) about the lessons we’ve learnt in the iOS team at Badoo as we’ve scaled the development team.

Hopefully it was interesting to other developers; I certainly enjoyed talking to many of you afterwards. We’re all facing the same challenges, and I think it’s great to share these ‘war stories’. And, I’ll be looking to share more in the future.

I’m attaching the slides here, for those who are interested.

AIDA - Badoo's journey into Continuous Integration

AIDA

It’s hardly news to anyone that product development and testing involves a lot of boring routine work, which can lead to human error. To avoid complications stemming from this, we use AIDA.

AIDA (Automated Interactive Deploy Assistant) is a utility that automatically performs many of the processes in Git, TeamCity and JIRA.

In this post, we focus on how we used AIDA to automate multiple workflows and achieve continuous integration.

We’ll start by looking at the version control system (VCS) we use here at Badoo, specifically how Git is used to automate creation of release branches, and their subsequent merging. Then we’ll discuss AIDA’s major contribution to both JIRA integration and TeamCity.

Git flow

The Badoo Team uses Git as a version control system. Our model ensures each task is developed and tested in a separate branch. The branch name consists of the ticket number in JIRA and a description of the problem.

BFG-9000_All_developers_should_be_given_a_years_holiday_(paid)

A release is built and tested in its own branch, which is then merged with the branches for completed issues. We deploy code to production servers twice a day, so two release branches are created daily.

Names of release branches are simple:

build_{name of the component}_{release date}_{time}

This structure means the team immediately knows the date and time of a release from its branch name. The hooks that prevent changes being made to a release branch use the same time-stamp. For example, developers are prevented from adding a task to a release branch two hours before deploy to production servers. Without such restrictions, the QA team wouldn’t have time to check all the tasks on the release branch.
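To make the naming scheme concrete, here is a small sketch (not Badoo’s actual tooling, and the exact date/time separators are an assumption) of generating such a branch name:

```javascript
// Hypothetical helper: build a release branch name in the
// build_{name of the component}_{release date}_{time} format described above.
function releaseBranchName(component, date) {
    var pad = function (n) { return (n < 10 ? '0' : '') + n; };
    var day = date.getFullYear() + '.' + pad(date.getMonth() + 1) + '.' + pad(date.getDate());
    var time = pad(date.getHours()) + '.' + pad(date.getMinutes());
    return 'build_' + component + '_' + day + '_' + time;
}

// e.g. releaseBranchName('mobile', new Date(2013, 9, 15, 14, 30))
//      → "build_mobile_2013.10.15_14.30"
```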

Our master branch is a copy of production. As soon as code from a ‘build’ branch is deployed to the servers, it is merged into the master branch. Devs also deploy hot fixes to production servers from this branch.

The scheme we use is shown below:

Six stages of testing

  • Code Review: Every task undergoes code review. Each department’s reviewer is chosen according to varying criteria; i.e. it may be the person with the most experience or the development team leader.
  • Unit Tests: Unit tests are run in each branch. They run automatically when the reviewer changes the ticket status to ‘Resolved’. After performing the tests (22,000 tests in 3-4 minutes), AIDA posts a report in JIRA in table form.

  • Devel: The first stage of manual testing. Each task is checked in a development environment with test databases.
  • Shot: The task is checked on the battlefield. A shot is a folder on the server containing a clone of the branch repository with a configured Nginx, and it has its own top-level domain: -.shot. At this stage, translations into the major languages are generated, and the issue is tested against the production environment (databases, scripts, services).
  • Staging: The release is tested in the production environment, translated into all languages, and fully monitored. All tasks included in the build are re-tested.
  • Production: If the task is very important, it is checked again in the production environment.

If a task in the release contains an error, we remove its branch from the release branch with Git rebase. We use a special script that performs this operation in semi-automatic mode.

Note:

We don’t use Git revert in release branches. If we removed a task from the release branch with Git revert, then after the release was merged into master, the developer of the problematic task would have to revert the revert commit in order to get his or her changes back.

AIDA and Git

Due to the sheer number of branches in the described model, a lot of issues arise concerning merge, release and code-control. These can be solved automatically:

  • Automatic creation of a new release - first of all, AIDA creates a ‘release’ branch. AIDA tracks changes in the master branch, and once the previous release branch is merged into the master, a new release branch is created.

  • Automatic assembly of the release - Every minute, JIRA tasks that have been resolved and tested are merged into the release branch (with the exception of tasks specifically marked in the JIRA flow). In case of a conflict, the developer and release engineer are notified, and the task is sent back to the developer.

  • Release automatically kept up to date with master - Since the master branch is a copy of the production code, and developers add hot fixes to it via the special tool Deploy Dashboard, the master branch needs to be continuously merged into the release branch. AIDA performs this merge whenever new changes land in the master branch. A message appears if a conflict arises.

  • If the developer adds a change to the task branch after a merge with a release branch, this will be caught and AIDA will report it.

Deploy Dashboard

For hot fixes to production servers, we use patches. Applying a patch to the master branch and release branch takes place in semi-automatic mode. For this we use our tool Deploy Dashboard.

Deploy Dashboard is a special web interface for data collection, monitoring and recording, as well as formalisation of patches with a full list of information, and automatic notification.

If we need to fix something in production, the developer creates and attaches the patch. Then the release engineer checks and applies it to the master branch in the central repository. Following this the patch will deploy to our production servers.

AIDA and JIRA

To monitor development, testing and the formation of a release we use JIRA. The workflow is planned in detail and fully formalised. Some work in the bug-tracking system is performed by AIDA. Basically, AIDA moves tasks through the workflow and attaches particular information to them.

Here are a few examples:

  • The dev makes a change to code in a central repository. Status of the ticket is automatically changed from ‘Open’ to ‘In Progress’.
  • If the ticket tester creates a Shot (code deploy into a single production environment), the task status is automatically changed to ‘In Shot’.
  • The ticket is reopened automatically when the task is rolled back from the release branch.
  • If changes to the task branch happen after the task has been resolved, the issue is returned to review mode.
  • When a task branch is pushed to the central repository for the first time, the branch name is registered in the corresponding JIRA ticket.
  • After running unit tests for the branch, a table is displayed containing the results.
  • AIDA monitors status in JIRA and sends the issue back to the developer when there are problems with merging.

AIDA tells us about all actions that have been performed with tasks.

This automation greatly simplifies workflow and eliminates routine activities.

Continuous integration

Earlier, we had wanted to get rid of the routine activities related to assembling and automatically deploying to a test environment, but we were stuck manually entering the name of each new release branch into the project’s CI server. Now TeamCity catches changes in all branches matching a given mask (in this case the mask build_*) and starts the build.

Consider the process of automatic assembly and deploy in the test environment:

  1. The project is set up in TeamCity for a branch with a mask build_*.

  2. If there’s a new change in the release branch, TeamCity starts automatic build.

  3. If successful, the script will start deploying to the test servers.

  4. With a rapid smoke test (using a simple curl request), AIDA checks the release in the test environment.

  5. If the tests don’t pass, the release version is marked as bad and is rolled back to the previous (good) version of the release.

The entire process takes three minutes, and these tests reveal only fatal errors.

All unit, automated and functional tests run in parallel with this, so that the tester can see the task in the test environment as soon as possible.

In Summary

To review what processes are automated using AIDA:

  1. AIDA works with Git, creating branches, merging them and warning us when something goes wrong.

  2. It starts automated tests and provides a convenient report in JIRA.

  3. AIDA removes tasks from a release in semi-automatic mode.

  4. It interacts with JIRA, automatically changing status and updating the information in tasks.

  5. It applies hot-fix patches in semi-automatic mode via a special web interface.

  6. It works with TeamCity, running scripts, tests and deploys to the test environment.

If you are interested in reading a more detailed report on each type of automation, please comment and we’ll be happy to continue our series of articles on this subject.

P.S. Create good assistants, which won’t let you down when you’re in a pinch!

Type Checking in JavaScript


Here at Badoo we write a lot of JavaScript, our mobile web app contains about 60,000 lines of the stuff, and maintaining that much code can be challenging. One of the trickier aspects of working with a client side JavaScript application of this scale is avoiding exceptions. In this post I want to discuss a particular type of exception that you have probably seen a few times - a TypeError.

As the MDN link above explains:

“A TypeError is thrown when an operand or argument passed to a function is incompatible with the type expected by that operator or function” - MDN

So to avoid TypeErrors we need to check that the values we pass into functions are correct, and that any code we write checks the validity of an operand before using an operator on it. For example, the . operator is not compatible with null or undefined, and the instanceof operator is not compatible with anything that isn’t a function. Using these operators on an incompatible operand will throw a TypeError. If you are coming from a statically typed language like Java, where you normally don’t need to worry about things like this, this may seem totally horrible, in which case you might want to consider using a “compile to JavaScript” language that has static typing, for example Dart or TypeScript. If however you quite like writing JavaScript, or already have a large JavaScript code base, all is not lost: performing this type checking does not need to be painful, and it can also have the pleasant side effect of helping others to understand your code.
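As a quick illustration (this snippet is mine, not from the app code), both incompatibilities can be observed directly:

```javascript
var user = null;
var dotError = null;
try {
    user.name; // the '.' operator on null
} catch (e) {
    dotError = e;
}

var instanceofError = null;
try {
    ({}) instanceof 'not a function'; // instanceof needs a function on the right
} catch (e) {
    instanceofError = e;
}

console.log(dotError instanceof TypeError);        // true
console.log(instanceofError instanceof TypeError); // true
```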

Let’s start by looking at a fairly straightforward example of getting some data from the server, performing some operations on that data, and then using it to render some HTML.

Api.get('/conversations', function(conversations) {
    var intros = conversations.map(function(c) {
        var name = c.theirName;
        var mostRecent = c.messages[0].text.substring(0, 30);
        return name + ': ' + mostRecent;
    });
    App.renderMessages(intros);
});

The first thing to note is that from looking at this code we don’t actually know what conversations is supposed to be. We could assume that, since it’s obviously expected to have a map function, it should be an array, but assumptions are bad, and in reality it could be anything that implements a map method. The function passed to map makes a lot of assumptions about the c variable. If any of those assumptions are wrong then a TypeError will be thrown and renderMessages() will never be called.
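To see how easily this goes wrong, here is a standalone sketch (the data is made up for illustration) where a single conversation without messages breaks the whole render:

```javascript
var conversations = [
    { theirName: 'Sam', messages: [{ text: 'Hi there' }] },
    { theirName: 'Alex', messages: [] } // no messages yet
];

var caught = null;
try {
    conversations.map(function (c) {
        // c.messages[0] is undefined for Alex, so reading .text throws
        return c.theirName + ': ' + c.messages[0].text.substring(0, 30);
    });
} catch (e) {
    caught = e;
}

console.log(caught instanceof TypeError); // true
```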

So how can we go about checking the validity of types in this example? First, let’s look at the different methods of checking for types in JavaScript.

typeof

The typeof operator returns a string indicating the type of the operand, but the types it returns are very limited. For example the following all return “object”

typeof {};
typeof [];
typeof null;
typeof document.createElement('div');
typeof /abcd/;

instanceof

The instanceof operator is used to determine if an object’s prototype chain contains the prototype property of a given constructor.

[] instanceof Array; // true

var Foo = function() {};
new Foo() instanceof Foo; // true

Although this will work, using instanceof for checking the type of a native object is not a great idea as it does not work for primitive values.

'a' instanceof String; // false
5 instanceof Number; // false
true instanceof Boolean; // false

Object.prototype.toString

The toString method on Object.prototype is used by many JavaScript frameworks to infer type and this is basically because the spec for this method is very clear and has been implemented consistently across all browsers. Point 15.2.4.2 of the ECMA-262 spec says:

  • If the this value is undefined return "[object Undefined]".
  • If the this value is null, return "[object Null]".
  • Let O be the result of calling ToObject passing the this value as the argument.
  • Let class be the value of the [[Class]] internal property of O.
  • Return the String value that is the result of concatenating the three Strings "[object ", class, and "]".

So basically this method will always return a String in the form “[object Foo]” where Foo is going to be “Null”, “Undefined”, or the internal Class used to create this. By using the call method to change the this value and a simple regular expression to parse the result we can get a string representing the type of anything.

var type = function(o) {
    var s = Object.prototype.toString.call(o);
    return s.match(/\[object (.*?)\]/)[1].toLowerCase();
};

type({});         // "object"
type([]);         // "array"
type(5);          // "number"
type(null);       // "null"
type();           // "undefined"
type(/abcd/);     // "regexp"
type(new Date()); // "date"

So that must be problem solved, right? Sadly, not quite yet. There are still a few instances where this method will return values other than the ones we would expect.

type(NaN);// "number"type(document.body);// "htmlbodyelement"

Both of these cases return values that we probably wouldn’t expect. In the case of NaN it returns "number" because technically NaN is a type of number, although in nearly all cases we want to know whether something is a usable number, not whether it is NaN. The internal class used to implement the <body> element is HTMLBodyElement (at least in Chrome and Firefox), and there are specific classes for every element. In most cases we just want to know whether something is an element or not; if we then care about the tag name of that element, we can use the tagName property to retrieve it. We can modify our existing method to handle these cases.

var type = function(o) {
    // handle null in old IE
    if (o === null) {
        return 'null';
    }
    // handle DOM elements
    if (o && (o.nodeType === 1 || o.nodeType === 9)) {
        return 'element';
    }
    var s = Object.prototype.toString.call(o);
    var type = s.match(/\[object (.*?)\]/)[1].toLowerCase();
    // handle NaN and Infinity
    if (type === 'number') {
        if (isNaN(o)) {
            return 'nan';
        }
        if (!isFinite(o)) {
            return 'infinity';
        }
    }
    return type;
};

So now that we have a method that will return the correct type for all the things we are interested in, we can improve the original example to ensure that we don’t get any TypeErrors.

Api.get('/conversations', function(conversations) {
    // anyone reading this now knows
    // that conversations should be an array
    if (type(conversations) !== 'array') {
        App.renderMessages([]);
        return;
    }
    var intros = conversations.map(function(c) {
        if (type(c) !== 'object') {
            return '';
        }
        var name = type(c.theirName) === 'string' ? c.theirName : '';
        var mostRecent = '';
        if (type(c.messages) === 'array' &&
            type(c.messages[0]) === 'object' &&
            type(c.messages[0].text) === 'string') {
            mostRecent = c.messages[0].text.substring(0, 30);
        }
        return name + ': ' + mostRecent;
    });
    // much more likely to make it here now
    App.renderMessages(intros);
});

Obviously there is no getting away from the fact that we have had to add quite a lot of additional code to avoid the risk of TypeErrors, but at Badoo we would always rather send a few extra bytes of JavaScript down the wire if it means our application is more stable.

Finally, the rather obvious downside of the type method is that it requires checking the return value against a string every time. This is easily improved though. We can create an API similar to Underscore / LoDash / jQuery by doing the following:

['Null', 'Undefined', 'Object', 'Array', 'String', 'Number', 'Boolean',
 'Function', 'RegExp', 'Element', 'NaN', 'Infinity'].forEach(function(t) {
    type['is' + t] = function(o) {
        return type(o) === t.toLowerCase();
    };
});

// examples:
type.isObject({}); // true
type.isNumber(NaN); // false
type.isElement(document.createElement('div')); // true
type.isRegExp(/abc/); // true

This is the approach we take to type checking in JavaScript in our mobile web application and we have found it makes code easier to read and less likely to fail. The code for the type method explained in this post is available as a gist.

JSConfEU 2013


A few of us here at Badoo were lucky enough to travel to Berlin in October for JSConfEU. It was a fantastic conference, the quality of the talks was incredibly high covering topics from the future of JavaScript, to why sometimes you need to draw animals. There were so many great talks, but I just want to give a quick overview of the ones that those of us who went enjoyed the most.

Nat Duca: Advanced Jank Busting in Chrome

When trying to get to the bottom of rendering issues we often turn to the Timeline panel in the Chrome dev tools, which is great for showing forced layouts or paints that are taking a long time. However Nat Duca gave a whistlestop tour of a relatively new feature in Chrome called Tracing which allows you to see right into the heart of the rendering process. It is available by visiting chrome://tracing in the browser, and it initially looks a bit underwhelming but don’t be fooled, this is an incredibly powerful feature. In Nat’s talk he is using Chrome Canary, but I think most of the new features he discusses are available in Chrome 30.

Addy Osmani: Object.observe()

Many popular JavaScript frameworks around at the moment have data binding built in, and each implements it in a slightly different way, but all currently have to resort to what Addy Osmani refers to as “dirty checking”. This is because there has historically been no way to “watch” an object in JavaScript and be notified when it changes, but that is changing with the introduction of Object.observe(). Although currently only available in Chrome Canary, it should be landing in Chrome soon. Addy’s talk contains some very detailed examples of how Object.observe() could be used to improve the data binding implementations in JS frameworks.

Forbes Lindsay: Promises and Generators

I think that the introduction of Promises into JavaScript has to be one of the biggest single improvements to the language. The asynchronous nature of many operations leads many developers into a callback hell that makes code hard to read and error prone. Forbes’s talk goes beyond Promises though and talks about generator functions and the yield keyword and how when all these things are brought together we can bring beauty and elegance to asynchronous JS code.

Bartek Szopka: Everything you never wanted to know about JavaScript numbers

This is an amazing delve into numbers in JavaScript and how they work. My personal favourite is at 22.00 where NaN is explained, although this whole talk is numerical gold. Even if you are not dealing with complicated maths day to day I would still recommend taking half an hour to see what’s going on under the hood.

Alex Feyerke: I have a Dreamcode: Build Apps, not Backends

Hood.ie recently published a great blog post about “offline first” which has had lots of positive feedback from the web development community. Essentially it is an extension of progressive enhancement, in that you build your application with the assumption that the user will be offline most of the time. Alex Feyerke from Hood.ie made a compelling case at JSConfEU for a backend-as-a-service style architecture in which the web developer doesn’t need to worry about the implementation of registering users or saving their data. The API they have put together looks elegant and I think this could be the future of prototyping web applications.

Brendan Eich: JS Responsibilities

If you can understand more than 50% of this talk you deserve a big shiny medal. It’s really excellent to see that the creator (if you will, father) of JavaScript is still so passionate about pushing it forward, and also to see Doom running inside of Unreal Tournament running in Firefox (27 mins in).

I couldn’t possibly list all the talks we enjoyed here, but they are all available on JSConfEU’s YouTube channel so go watch them!

5 Advanced Javascript and Web Debugging Techniques You Should Know About


In this article we will discuss 5 advanced techniques which web developers can use to reduce the time they spend debugging and squash challenging bugs by using new tools available to us and taking advantage of the new features offered by debuggers.

  1. Weinre
  2. DOM Breakpoints
  3. The ‘debugger’ statement
  4. Hooking into native methods
  5. Mapping remote scripts locally

Weinre

According to the official description, weinre stands for web inspector remote. It is a debugger for web pages, like Firebug (for Firefox) and Web Inspector (for WebKit-based browsers), except it’s designed to work remotely and, in particular, to allow you to debug web pages on a mobile device such as a phone.

Weinre

Weinre essentially allows you to remotely debug webpages on devices and browsers which don’t come with native debugging support. It aims to replicate the UI of Chrome Developer Tools and provide most of its functionality. This tool is extremely handy for debugging DOM/CSS issues, but it also works really well for debugging JavaScript.

Installation

$ sudo npm -g install weinre

Weinre can be installed using npm, or you can download binary packages from here.

After installation, just run the command weinre in your terminal and it will start up the weinre server on its default port, 8080. You can customize the port if you need to.

Next, navigate to your_hostname:8080, where you will have two options to inject weinre into the page you want to debug:

  • Copy the script block and paste it in your page’s html
  • Add the bookmarklet on your phone to allow weinre to run on any site.

Once you do that, you can debug any page on any browser or device as if you were using the Chrome Devtools on it! It does have some limitations, however. Because weinre is simply a script injector, it can’t provide you with the ability to put breakpoints inside your JavaScript code. But the console in weinre is really good for seeing JavaScript logs and doing other debugging tasks.

Note: If you don’t want the complexity of setting up weinre, you can also use a remotely hosted version of it at http://debug.phonegap.com/

Future possibilities

Using js.js (a JavaScript interpreter in JavaScript) combined with weinre, it’s possible to enable true JavaScript debugging inside any environment/device with full breakpoint support. :-)


DOM Breakpoints

A DOM breakpoint is a feature provided by Firebug and Chrome Devtools that allows you to pause your script execution as soon as a certain node in the DOM is modified.

The benefit of using DOM breakpoints is that, because of the asynchronous nature of JavaScript, it’s sometimes easier to know when a DOM tree will change than to set breakpoints at every possible location in your code which might modify it.

To use a DOM breakpoint:

  • Go to the elements view in your debugger
  • Right click on the node you want to break on modification
  • Select the desired break action

Weinre

Note: In Firebug you can find all the breakpoints in the Script>Breakpoints tab. In Chrome Devtools you can find them in the Elements>DOM Breakpoints tab.


The ‘debugger’ statement

The debugger statement allows you to pause JavaScript execution at arbitrary points in your code, provided your debugger is open at that moment.

This can be extremely handy because it lets you strategically put breakpoints in your code that trigger only when certain conditions are met. This is much easier to pull off than using conditional breakpoints.

To use it, all you have to do is put the statement inside your JavaScript code where you want the break to happen.

if (waldo) {
    debugger;
}

Now, with your console open, whenever the JavaScript interpreter hits that statement it will pause your script execution. Just don’t release it in your production code :)

Note: If you didn’t know about conditional breakpoints, here is a quick overview on how to use breakpoints in Chrome DevTools.


Hooking into native methods

Because the browser and window JavaScript methods aren’t protected, you can hook into them to add your own functionality or debugging code. This technique is really useful when you know what problem is occurring but can’t track down its source, or if you want to mock some JavaScript methods.

Let’s take an example. Suppose you notice an unexpected attribute being modified on a DOM element. You know the attribute or its value, but you find it hard to track down the line of code which does it.

In that case you can hook into the setAttribute method and add debug code to it to find out the problem, like so:

var oldFn = Element.prototype.setAttribute;

Element.prototype.setAttribute = function(attr, value) {
    if (value === "the_droids_you_are_looking_for") {
        debugger;
    }
    oldFn.call(this, attr, value);
};

Now whenever an element’s attribute is modified to the value you want, the script will pause execution and you can find the cause of the problem in the call stack.

Note: This is generally how Prototype and SinonJS work. But it’s not guaranteed to work in all browsers; for example, in iOS Safari in Private mode you can’t access or attempt to modify the localStorage methods.


Mapping remote scripts locally

This method simply allows you to proxy any remote script URL to a local file on your disk, where you can modify the file as you wish and have it act as if it were the file served from the source. This can come in really handy if you are debugging a problem where the source is minified and/or you don’t have the ability to modify the file (think production environments).

Note: This does require you to download and install a paid third party app on your machine. There are free alternatives to this method but they require a manual setup of proxies and http servers.

How to do it:

  • Download Charles Proxy which is a fantastic proxy tool for debugging network connections
  • Enable Charles for either the whole system or your browser
  • Download the remote file that you wish to debug and save it locally
  • Unminify the file and add any debug code that you wish to
  • In Charles: Tools > Map Local
  • Choose the local file, then map the remote file to it as in the screenshot below. You can even map entire hostnames.
  • Reload the page
  • The remote URL is now read from your locally saved file.

Charles

Benefits of mapping remote files locally

  • Allows you to debug production code when sourcemaps aren’t available.
  • Allows you to actually make modifications to the code where it wasn’t previously possible.

Note: Chrome developer tools has a file mapping system as well, but it currently only works one way. It allows you to edit files in the devtools, save them to disk, and have the changes reflected for that session. But as soon as you reload the page, the files are fetched from the server rather than read from disk, because it assumes your save actions will have modified the source file. Two-way mapping would be a great feature to have in devtools in the future.


If you had any problems following something in the article please drop a comment.

The technology of billing - how we do it at Badoo


There are many ways to monetize your project, but all of them have one thing in common – the transfer of money from the user to a company account. In this article we will discuss how this process works at Badoo.

What do we mean by ‘billing’?

Billing for us concerns all things related to the transfer of money. For example: pricing, payment pages and payment processing, the rendering of services and promo campaigns, as well as the monitoring of all these things.

In the beginning, as with most startups, we had no paid services at all. The first steps towards monetization took place in 2008 (well after the official site launch in 2006.) We selected France as our guinea-pig and the only available payment method at that time worked via SMS. For payment processing we used a file system. Each incoming request was put into a file and moved between directories by bash-scripts, meaning its status changed during processing. A database was used only for registering successful transactions. This worked pretty well for us, but after a year this system became difficult to maintain and we decided to switch to using just a database.

This new system had to be re-worked quickly, as up until then we had been accepting payments in only a limited number of countries. But it had one weak point: it was designed solely for SMS payments. To this day we still have some odd leftovers of this system in our database structure, such as the MSISDN (mobile phone number) and short code (short number for premium SMS) fields in a table of successfully processed payments.

Now we receive payments from countries all over the world. At any given second at least a few users are trying to buy something on Badoo or through our mobile applications. Their locations are represented in this “Earth at Night” visual:

Earth

We accept payments using more than 50 payment methods. The most popular are credit card, SMS and direct billing, and purchases via the Apple Store and Google Play.

Pay

Among them you can find such leftfield payment options as IP-billing (direct payments from your internet provider account), landline payments (you have to call from your landline and confirm payment). Once we even received a payment via regular mail!

Letter

Credit card and bank payments

All payment systems have an API and work by accepting payments from their users. Such direct integrations work well if you have only a few of them and everything runs smoothly. But if you work with local payment systems it starts to become a problem. It is becoming harder and harder to support a lot of different APIs for several reasons: local laws and regulations are different, a popular local payment system provider may refuse to work with foreign clients, even signing a contract can draw out the process substantially. Despite the complexity of local payment methods though, adopting many of them has proven to be quite a profitable decision. An example of this is the Netherlands, which had not previously been a strong market for us. After we enabled a local payment system named iDeal, however, we started to take in 30-40% more profit.

Where there is demand, there’s usually someone ready to meet it. Many companies known as ‘payment gateways’ work as aggregators and unify popular payment systems – including country-specific ones – under one single API. Via such companies, it suffices to perform an integration only once, after which one gets access to many different payment systems around the world. Some of them even provide a fully customizable payment page where you can upload your own CSS & JS files, change images, texts and translations. You can make this page look like part of your site and even register it in a subdomain such as “payments.example.com”. Even tech-savvy users might not realise that they just made a payment on a third-party site.

Which is better to use? Direct integration or payment gateways? First of all it depends on the specific requirements of the business. In our company we use both types, because we want to work with many different payment gateways and sometimes make direct integrations with payment systems. Another important factor in making this decision is the quality of service provided by a payment system. Often payment gateways offer more convenient APIs, plus more stable and higher-quality service than the source payment system.

SMS payments

SMS payments are very different from other systems. In many countries they are under very strict control, especially in Europe. Local regulators or governments can make demands regarding all aspects of SMS payments, for example specifying the exact text sent via SMS or the appearance of the payment page. You have to monitor changes and apply them in time. Sometimes requirements can seem very strange; for example, in Belgium you must show the short code in white on black with the price nearby. You can see how this looks on our site below.

SMS

Also there are different types of SMS-billing: MO (Mobile Originated) and MT (Mobile Terminated). MO-billing is very easy to understand and implement. As soon as a user sends an SMS to our short number we receive money. MT is a bit more complicated. The main difference is that a user’s funds are deducted not at the moment he or she sends the SMS, but when a message from us is received notifying them that they are being charged. Through this method, we get the money only after receiving a delivery notification for this payment message.

The main goal of MT-billing is to add an additional check on our side before the user is charged, preventing errors that occur due to user-misspelled SMS texts. Using this method, the payment process consists of two phases: first the user initiates the payment, and second the payment is confirmed. In some countries the payment process for MT-billing follows one of these variants:

  • the user sends an SMS on short number, we receive it and check that the text is correct, etc. We send a free message with custom text, which the user has to answer, confirming the payment. After that we send a message that they have been charged
  • same as above, but instead of responding directly to the free message the user has to enter a PIN code from it on the Badoo site
  • the user enters their phone number on Badoo, we send a free message with a PIN. The user then enters the PIN code on Badoo, and after checking this, we send the payment message
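The two-phase flow above can be sketched as a small state machine. This is purely illustrative JavaScript, not Badoo’s actual (PHP) implementation; all names, the PIN format and the SMS text are assumptions.

```javascript
// Hypothetical sketch of the PIN-based MT-billing variant: request a PIN,
// confirm it, and treat the payment as complete only after the delivery
// notification for the chargeable message arrives.
function createMtPayment(sendSms) {
    var state = 'NEW';
    var pin = null;
    return {
        requestPin: function(msisdn) {
            if (state !== 'NEW') throw new Error('payment already started');
            pin = String(Math.floor(1000 + Math.random() * 9000));
            sendSms(msisdn, 'Your PIN: ' + pin); // the free message
            state = 'PIN_SENT';
        },
        confirmPin: function(userPin) {
            if (state !== 'PIN_SENT' || userPin !== pin) return false;
            state = 'CONFIRMED'; // the chargeable message can now be sent
            return true;
        },
        onDeliveryNotification: function() {
            // money is only considered received at this point
            if (state === 'CONFIRMED') state = 'PAID';
        },
        getState: function() { return state; }
    };
}
```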

For SMS payments we use only aggregators. Direct integrations with operators are not profitable, because you have to support a lot of contracts in many countries, which increasingly requires the involvement of accountants and lawyers.

Technical details

Badoo runs on PHP and MySQL, and for payment processing we use the same technologies. However, the billing application works on separate pools of servers. These are divided into groups, such as servers to process incoming requests (payment pages, notifications from aggregators, etc.), servers for background scripts, database servers, and special groups with increased security where we process credit card payments. For card payments, servers must be compliant with PCI DSS. Its security standards were developed in coordination with Visa, MasterCard, American Express, JCB and Discover for companies who process or store the personal information of their cardholders. The list of requirements which have to be met to use these systems is quite long.

As database servers we use two Percona MySQL servers working in master-master replication. All requests are processed via only one of them; the second is used for hot backup and other infrastructure duties, such as heavy analytical queries, monitoring queries and so forth.

The whole billing system can be divided into a few big parts:

  • Core - the base entities needed for payment processing such as Order, Payment and Subscription
  • Provider plugins - all provider-related functionality such as implementation of API and internal interfaces
  • Payment page - where you can choose a product and payment method

In order to integrate a new payment provider, we need to create a new plugin which is responsible for all communication between us and the payment gateway. Requests come in two types, depending on whether we initiate them (pull requests) or the payment provider does (push requests). The most popular protocol for pull requests is HTTP, either by itself or as a transport for JSON/XML. REST APIs, which have gained a certain degree of popularity recently, we haven’t encountered very often; only new companies, or companies who have reworked their API recently, offer them. Examples are the new PayPal API and the new payment system used by the UK’s GoCardless company. The second most popular transport for pull requests is SOAP. For push requests mostly HTTP is used (either pure or as transport), and SOAP only rarely. The only company that comes readily to mind that offers SOAP push notifications is the Russian payment system QIWI.
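To make the plugin idea concrete, here is a minimal sketch. Badoo’s billing is written in PHP; this JavaScript version and every name in it are hypothetical, showing only the shape of the idea: one plugin adapts a gateway’s pull and push formats to a single internal interface.

```javascript
// Illustrative only: a provider plugin unifies pull requests (we call the
// gateway) and push requests (the gateway notifies us) behind one interface.
function createProviderPlugin(transport) {
    return {
        // pull: initiate a payment via the gateway's HTTP/JSON API
        startPayment: function(order) {
            return transport.post('/payments', {
                orderId: order.id,
                amount: order.amount,
                currency: order.currency
            });
        },
        // push: translate the gateway's notification into our internal event
        handleNotification: function(raw) {
            return {
                orderId: raw.reference,
                status: raw.result === 'OK' ? 'success' : 'failure'
            };
        }
    };
}
```

Integrating another gateway then means writing another plugin with the same two methods, leaving the core untouched.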

After the programming part is finished, the testing process begins. We test everything several times in different environments: the test environment, in a shot (an internal domain with only one particular task and a working production environment), in a build (a pre-production version of the code which is ready to go live) and in the live environment. For more details about release management at Badoo please visit our blog: (http://techblog.badoo.com/blog/2013/10/16/aida-badoos-journey-into-continuous-integration/).

For billing tasks there are some peculiarities. We have to test not only our own code but how it interacts with third party systems. It’s nice if the payment provider offers their own sandbox which works the same as our production system, but if not we create stubs for them. These stubs emulate a real aggregator system and allow us to do manual and automatic testing. This is an example of a stub for one of our SMS providers.

Letter

After passing through the test environment we check how it will work with the real system, i.e. making real payments. For SMS payments, we often need to get approval from local regulators, which can take a few months. We don’t want to deploy semi-ready code to production, so as a solution we created a new type of environment: the external shot. This is our regular shot, a feature branch with one task, but accessible via an external sub-domain. For security reasons we create them only when needed. We send links to external shots to our partners and they can test changes at any time. It’s especially convenient when you work with partners in another hemisphere, where the time difference can be up to 12 hours!

Support and operation

After a new integration goes live we enter the stage of its support and operation. Technical support occupies about 60-70% of our work time.

Support

By support I mean primarily customer support. All easy cases are solved by the first line of support. Our employees know many different languages and can translate and attend to customer complaints quickly. So only very complicated cases end up on the desks of our team of developers.

The second component of support is bug fixing and making changes to current integrations. Bugs appear for multiple reasons. Of course the majority are a result of human error, i.e. when something is implemented in the wrong way. But sometimes they result from unclear documentation. For example, once we had to use a Skype chat with a developer of a new payment system instead of documentation. At other times a payment provider makes changes on their side and forgets to notify us. One more point of failure is third-party systems: since payment providers aggregate payment services, an error can occur not on their side, but on their partner’s side.

In order to solve such cases quickly we maintain detailed logs. These contain all communications between us and payment providers, all important events, errors during query processing and so on. Each query has its own unique identifier, through which we can find all rows in the logs and reconstruct the execution steps of a query. It’s especially helpful when we have to investigate cases that happened a few weeks or months ago.
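The idea of tagging every log row with the query’s identifier can be sketched like this (illustrative JavaScript; the field names are assumptions, not our actual log format):

```javascript
// Minimal sketch: every log row carries the query's unique identifier,
// so all rows for one payment can be found weeks or months later.
function createRequestLogger(requestId, write) {
    return function log(event, details) {
        write(JSON.stringify({
            requestId: requestId,         // the query's unique identifier
            time: new Date().toISOString(),
            event: event,
            details: details || null
        }));
    };
}
```

Grepping the logs for one `requestId` then yields the full history of that query in order.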

So that’s how billing is organized at Badoo! There are still many interesting topics we plan to explore in future, such as monitoring, PCI DSS certification, and re-working bank-card payments. If you have questions or suggestions for future articles, please leave a comment for us below.

Localising into 46 languages


Localisation done right will earn your app loyal users in new countries. Done badly, it becomes a nagging pain like half the apps on my computer trying to correct the spelling of localisation to localization. The purpose of localisation is not only to make your app available in other languages, but also to make the entire user experience feel like it was specifically designed with the local user in mind. Here I will share some of the lessons learned from making Badoo available in 46 languages, and point out some of the tricky bits you should pay attention to.

A brief intro

The process of making your service available in other languages consists of two parts, internationalisation and localisation.

Broadly speaking internationalisation, aka i18n, is the part where you take out all hard-coded strings from your code and replace them with reference keys. Once the strings have been translated, the reference keys will be used to fetch text in the requested language.

Localisation, aka l10n, is the part of actually adapting your content for different locales. The bulk of this will be translation, however you should also adapt non-text content for each market. For example in promotional pictures such as app store screenshots, use screenshots of the app in local language. Also, user names in screenshots should be names that will look familiar to people in the area, and people in photos should look like they are from that country or at least region.

That’s the theory. In practice the process is more complicated than that. Language translation aside, additional locale-specific conventions need to be adhered to for your app to make sense and feel completely native to its users in other locales.

Formats and units

There are some subtle but important differences in the formatting of dates and numbers that may have opposite meanings in different locales. A common example is dates.

03/07/2013

The date above can mean 3rd July or 7th March depending on the local conventions. This is a frequent source of confusion between UK and US where, despite both speaking English, the date formats are different. Do not assume that because two countries speak the same language, all will be understood or correctly interpreted.
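The ambiguity can be seen directly with JavaScript’s built-in Intl API (the outputs shown assume an environment with full locale data, such as a modern browser or Node.js with ICU):

```javascript
// The same calendar date rendered under US and UK conventions.
// Note: JavaScript Date months are zero-based, so 6 means July.
var date = new Date(2013, 6, 3); // 3rd July 2013

var us = new Intl.DateTimeFormat('en-US').format(date); // '7/3/2013'
var uk = new Intl.DateTimeFormat('en-GB').format(date); // '03/07/2013' with full locale data

console.log(us, uk);
```

Delegating the formatting to the locale, rather than hard-coding a pattern, sidesteps the whole class of bugs.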

The same goes for number formats.

1.000

The number above could be interpreted as either 1 or 1000 depending on which decimal point convention is used. For example, in Korea, a full stop (.) denotes a decimal sign, but in Germany a full stop is used as a thousands separator.
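The same Intl API handles number separators as well (the German output assumes full locale data):

```javascript
// One thousand, written with different separator conventions.
var german = new Intl.NumberFormat('de-DE').format(1000);  // '1.000' with full locale data
var english = new Intl.NumberFormat('en-US').format(1000); // '1,000'

console.log(german, english);
```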

If you use any kind of measurements you will also need to ensure you use units that are easily understood in the region you are targeting. For example you will probably want to use miles rather than kilometres to denote distances in the United States as they use the Imperial system. Also, if you are displaying a temperature to a user in continental Europe you should use Celsius, as Fahrenheit is unlikely to be understood.

Direction

While most languages are written from left to right, there are some notable exceptions such as Arabic and Hebrew that are written from right to left. Localising into these languages requires considerably more work than just translating. In most cases, the user interface is likely to be language direction specific and will require reworking to ensure your app retains its usability when direction is reversed.

Gender specific grammar and pluralisation

English is in some ways a simple language. It has no gender-specific grammatical rules and in most cases all you need to do to create a plural of a noun is stick an ‘s’ on the end. Other languages can be more complicated - often the endings of words will change depending on whether the actor in the sentence is male or female.

In some languages the plural form rules can be quite complex. For example, in Russian a different form may be used depending on the exact number of objects being counted. If there are between 2 and 4 objects, one plural form is used, while if there are more another is used. However, if the number ends in 1 the singular form is used, and numbers ending in 11 to 14 always take the second plural form. Like I said, complex.
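Those rules can be written down compactly. This JavaScript sketch follows the standard (CLDR) definition of the Russian plural categories described above:

```javascript
// Russian plural categories: 'one' for numbers ending in 1 (but not 11),
// 'few' for those ending in 2-4 (but not 12-14), 'many' for everything else.
function russianPluralForm(n) {
    var mod10 = n % 10;
    var mod100 = n % 100;
    if (mod10 === 1 && mod100 !== 11) return 'one';
    if (mod10 >= 2 && mod10 <= 4 && (mod100 < 12 || mod100 > 14)) return 'few';
    return 'many';
}
```

A translation system then stores one string per category and picks the right one at render time.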

Tone, context and string length

In most cases where strings are translated as short snippets, there is a lot of interpretation that can be applied to each translation. Words rarely have exactly the same meaning when translated into other languages and can have additional connotations. A lot of the time you will use language in your app that conveys your app’s personality and will want to preserve that tone in all languages.

An important catch to look out for is re-using the same string in different places. The problem you may run into there is that the wording may be the same in English for both cases, but other languages may require different phrasing due to the variation in context.

Working on mobile projects you will need to pay extra attention to string length. Screen space will be at a premium and you will need to ensure your text snippet can fit into the space allocated. In many languages, especially for some technical terms, you may not have a convenient direct translation and what may be a short word in one language can end up being a full sentence in another.

Our solution

At Badoo we have an in-house localisation team, with translators for all our top markets based in the office full-time. Our team members translate and test content, and also work closely with developers to continuously improve our in-house translation system, and address language-related issues.

Dashboard

The screen above is the main client side interface our developers use when adding a new string to the localisation system. The very first text input contains the key we use to look up the snippet. As you can see we try to keep the key names as descriptive as possible. It should be fairly obvious from the key name what it is and where it is used.

To get around gender-specific grammar rules in different languages we use different keys for references to male and female people. While the original strings will be exactly the same when in English, in many languages there will be differences and this is the easiest way to take them into account. The translated text snippet is a simple template that can accept parameters as inputs. For example this could be the name of the person referred to in the string. To give the translators some additional context we also include a screenshot of the screen where the translated strings will be inserted.
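As an illustration of gender-specific keys with parameterised templates, here is a toy lookup in JavaScript. The key names and the `{name}` placeholder syntax are invented for this example, not our actual format:

```javascript
// Hypothetical lexeme store: one key per gender, each a template
// that accepts parameters such as the person's name.
var lexemes = {
    'chat.liked_you.male': '{name} liked you. Write to him!',
    'chat.liked_you.female': '{name} liked you. Write to her!'
};

function translate(key, gender, params) {
    var template = lexemes[key + '.' + gender];
    // substitute every {param} placeholder with its value
    return template.replace(/\{(\w+)\}/g, function(match, name) {
        return params[name];
    });
}
```

In English both templates happen to read almost identically, but in many languages the two keys diverge far more than a single pronoun.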

As a precaution to prevent truncation and to ensure that the translated text will be displayed within the allocated space, we also specify a maximum length for the string. The translated text in the app is manually checked to ensure it fits well and works in context. Where it’s not possible to create a sensible translation within the limit, a layout adjustment may be required. To avoid problems on smaller screen sizes, we mostly test translations on small-screen ldpi devices/emulators.

We generate unique language files for each app and platform on our network. To keep the size of these to a minimum you can specify which app/platform files need to have the particular key included.

As a final step, translation managers kick off a build and deploy a new version of the app to the test devices so that devs, testers and translators can see the latest version in action. For formats, units and number-dependent pluralisation, our solutions are server based.

To recap on the main points:

  • Start by extracting all strings from your app
  • Pay attention to number formats, units and plural forms in your translation
  • Not all languages are read from left to right
  • Remember that the translation may vary depending on the gender of the person in the text
  • Make sure translated strings fit the context such as tone and space available

Finally, the purpose of localisation is to make all users feel like first class citizens in your app irrespective of their language and location. Often, that requires taking extra steps that may not be immediately obvious, but we can say from 7 years of experience that it’s well worth the effort.


Fixed Headers in Mobile Webapps


When building mobile web apps there is often a desire to try and make it look and feel as “native” as possible. Whether it is the styling of components, the use of transitions, or just general speed and performance, actually achieving these things can often be much more difficult than it first seems. This article explains some of the challenges we faced when trying to implement one of these “native” features - a fixed header.

Normally you would expect fixed headers to work by setting their CSS to position: fixed;, which works in most cases except when you need to type something in a form element. Almost all mobile browsers push your page up to make room for the keyboard and your text element to be on the screen, thus pushing your fixed header out of the way. This is a bad user experience, because headers in mobile applications are the entry points for most user interactions.

When developing the chat page for our Hot or Not application for iOS we ran into the same problem. Here is an example of what it looks like:

Fail

Demo

Make sure you are viewing the demo page in a mobile browser to actually see it work

Before we start to fix this problem there are two things we need to do first.

Hide the browser address bar

If a user is visiting your website from a mobile browser, you can hide the address bar in certain cases to give you more screen space. There are plenty of good solutions you can find on the internets that will help you do that. For the sake of our demo we will use the snippet below.

var hideAddressbar = function() {
    document.body.style.minHeight = '1000px';
    window.scrollTo(0, 0);
    document.body.style.minHeight = window.innerHeight + 'px';
};

window.addEventListener('load', function() {
    hideAddressbar();
}, false);

window.addEventListener('touchstart', hideAddressbar);

Remove the user interaction delay on mobile browsers

In short, mobile browsers have a noticeable lag (~300 milliseconds) between when you tap on something and an action being taken for that tap. That’s because the browser is waiting to see if you wanted to do a double tap. This wouldn’t be an issue if mobile browsers respected the user-scalable and device-width properties better. Chromium has a ticket and a patch for it already.

In the meantime we have to fix this, because if you let the browser delay you for that long, it’s already too late and your page will have begun its scroll animation.

To fix the delay I recommend the usage of FastClick, however be aware that there is a bug in the library which makes it fail sometimes on input elements. There is a ticket for that here.

As well as removing the delay for click events, FastClick also speeds up focus events on form fields. The snippet below is a very simplified version of what’s going on inside FastClick.

document.querySelector('input').addEventListener('touchend', function() {
    this.focus();
});

Now, to prevent the page from scrolling, we have to listen to the window.onscroll event and set the scroll to 0 every time it fires, preventing the browser from moving the page.

var preventScrolling = function() {
    window.onscroll = function() {
        // prevent further scrolling
        document.body.scrollTop = 0;
    };
    // set the scroll to 0 initially
    document.body.scrollTop = 0;
};

var allowScrolling = function() {
    window.onscroll = null;
};

window.addEventListener('focus', preventScrolling, true);
window.addEventListener('blur', allowScrolling, true);

So on focus of an input element we prevent any kind of page scrolling, and enable it again when the user has finished typing. Here is what it looks like now:

Fail again

Demo

Well, that wasn’t very helpful, was it? The keyboard completely hides our input when it comes up, because we didn’t let the page scroll. To fix that we simply have to measure the amount by which our page scrolled when the input came into focus; that tells us the height of the keyboard, so we can move our input element into view manually.

// the input element we are keeping in view
var input = document.querySelector('input');

var focusHeight = 0;

var preventScrolling = function() {
    // Android devices don't deploy their keyboards immediately, so we
    // check every 100ms until the page has been pushed up
    if (document.body.scrollTop < 1) {
        setTimeout(function() {
            preventScrolling();
        }, 100);
        return;
    }
    focusHeight = document.body.scrollTop;
    window.onscroll = function() {
        document.body.scrollTop = 0;
    };
    document.body.scrollTop = 0;
    // move the input into the view
    input.style.marginBottom = focusHeight + 'px';
};

// Allow page scrolling
var allowScrolling = function() {
    window.onscroll = null;
    input.style.marginBottom = '0px';
};

document.body.addEventListener('focus', preventScrolling, true);
document.body.addEventListener('blur', allowScrolling, true);

Let’s see how that looks now:

Great Success

Demo

Great success! We now have a fixed header.

Scaling beyond one developer


Yesterday I gave a talk to the London iPhone Developer Group (#lidg) about the lessons we’ve learnt in the iOS team at Badoo as we’ve scaled the development team.

Hopefully it was interesting to other developers; I certainly enjoyed talking to many of you afterwards. We’re all facing the same challenges, and I think it’s great to share these ‘war stories’. And, I’ll be looking to share more in the future.

I’m attaching the slides here, for those who are interested.

AIDA - Badoo's journey into Continuous Integration


AIDA

It’s hardly news to anyone that product development and testing involve a lot of boring routine work, which can lead to human error. To avoid complications stemming from this, we use AIDA.

AIDA (Automated Interactive Deploy Assistant) is a utility that automatically performs many of the processes in Git, TeamCity and JIRA.

In this post, we focus on how through using AIDA we were able to automate multiple workflows and create a scheme of continuous integration.

We’ll start by looking at the version control system (VCS) we use here at Badoo, specifically how Git is used to automate creation of release branches, and their subsequent merging. Then we’ll discuss AIDA’s major contribution to both JIRA integration and TeamCity.

Git flow

The Badoo Team uses Git as a version control system. Our model ensures each task is developed and tested in a separate branch. The branch name consists of the ticket number in JIRA and a description of the problem.

BFG-9000_All_developers_should_be_given_a_years_holiday_(paid)

A release is built and tested in its own branch, which is then merged with the branches for completed issues. We deploy code to production servers twice a day, so two release branches are created daily.

Names of release branches are simple:

build_{name of the component}_{release date}_{time}

This structure means the team immediately knows the date and time of release from the branch name. The hooks that prevent changes being made to a release branch use the same time-stamp. For example, developers are prevented from adding a task to a branch release two hours before deploy to production servers. Without such restrictions the QA team wouldn’t have time to check all the tasks on the release branch.
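For illustration, the naming scheme could be generated like this (a JavaScript sketch; the component name and the exact date/time separators are assumptions):

```javascript
// Build a release branch name such as 'build_mobile_2013-10-16_10-00'
// from a component name and a Date, per the scheme described above.
function releaseBranchName(component, date) {
    function pad(n) { return (n < 10 ? '0' : '') + n; }
    return 'build_' + component + '_' +
        date.getFullYear() + '-' + pad(date.getMonth() + 1) + '-' + pad(date.getDate()) +
        '_' + pad(date.getHours()) + '-' + pad(date.getMinutes());
}
```

Because the timestamp is part of the name, hooks can parse it back out to decide whether the branch is still open for new tasks.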

Our master branch is a copy of production. As soon as a code from ‘Build’ is deployed to servers, it is merged to the master branch. Devs also deploy hot fixes to production servers from this branch.

The scheme we use is shown below:

Six stages of testing

  • Code Review: Every task undergoes code review. Each department’s reviewer is chosen according to varying criteria; i.e. it may be the person with the most experience or the development team leader.
  • Unit Tests: Unit tests are run in each branch. They run automatically when the reviewer changes the ticket status to ‘resolved’. After performing the tests (22,000 tests in 3-4 minutes), AIDA posts a report in JIRA, in table form.

  • Devel: The first stage of manual testing. Each task is checked in development environment and databases for testing.
  • Shot: The task is checked on the battlefield. A shot is a folder on the server with a clone of the branch repository and a configured Nginx, and it has its own top-level domain: -.shot. At this stage, translations into major languages are generated, and the issue is tested in the production environment (databases, scripts, services).
  • Staging: The release is tested in the production environment, translated into all languages, and fully monitored. All tasks included in the build are re-tested.
  • Production: If the task is very important, it is checked again in the production environment.

If a task in the release contains an error we remove its branch from the release branch with Git rebase. We use a special script that performs this operation in semi-automatic mode. 

Note:

We don’t use Git revert in release branches. If we removed a task from the release branch with Git revert, after the release was merged into the master, the developer of the problematic task would have to revert the commit in order to get his or her changes back.

AIDA and Git

Due to the sheer number of branches in the described model, a lot of issues arise concerning merge, release and code-control. These can be solved automatically:

  • Automatic creation of a new release - first of all, AIDA creates a ‘release’ branch. AIDA tracks changes in the master branch, and once the previous release branch is merged into the master, a new release branch is created.

  • Automatic generation of a new release - Every minute, JIRA tasks that have been resolved and tested are merged into a release branch (with the exception of tasks specifically marked in the JIRA flow). In case of a conflict, the developer and release engineer are notified, and the task is forwarded to the developer.

  • Release automatically kept up to date with master - Since the master branch is a copy of the production code, and developers add hot fixes to it via the special tool Deploy Dashboard, the master branch needs to be continuously merged into the release branch. AIDA completes this merge whenever new changes land in the master branch. A message appears if a conflict arises.

  • If the developer adds a change to the task branch after a merge with a release branch, this will be caught and AIDA will report it.

Deploy Dashboard

For hot fixes to production servers, we use patches. Applying a patch to the master branch and release branch takes place in semi-automatic mode. For this we use our tool Deploy Dashboard.

Deploy Dashboard is a special web interface for data collection, monitoring and recording, as well as formalisation of patches with a full list of information, and automatic notification.

If we need to fix something in production, the developer creates and attaches the patch. Then the release engineer checks it and applies it to the master branch in the central repository. Following this, the patch is deployed to our production servers.

AIDA and JIRA

To monitor development, testing and the formation of a release we use JIRA. The workflow is planned in detail and fully formalised. Some work in the bug-tracking system is performed by AIDA; basically, its job is to move tasks between statuses or to add particular information to them.

Here are a few examples:

  • The dev pushes a code change to the central repository. The status of the ticket automatically changes from ‘Open’ to ‘In Progress’.
  • If the tester creates a Shot for the ticket (a deploy of that single task into the production environment), the task status automatically changes to ‘In Shot’.
  • The ticket is reopened automatically when the task is rolled back from the release branch.
  • If changes to the task branch happen after the task has been resolved, the issue is returned to review mode.
  • When a task branch is pushed to the central repository for the first time, the branch name is registered in the corresponding JIRA ticket.
  • After running unit tests for the branch, a table is displayed containing the results.
  • AIDA monitors status in JIRA and sends the issue back to the developer when there are problems with merging.
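The examples above boil down to a mapping from repository events to JIRA statuses. A toy version in JavaScript (the event and status names are only the examples given in this post; the real rules are richer):

```javascript
// Toy event-to-status mapping in the spirit of the JIRA automation
// described above. Event names are invented for illustration.
var transitions = {
    'first-push': 'In Progress',
    'shot-created': 'In Shot',
    'rolled-back-from-release': 'Reopened',
    'changed-after-resolve': 'In Review'
};

function nextStatus(event) {
    // Unknown events leave the ticket untouched.
    return transitions.hasOwnProperty(event) ? transitions[event] : null;
}
```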

AIDA tells us about all actions that have been performed with tasks.

This automation greatly simplifies workflow and eliminates routine activities.

Continuous integration

Earlier, we wanted to get rid of the routine work involved in building and automatically deploying to a test environment, but we were stuck manually assigning the new branch names of each release in the project’s CI server. Now TeamCity catches changes in all branches matching a given mask (in this case build_*) and starts the build.

The process of automatic build and deployment to the test environment works as follows:

  1. The project is set up in TeamCity for a branch with a mask build_*.

  2. If there’s a new change in the release branch, TeamCity starts automatic build.

  3. If successful, the script will start deploying to the test servers.

  4. AIDA checks the release in the test environment with a rapid smoke test (using a simple curl).

  5. If the tests don’t pass, the release version is marked as bad and is rolled back to the previous (good) version of the release.

The entire process takes three minutes, and these quick tests reveal only fatal errors.

Meanwhile, all unit, automated and functional tests are run in parallel, so that the tester can see the task in the test environment as soon as possible.
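The rollback decision in steps 4 and 5 can be sketched like this (the function and field names are invented for illustration, not AIDA's real API):

```javascript
// Sketch of the post-deploy smoke-test gate: if the quick check fails,
// the candidate release is marked bad and the previous good version
// stays active.
function evaluateDeploy(candidate, previousGood, smokeTestPassed) {
    if (smokeTestPassed) {
        return { active: candidate, status: 'good' };
    }
    return { active: previousGood, status: 'rolled-back' };
}
```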

In Summary

To review what processes are automated using AIDA:

  1. AIDA works with Git, creating branches, merging them and warning us when something goes wrong.

  2. It starts automated tests and provides a convenient report in JIRA.

  3. AIDA removes problem tasks from the release in semi-automatic mode.

  4. It interacts with JIRA, automatically changing status and updating the information in tasks.

  5. For hot fixes, AIDA provides a semi-automatic system of patches via a special web interface.

  6. It works with TeamCity, running scripts, tests and deploys to the test environment.

If you are interested in reading a more detailed report on each type of automation, please comment and we’ll be happy to continue our series of articles on this subject.

P.S. Create good assistants, which won’t let you down when you’re in a pinch!

Type Checking in JavaScript


Here at Badoo we write a lot of JavaScript, our mobile web app contains about 60,000 lines of the stuff, and maintaining that much code can be challenging. One of the trickier aspects of working with a client side JavaScript application of this scale is avoiding exceptions. In this post I want to discuss a particular type of exception that you have probably seen a few times - a TypeError.

As the MDN link above explains:

“A TypeError is thrown when an operand or argument passed to a function is incompatible with the type expected by that operator or function” - MDN

So to avoid TypeErrors we need to check that the values we pass into functions are correct, and that any code we write checks the validity of an operand before using an operator on it. For example, the . operator is not compatible with null or undefined, and the instanceof operator is not compatible with anything that isn’t a function; using these operators on an incompatible operand will throw a TypeError. If you are coming from a statically typed language like Java, where you normally don’t need to worry about things like this, then this may seem totally horrible, in which case you might want to consider using a “compile to JavaScript” language that has static typing, for example Dart or TypeScript. If however you quite like writing JavaScript, or already have a large JavaScript code base, all is not lost: performing this type checking does not need to be painful, and can also have the pleasant side effect of helping others to understand your code.
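Both incompatibilities mentioned above are easy to demonstrate. The small helper below (written for this post, not part of our codebase) runs a function and reports whether it threw a TypeError:

```javascript
// Runs fn and reports whether it threw a TypeError.
function throwsTypeError(fn) {
    try {
        fn();
        return false;
    } catch (e) {
        return e instanceof TypeError;
    }
}

throwsTypeError(function () { return (null).foo; });        // true: `.` on null
throwsTypeError(function () { return ({}) instanceof 5; }); // true: instanceof a non-function
throwsTypeError(function () { return 1 + 1; });             // false: nothing wrong here
```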

Let’s start by looking at a fairly straightforward example of getting some data from the server, performing some operations on that data, and then using it to render some HTML.

Api.get('/conversations', function (conversations) {
    var intros = conversations.map(function (c) {
        var name = c.theirName;
        var mostRecent = c.messages[0].text.substring(0, 30);
        return name + ': ' + mostRecent;
    });
    App.renderMessages(intros);
});

The first thing to note is that from looking at this code we don’t actually know what conversations is supposed to be. We could assume, since it’s obviously expected to have a map function, that it should be an array, but assumptions are bad, and in reality it could be anything that implements a map method. The function passed to map makes a lot of assumptions about the c variable. If any of those assumptions are wrong then a TypeError will be thrown and renderMessages() will never be called.

So how can we go about checking the validity of types in this example? Well, first let’s look at the different methods of checking for types in JavaScript.

typeof

The typeof operator returns a string indicating the type of the operand, but the types it returns are very limited. For example the following all return “object”

typeof {};
typeof [];
typeof null;
typeof document.createElement('div');
typeof /abcd/;

instanceof

The instanceof operator is used to determine if an object’s prototype chain contains the prototype property of a given constructor.

[] instanceof Array; // true

var Foo = function () {};
new Foo() instanceof Foo; // true

Although this will work, using instanceof to check the type of a native object is not a great idea as it does not work for primitive values.

'a' instanceof String; // false
5 instanceof Number; // false
true instanceof Boolean; // false

Object.prototype.toString

The toString method on Object.prototype is used by many JavaScript frameworks to infer type, basically because the spec for this method is very clear and has been implemented consistently across all browsers. Section 15.2.4.2 of the ECMA-262 spec says:

  • If the this value is undefined return "[object Undefined]".
  • If the this value is null, return "[object Null]".
  • Let O be the result of calling ToObject passing the this value as the argument.
  • Let class be the value of the [[Class]] internal property of O.
  • Return the String value that is the result of concatenating the three Strings "[object ", class, and "]".

So basically this method will always return a String in the form “[object Foo]”, where Foo is going to be “Null”, “Undefined”, or the internal Class used to create this. By using the call method to change the this value, and a simple regular expression to parse the result, we can get a string representing the type of anything.

var type = function (o) {
    var s = Object.prototype.toString.call(o);
    return s.match(/\[object (.*?)\]/)[1].toLowerCase();
}

type({});          // "object"
type([]);          // "array"
type(5);           // "number"
type(null);        // "null"
type();            // "undefined"
type(/abcd/);      // "regexp"
type(new Date());  // "date"

So that must be problem solved, right? Sadly, not quite yet. There are still a few instances where this method will return values other than what we would expect.

type(NaN);           // "number"
type(document.body); // "htmlbodyelement"

Both of these cases return values that we probably wouldn’t expect. In the case of NaN it returns "number" because technically NaN is a type of number, although in nearly all cases we want to know if something is a number, not ‘not a number’. The internal class used to implement the <body> element is HTMLBodyElement (at least in Chrome and Firefox), and there are specific classes for every element. In most cases we just want to know whether something is an element or not; if we then care about the tag name of that element, we can use the tagName property to retrieve it. We can modify our existing method to handle these cases.

var type = function (o) {
    // handle null in old IE
    if (o === null) {
        return 'null';
    }

    // handle DOM elements
    if (o && (o.nodeType === 1 || o.nodeType === 9)) {
        return 'element';
    }

    var s = Object.prototype.toString.call(o);
    var type = s.match(/\[object (.*?)\]/)[1].toLowerCase();

    // handle NaN and Infinity
    if (type === 'number') {
        if (isNaN(o)) {
            return 'nan';
        }
        if (!isFinite(o)) {
            return 'infinity';
        }
    }

    return type;
}

So now that we have a method that will return the correct type for all the things we are interested in, we can improve the original example to ensure that we don’t throw any TypeErrors.

Api.get('/conversations', function (conversations) {
    // anyone reading this now knows
    // that conversations should be an array
    if (type(conversations) !== 'array') {
        App.renderMessages([]);
        return;
    }

    var intros = conversations.map(function (c) {
        if (type(c) !== 'object') {
            return '';
        }

        var name = type(c.theirName) === 'string' ? c.theirName : '';
        var mostRecent = '';

        if (type(c.messages) === 'array' &&
            type(c.messages[0]) === 'object' &&
            type(c.messages[0].text) === 'string') {
            mostRecent = c.messages[0].text.substring(0, 30);
        }

        return name + ': ' + mostRecent;
    });

    // much more likely to make it here now
    App.renderMessages(intros);
});

Obviously there is no getting away from the fact that we have had to add quite a lot of additional code to avoid the risk of TypeErrors, but at Badoo we would always rather send a few extra bytes of JavaScript down the wire if it means our application is more stable.

Finally, the rather obvious downside of the type method is that it requires checking the return value against a string every time. This is easily improved though. We can create an API similar to Underscore / LoDash / jQuery by doing the following:

['Null', 'Undefined', 'Object', 'Array', 'String', 'Number', 'Boolean',
 'Function', 'RegExp', 'Element', 'NaN', 'Infinity'].forEach(function (t) {
    type['is' + t] = function (o) {
        return type(o) === t.toLowerCase();
    };
});

// examples:
type.isObject({});                              // true
type.isNumber(NaN);                             // false
type.isElement(document.createElement('div'));  // true
type.isRegExp(/abc/);                           // true

This is the approach we take to type checking in JavaScript in our mobile web application and we have found it makes code easier to read and less likely to fail. The code for the type method explained in this post is available as a gist.

JSConfEU 2013


A few of us here at Badoo were lucky enough to travel to Berlin in October for JSConfEU. It was a fantastic conference, the quality of the talks was incredibly high covering topics from the future of JavaScript, to why sometimes you need to draw animals. There were so many great talks, but I just want to give a quick overview of the ones that those of us who went enjoyed the most.

Nat Duca: Advanced Jank Busting in Chrome

When trying to get to the bottom of rendering issues we often turn to the Timeline panel in the Chrome dev tools, which is great for showing forced layouts or paints that are taking a long time. However Nat Duca gave a whistlestop tour of a relatively new feature in Chrome called Tracing which allows you to see right into the heart of the rendering process. It is available by visiting chrome://tracing in the browser, and it initially looks a bit underwhelming but don’t be fooled, this is an incredibly powerful feature. In Nat’s talk he is using Chrome Canary, but I think most of the new features he discusses are available in Chrome 30.

Addy Osmani: Object.observe()

Many popular JavaScript frameworks around at the moment have data binding built in, and each implements it in a slightly different way, but all currently have to resort to what Addy Osmani refers to as “dirty checking”. This is because there has historically been no way to “watch” an object in JavaScript and be notified when it changes, but that is changing with the introduction of Object.observe(). Although currently only available in Chrome Canary, it should be landing in Chrome soon. Addy’s talk contains some very detailed examples of how Object.observe() could be used to improve the data binding implementations in JS frameworks.

Forbes Lindsay: Promises and Generators

I think that the introduction of Promises into JavaScript has to be one of the biggest single improvements to the language. The asynchronous nature of many operations leads many developers into a callback hell that makes code hard to read and error prone. Forbes’s talk goes beyond Promises though and talks about generator functions and the yield keyword and how when all these things are brought together we can bring beauty and elegance to asynchronous JS code.

Bartek Szopka: Everything you never wanted to know about JavaScript numbers

This is an amazing delve into numbers in JavaScript and how they work. My personal favourite is at 22.00 where NaN is explained, although this whole talk is numerical gold. Even if you are not dealing with complicated maths day to day I would still recommend taking half an hour to see what’s going on under the hood.

Alex Feyerke: I have a Dreamcode: Build Apps, not Backends

Hood.ie recently posted a great blog post about “offline first” which has had lots of positive feedback from the web development community. Essentially it is an extension of progressive enhancement, in that you build your application with the assumption that the user will be offline most of the time. Alex Feyerke from Hood.ie made a compelling case at JSConfEU for a backend-as-a-service style architecture in which the web developer doesn’t need to worry about the implementation of registering users or saving their data. The API they have put together looks elegant and I think this could be the future of prototyping web applications.

Brendan Eich: JS Responsibilities

If you can understand more than 50% of this talk you deserve a big shiny medal. It’s really excellent to see that the creator (if you will, father) of JavaScript is still so passionate about pushing it forward, and also to see Doom running inside of Unreal Tournament running in Firefox (27 mins in).

I couldn’t possibly list all the talks we enjoyed here, but they are all available on JSConfEU’s YouTube channel so go watch them!

5 Advanced Javascript and Web Debugging Techniques You Should Know About


In this article we will discuss 5 advanced techniques which web developers can use to reduce the time they spend debugging and squash challenging bugs by using new tools available to us and taking advantage of the new features offered by debuggers.

  1. Weinre
  2. DOM Breakpoints
  3. The ‘debugger’ statement
  4. Hooking into native methods
  5. Mapping remote scripts locally

Weinre

According to the official description weinre stands for web inspector remote. It is a debugger for web pages, like FireBug (for FireFox) and Web Inspector (for WebKit-based browsers), except it’s designed to work remotely, and in particular, to allow you debug web pages on a mobile device such as a phone.

Weinre

Weinre essentially allows you to remotely debug webpages on devices and browsers which don’t come with native debugging support. It aims to replicate the UI of Chrome Developer Tools and provide most of its functionality. This tool is extremely handy for debugging DOM/CSS issues, but works really well for debugging JavaScript too.

Installation

$ sudo npm -g install weinre

Weinre can be installed using npm. Or you can download binary packages from here.

After installation just run the command weinre in your terminal and it will start the weinre server on its default port 8080. You can customize the port if you need to.

Next navigate to your_hostname:8080 where you will have two options to inject weinre into the page you want to debug:

  • Copy the script block and paste it in your page’s html
  • Add the bookmarklet on your phone to allow weinre to run on any site.

Once you do that you can debug any page on any browser or device as if you were using the Chrome Devtools on it! It does have some limitations however. Because weinre is simply a script injector it won’t provide you with the ability to put breakpoints inside your javascript code. But the console in weinre is really good for seeing javascript logs and doing other debugging tasks.

Note: If you don’t want the complexity of setting up weinre, you can also use a remotely hosted version of it at http://debug.phonegap.com/

Future possibilities

Using js.js (a javascript interpreter in javascript) combined with weinre, it’s possible to enable true javascript debugging inside any environment/device with full breakpoint support. :-)


DOM Breakpoints

DOM breakpoints are a feature provided by Firebug and Chrome Devtools that allows you to pause script execution as soon as a certain node in the DOM is modified.

The benefit of using DOM breakpoints is that, because of the asynchronous nature of javascript, it’s sometimes easier to know when a DOM tree will change than to set breakpoints at every possible location in your code which might modify it.

To use a DOM breakpoint:

  • Go to the elements view in your debugger
  • Right click on the node you want to break on modification
  • Select the desired break action

Weinre

Note: In Firebug you can find all the breakpoints in the Script>Breakpoints tab. In Chrome Devtools you can find them in the Elements>DOM Breakpoints tab.


The ‘debugger’ statement

The debugger statement allows you to pause the javascript execution at arbitrary points in your code provided your debugger is open at that moment.

This can be extremely handy because it lets you strategically put breakpoints in your code for when certain conditions are met, which is much easier to pull off than using conditional breakpoints.

To use it, all you have to do is put the statement inside your javascript code where you want the break to happen.

if (waldo) {
    debugger;
}

Now, with your console open, whenever the javascript interpreter hits that condition it will pause your script execution. Just don’t release it with your production code :)

Note: If you didn’t know about conditional breakpoints. Here is a quick overview on how to use breakpoints in Chrome DevTools


Hooking into native methods

Because the browser and window javascript methods aren’t protected, you can hook into them to add your own functionality or debugging code. This technique is really useful when you know what problem is occurring but can’t track down its source, or if you want to mock some javascript methods.

**Let’s take an example: ** Suppose you are noticing an unexpected attribute being modified on a DOM element. You know the attribute or it’s value but you find it harder to track down the line of the code which does that.

In that case you can hook into the setAttribute method with your own and add debug code in it to find out the problem like so:

var oldFn = Element.prototype.setAttribute;

Element.prototype.setAttribute = function (attr, value) {
    if (value === "the_droids_you_are_looking_for") {
        debugger;
    }
    oldFn.call(this, attr, value);
}

Now whenever an element’s attribute is modified to the value you are looking for, the script will pause execution and you can find out the cause of the problem from the call stack.

Note: This is generally how Prototype and SinonJS work. But it’s not guaranteed to work in all browsers; for example, in iOS Safari in Private mode you can’t access or attempt to modify the localStorage methods.


Mapping remote scripts locally

This method allows you to proxy any remote script URL to a local file on your disk. You can then modify the file as you wish and have it act as if it were served from the original source. This comes in really handy if you are debugging a problem where the source is minified and/or you don’t have the ability to modify the file (think production environments).

Note: This does require you to download and install a paid third party app on your machine. There are free alternatives to this method but they require a manual setup of proxies and http servers.

How to do it:

  • Download Charles Proxy which is a fantastic proxy tool for debugging network connections
  • Enable Charles for either the whole system or your browser
  • Download the remote file that you wish to debug and save it locally
  • Unminify the file and add any debug code that you wish to
  • In Charles: Tools > Map Local
  • Choose the local file and map the remote URL to it, as in the screenshot below. You can even map entire hostnames.
  • Reload the page
  • The remote URL is now read from your locally saved file.

Charles

Benefits of mapping remote files locally

  • Allows you to debug production code when sourcemaps aren’t available.
  • Allows you to actually make modifications to the code where it wasn’t previously possible.

Note: Chrome developer tools has a file mapping system as well, but it currently only works one way: it allows you to edit files in the devtools, save them to disk and have the changes reflected for that session. But as soon as you reload the page, the files are fetched from the server and not read from disk, because it assumes your save actions have modified the source files. Two-way mapping would be a great feature to have in devtools in the future.


If you had any problems following something in the article please drop a comment.

The technology of billing - how we do it at Badoo


There are many ways to monetize your project, but all of them have one thing in common – the transfer of money from the user to a company account. In this article we will discuss how this process works at Badoo.

What do we mean by ‘billing’?

Billing for us concerns all things related to the transfer of money. For example: pricing, payment pages and payment processing, the rendering of services and promo campaigns, as well as the monitoring of all these things.

In the beginning, as with most startups, we had no paid services at all. The first steps towards monetization took place in 2008 (well after the official site launch in 2006). We selected France as our guinea pig, and the only available payment method at that time worked via SMS. For payment processing we used the file system: each incoming request was put into a file and moved between directories by bash scripts, its status changing as it was processed. A database was used only for registering successful transactions. This worked pretty well for us, but after a year this system became difficult to maintain and we decided to switch to using just a database.

This new system had to be re-worked quickly, as up till then we had been accepting payments in only a limited number of countries. But this system had one weak point – it was designed solely for SMS payments. To this day we still have some odd leftovers of this system in our database structure, such as fields MSISDN (mobile phone number) and short code (short number for premium SMS) in a table of successfully processed payments.

Now we receive payments from countries all over the world. At any given second at least a few users are trying to buy something on Badoo or through our mobile applications. Their locations are represented in this “Earth at Night” visual:

Earth

We accept payments using more than 50 payment methods. The most popular are credit card, SMS and direct billing, and purchases via the Apple Store and Google Play.

Pay

Among them you can find such leftfield payment options as IP-billing (direct payments from your internet provider account), landline payments (you have to call from your landline and confirm payment). Once we even received a payment via regular mail!

Letter

Credit card and bank payments

All payment systems have an API and work by accepting payments from their users. Such direct integrations work well if you have only a few of them and everything runs smoothly. But if you work with local payment systems it starts to become a problem. It is becoming harder and harder to support a lot of different APIs for several reasons: local laws and regulations are different, a popular local payment system provider may refuse to work with foreign clients, even signing a contract can draw out the process substantially. Despite the complexity of local payment methods though, adopting many of them has proven to be quite a profitable decision. An example of this is the Netherlands, which had not previously been a strong market for us. After we enabled a local payment system named iDeal, however, we started to take in 30-40% more profit.

Where there is demand, usually there’s someone ready to meet it. Many companies known as ‘payment gateways’ work as aggregators and unify popular payment systems, including country-specific ones, under one single API. Via such companies it suffices to perform an integration only once, and after that one gets access to many different payment systems around the world. Some of them even provide a fully customizable payment page where you can upload your own CSS & JS files, change images, texts and translations. You can make this page look like part of your site and even register it on a subdomain such as “payments.example.com”. Even tech-savvy users might not realise that they just made a payment on a third-party site.

Which is better to use, direct integration or payment gateways? First of all it depends on the specific requirements of the business. In our company we use both types, because we want to work with many different payment gateways and sometimes make direct integrations with payment systems. Another important factor in making this decision is the quality of service provided by a payment system. Often payment gateways offer more convenient APIs, plus a more stable and higher-quality service than the source payment system.

SMS payments

SMS payments are very different to other systems. In many countries they are under very strict control, especially in Europe. Local regulators or governments can make demands regarding all aspects of SMS payments, for example specifying the exact text sent via SMS or the appearance of the payment page. You have to monitor changes and apply them in time. Sometimes requirements can seem very strange; for example, in Belgium you must show the short code in white on black with the price nearby. You can see how this looks on our site below.

SMS

Also there are two different types of SMS billing: MO (Mobile Originated) and MT (Mobile Terminated). MO billing is very easy to understand and implement: as soon as a user sends an SMS to our short number, we receive money. MT is a bit more complicated. The main difference is that the user’s funds are deducted not at the moment he or she sends the SMS, but when our message arrives notifying him or her of the charge. With this method, we get the money only after receiving a delivery notification for this payment message.

The main goal of MT-billing is to add an additional check on our side before the user sends money, preventing errors that occur due to user-misspelled SMS texts. Using this method, the payment process consists of two phases: first, the user initiates payment, and second, they receive confirmation. In some countries the payment process for MT-billing follows one of these variants:

  • the user sends an SMS on short number, we receive it and check that the text is correct, etc. We send a free message with custom text, which the user has to answer, confirming the payment. After that we send a message that they have been charged
  • same as above, but instead of responding directly to the free message the user has to enter a PIN code from it on the Badoo site
  • the user enters their phone number on Badoo, we send a free message with a PIN. The user then enters the PIN code on Badoo, and after checking this, we send the payment message
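To make the third variant concrete, here is a minimal sketch of that PIN-confirmation flow in JavaScript. The state names and the charge callback are assumptions for illustration, not our billing code:

```javascript
// Minimal sketch of the MT-billing PIN flow: the user is charged only
// after confirming the PIN we sent in a free message.
function createMtPayment() {
    return { state: 'awaiting-phone', pin: null };
}

function sendPin(payment, pin) {
    // In reality a free SMS containing the PIN is sent to the user here.
    payment.pin = pin;
    payment.state = 'awaiting-pin';
}

function confirmPin(payment, enteredPin, charge) {
    if (payment.state !== 'awaiting-pin' || enteredPin !== payment.pin) {
        payment.state = 'failed';
        return false;
    }
    charge(); // only now is the chargeable payment message sent
    payment.state = 'charged';
    return true;
}
```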

For SMS payments we use only aggregators. Direct integrations with operators are not profitable, because you have to support a lot of contracts in many countries, which increasingly requires the involvement of accountants and lawyers.

Technical details

Badoo runs on PHP and MySQL, and for payment processing we use the same technologies. However, the billing application runs on separate pools of servers. These are divided into groups, such as servers that process incoming requests (payment pages, notifications from aggregators, etc.), servers for background scripts, database servers, and special groups with increased security where we process credit card payments. Servers handling card payments must be compliant with PCI DSS, a security standard developed in coordination with Visa, MasterCard, American Express, JCB and Discover for companies that process or store cardholders’ personal information. The list of requirements that must be met is quite long.

As database servers we use two Percona MySQL servers working in master-master replication. All requests are processed by only one of them; the second is used for hot backup and other infrastructure duties, such as heavy analytical queries, monitoring queries and so forth.

The whole billing system can be divided into a few big parts:

  • Core - the base entities needed for payment processing such as Order, Payment and Subscription
  • Provider plugins - all provider-related functionality such as implementation of API and internal interfaces
  • Payment page - where you can choose a product and payment method

In order to integrate a new payment provider, we create a new plugin which is responsible for all communication between us and the payment gateway. Requests come in two types, depending on whether we initiate them (pull requests) or the payment provider does (push requests). The most popular protocol for pull requests is HTTP, either on its own or as transport for JSON/XML. REST APIs, despite gaining a certain degree of popularity recently, we haven’t encountered very often; only new companies, or companies that have recently reworked their APIs, offer them, for example the new PayPal API or the new payment system from the UK’s GoCardless. The second most popular transport for pull requests is SOAP. For push requests mostly HTTP is used (either pure or as transport), and SOAP only rarely. The only company that comes readily to mind offering SOAP push notifications is the Russian payment system QIWI.
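The shape of such a plugin, with one handler per direction of communication, can be sketched like this. The method and field names are invented for illustration; our internal interfaces (and our PHP implementation) are not public:

```javascript
// Hypothetical provider plugin: `charge` covers the pull direction
// (we call the gateway), `handleNotification` covers the push direction
// (the gateway calls us with the final result).
function createProviderPlugin(name) {
    return {
        name: name,
        // pull: initiate a payment with the gateway (HTTP/JSON, SOAP, ...)
        charge: function (order) {
            return { orderId: order.id, status: 'pending' };
        },
        // push: the gateway notifies us about the outcome
        handleNotification: function (notification) {
            return notification.success ? 'completed' : 'failed';
        }
    };
}
```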

After the programming part is finished, the testing process begins. We test everything several times in different environments: the test environment, in a shot (an internal domain with only one particular task and a working production environment), in a build (a pre-production version of the code which is ready to go live) and in the live environment. For more details about release management at Badoo please see our blog post: http://techblog.badoo.com/blog/2013/10/16/aida-badoos-journey-into-continuous-integration/

For billing tasks there are some peculiarities. We have to test not only our own code but how it interacts with third-party systems. It’s nice if the payment provider offers their own sandbox which behaves the same as their production system, but if not, we create stubs for them. These stubs emulate a real aggregator system and allow us to do manual and automatic testing. Below is an example of a stub for one of our SMS providers.

Letter

After passing through the test environment we check how it works with the real system, i.e. by making real payments. For SMS payments we often need to get approval from local regulators, which can take a few months. We don't want to deploy semi-ready code on production, so as a solution we created a new type of environment: the external shot. This is our regular shot, a feature branch with one task, but accessible via an external sub-domain. For security reasons we create them only when needed. We send links to external shots to our partners and they can test changes at any time. It's especially convenient when you work with partners from another hemisphere, where the time difference can be up to 12 hours!

Support and operation

After a new integration goes live we enter the stage of its support and operation. Technical support occupies about 60-70% of our work time.

Support

By support I mean primarily customer support. All easy cases are solved by the first line of support. Our employees know many different languages and can translate and attend to customer complaints quickly. So only very complicated cases end up on the desks of our team of developers.

The second component of support is bug fixing and making changes to current integrations. Bugs appear for multiple reasons. Of course the majority are a result of human error, i.e. when something is implemented in the wrong way. But sometimes they result from unclear documentation. For example, once we had to use a Skype chat with a developer of a new payment system instead of documentation. At other times a payment provider makes changes on their side and forgets to notify us. One more point of failure is third-party systems: since many payment providers aggregate other payment services, an error can occur not on their side but on their partner's side.

In order to solve such cases quickly we maintain detailed logs. These contain all communications between us and payment providers, all important events, errors during query processing and so on. Each query has its own unique identifier through which we can find all rows in logs and reconstruct the steps of an execution query. It’s especially helpful when we have to investigate cases that happened a few weeks or months ago.

So that’s how billing is organized at Badoo! There are still many interesting topics we plan to explore in future, such as monitoring, PCI DSS certification, and re-working bank-card payments. If you have questions or suggestions for future articles, please leave a comment for us below.


Localising into 46 languages

Localisation done right will earn your app loyal users in new countries. Done badly, it becomes a nagging pain like half the apps on my computer trying to correct the spelling of localisation to localization. The purpose of localisation is not only to make your app available in other languages, but also to make the entire user experience feel like it was specifically designed with the local user in mind. Here I will share some of the lessons learned from making Badoo available in 46 languages, and point out some of the tricky bits you should pay attention to.

A brief intro

The process of making your service available in other languages consists of two parts, internationalisation and localisation.

Broadly speaking internationalisation, aka i18n, is the part where you take out all hard-coded strings from your code and replace them with reference keys. Once the strings have been translated, the reference keys will be used to fetch text in the requested language.

Localisation, aka l10n, is the part of actually adapting your content for different locales. The bulk of this will be translation, however you should also adapt non-text content for each market. For example in promotional pictures such as app store screenshots, use screenshots of the app in local language. Also, user names in screenshots should be names that will look familiar to people in the area, and people in photos should look like they are from that country or at least region.

That's the theory. In practice the process is more complicated. Language translation aside, your app also needs to adhere to additional locale-specific conventions to make sense and feel completely native to users in other locales.

Formats and units

There are some subtle but important differences in the formatting of dates and numbers, and the same string may have opposite meanings in different locales. A common example is dates:

03/07/2013

The date above can mean 3rd July or 7th March depending on the local conventions. This is a frequent source of confusion between the UK and the US where, despite both countries speaking English, the date formats are different. Do not assume that because two countries speak the same language, all will be understood or correctly interpreted.

The same goes for number formats:

1.000

The number above could be interpreted as either 1 or 1000 depending on which decimal point convention is used. For example, in Korea a full stop (.) denotes a decimal sign, but in Germany a full stop is used as a thousands separator.
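
The ambiguity is easy to demonstrate with the JDK's locale-aware number parsing. A minimal sketch (the class and method names here are mine, not from the original article):

```java
import java.text.NumberFormat;
import java.text.ParseException;
import java.util.Locale;

public class LocaleFormats {

    // Parses a numeric string using the conventions of the given locale.
    public static double parseIn(String text, Locale locale) {
        try {
            return NumberFormat.getInstance(locale).parse(text).doubleValue();
        } catch (ParseException e) {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) {
        // "1.000" is one thousand in Germany, but one in Korea.
        System.out.println(parseIn("1.000", Locale.GERMANY)); // 1000.0
        System.out.println(parseIn("1.000", Locale.KOREA));   // 1.0
    }
}
```

The same string, two very different numbers, which is exactly why you should never format or parse numbers with hard-coded separators.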

If you use any kind of measurements you will also need to ensure you use units that are easily understood in the region you are targeting. For example you will probably want to use miles rather than kilometres to denote distances in the United States as they use the Imperial system. Also, if you are displaying a temperature to a user in continental Europe you should use Celsius, as Fahrenheit is unlikely to be understood.

Direction

While most languages are written from left to right, there are some notable exceptions such as Arabic and Hebrew that are written from right to left. Localising into these languages requires considerably more work than just translating. In most cases, the user interface is likely to be language direction specific and will require reworking to ensure your app retains its usability when direction is reversed.

Gender specific grammar and pluralisation

English is in some ways a simple language. It has no gender-specific grammatical rules and in most cases all you need to do to create a plural of a noun is stick an ‘s’ on the end. Other languages can be more complicated - often the endings of words will change depending on whether the actor in the sentence is male or female.

In some languages the plural form rules can be quite complex. For example, in Russian a different form may be used depending on the exact number of the objects being counted. If there are between 2 and 4 objects, one form of plural is used, while if there are more another is used. However, if the number ends with a 1, then the singular form is used, unless it ends with 11, in which case the second form of plural is used (as for 12 to 14). Like I said, complex.
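
The Russian rules above can be expressed compactly in code. A sketch (the form labels "one", "few" and "many" are just names I chose, not part of any standard API):

```java
public class RussianPlurals {

    // Selects which of the three Russian noun forms to use for a count:
    // "one"  - singular          (1, 21, 31, ... but not 11)
    // "few"  - first plural form (2-4, 22-24, ... but not 12-14)
    // "many" - second plural form (0, 5-20, 25-30, ...)
    public static String form(int count) {
        int mod100 = Math.abs(count) % 100;
        int mod10 = mod100 % 10;
        if (mod100 >= 11 && mod100 <= 14) return "many"; // 11-14 override
        if (mod10 == 1) return "one";
        if (mod10 >= 2 && mod10 <= 4) return "few";
        return "many";
    }

    public static void main(String[] args) {
        System.out.println(form(1));  // one
        System.out.println(form(3));  // few
        System.out.println(form(11)); // many
        System.out.println(form(21)); // one
    }
}
```

In practice you would let a library (such as ICU's plural rules) do this per language rather than hand-coding every locale.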

Tone, context and string length

In most cases where strings are translated as short snippets, there is a lot of interpretation that can be applied to each translation. Words rarely have exactly the same meaning when translated into other languages and can have additional connotations. A lot of the time you will use language in your app that conveys your app’s personality and will want to preserve that tone in all languages.

An important catch to look out for is re-using the same string in different places. The problem you may run into there is that the wording may be the same in English for both cases, but other languages may require different phrasing due to the variation in context.

Working on mobile projects you will need to pay extra attention to string length. Screen space will be at a premium and you will need to ensure your text snippet can fit into the space allocated. In many languages, especially for some technical terms, you may not have a convenient direct translation and what may be a short word in one language can end up being a full sentence in another.

Our solution

At Badoo we have an in-house localisation team, with translators for all our top markets based in the office full-time. Our team members translate and test content, and also work closely with developers to continuously improve our in-house translation system, and address language-related issues.

Dashboard

The screen above is the main client side interface our developers use when adding a new string to the localisation system. The very first text input contains the key we use to look up the snippet. As you can see we try to keep the key names as descriptive as possible. It should be fairly obvious from the key name what it is and where it is used.

To get around gender-specific grammar rules in different languages we use different keys for references to male and female people. While the original strings will be exactly the same when in English, in many languages there will be differences and this is the easiest way to take them into account. The translated text snippet is a simple template that can accept parameters as inputs. For example this could be the name of the person referred to in the string. To give the translators some additional context we also include a screenshot of the screen where the translated strings will be inserted.
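
As an illustration of the gender-split keys, a pair of entries might look like this (the key names and placeholder syntax below are invented for this example, not our actual system):

```
notification.liked_you.male   = "{name} liked your profile"
notification.liked_you.female = "{name} liked your profile"
```

In English both templates are identical, but in languages with gendered grammar the two translations will differ, which is why the keys are kept separate.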

As a precaution against truncation, and to ensure that the translated text will be displayed within the allocated space, we also specify a maximum length for the string. The translated text in the app is manually checked to ensure it fits well and works in context. Where it's not possible to create a sensible translation within the limit, a layout adjustment may be required. To catch problems on smaller screen sizes, we mostly test translations on small-screen ldpi devices/emulators.

We generate unique language files for each app and platform on our network. To keep the size of these to a minimum you can specify which app/platform files need to have the particular key included.

As a final step, translation managers kick off a build and deploy a new version of the app to the test devices so that devs, testers and translators can see the latest version in action. For formats, units and number-dependent pluralisation, our solutions are server based.

To recap on the main points:

  • Start by extracting all strings from your app
  • Pay attention to number formats, units and plural forms in your translation
  • Not all languages are read from left to right
  • Remember that the translation may vary depending on the gender of the person in the text
  • Make sure translated strings fit the context, such as tone and space available

Finally, the purpose of localisation is to make all users feel like first class citizens in your app irrespective of their language and location. Often, that requires taking extra steps that may not be immediately obvious, but we can say from 7 years of experience that it’s well worth the effort.

A page control with style

When we set out to design the new Hot or Not version, our designers and developers came up with a really nice way to hint to users that the photos they scroll through horizontally can also be seen as a grid.

animated

Since this is rarely done in iOS apps, we would like to share it with the community. You can find the control here.

Technically, the component is very simple, but we set out to develop its internal logic in a TDD fashion, so what would normally be implemented as a single class is actually split into two for testability purposes: a control and a driver.

Android Handler Memory Leaks

Android uses Java as a platform for development. This helps us with many low level issues including memory management, platform type dependencies, and so on. However we still sometimes get crashes with OutOfMemory. So where’s the garbage collector?

I’m going to focus on one of the cases where big objects in memory can’t be cleared for a lengthy period of time. This case is not ultimately a memory leak - objects will be collected at some point - so we sometimes ignore it. This is not advisable as it can sometimes lead to OOM errors.

The case I’m describing is the Handler leak, which is usually detected as a warning by Lint.

Basic Example

Basic Code Sample

This is a very basic activity. Notice that this anonymous Runnable has been posted to the Handler with a very long delay. We'll run it and rotate the phone a couple of times, then dump memory and analyze it.
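
The sample is shown as an image in the original post; reconstructed as a sketch, it looks roughly like this (the view IDs, layout name and exact delay are assumptions):

```java
public class SampleActivity extends Activity {

    private final Handler handler = new Handler();

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_sample);
        final TextView textView = (TextView) findViewById(R.id.text);
        // The anonymous Runnable holds an implicit this$0 reference to the
        // Activity and sits in the message queue for the whole delay.
        handler.postDelayed(new Runnable() {
            @Override
            public void run() {
                textView.setText("Done");
            }
        }, 1000 * 60 * 10); // very long delay: 10 minutes
    }
}
```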

Analyse HPROF

We have seven activities in memory now. This is definitely not good. Let’s find out why GC is not able to clear them.

The query I made to get a list of all Activities remaining in memory was created in OQL (Object Query Language), which is very simple, yet powerful.
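
As an illustration, an OQL query listing all Activity instances in MAT can be as simple as:

```sql
SELECT * FROM INSTANCEOF android.app.Activity
```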

Analyse HPROF

As you can see, one of the activities is referenced by this$0. This is an indirect reference from the anonymous class to the owner class. this$0 is referenced by callback, which in turn is referenced by a chain of Message.next references leading back to the main thread.

Any time you create a non-static class inside the owner class, Java creates an indirect reference to the owner.

Once you post a Runnable or Message to a Handler, it is stored in the list of Message commands referenced from the LooperThread until it is executed. Posting a delayed message is a clear leak for at least the duration of the delay. Posting without a delay may cause a temporary leak as well, if the queue of messages is large.

Static Runnable Solution

Let's try to overcome the memory leak by getting rid of this$0, converting the anonymous class to a static one.

Static runnable code

Run, rotate and get the memory dump.

Analyse static runnable HPROF

What, again? Let’s see who keeps referring to Activities.

Analyse static runnable HPROF

Take a look at the bottom of the tree - the activity is kept alive via the mContext reference inside the mTextView of our DoneRunnable class. So using static inner classes alone is not enough to overcome memory leaks. We need to do more.

Static Runnable With WeakReference

Let’s continue using iterative fixes and get rid of the reference to TextView, which keeps activity from being destroyed.

Static runnable with weak reference

Note that we now keep a WeakReference to the TextView. Let's run, rotate and dump memory again.

Be careful with WeakReferences: they can become null at any moment, so first resolve them to a local variable (hard reference) and then check for null before use.
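
The code image above is not reproduced here; the DoneRunnable class mentioned in the text looks roughly like this (the class and field names follow the article, the rest is reconstruction):

```java
private static class DoneRunnable implements Runnable {

    private final WeakReference<TextView> mTextView;

    DoneRunnable(TextView textView) {
        mTextView = new WeakReference<>(textView);
    }

    @Override
    public void run() {
        // Resolve the weak reference to a local hard reference first,
        // then null-check before use.
        TextView textView = mTextView.get();
        if (textView != null) {
            textView.setText("Done");
        }
    }
}
```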

Analyse static runnable with weak reference HPROF

Hooray! Only one activity instance. This solves our memory problem.

So for this approach we should:

  • Use static inner classes (or outer classes)
  • Use WeakReference to all objects manipulated from Handler/Runnable

If you compare this code to the initial code, you might find a big difference in readability and clarity. The initial code is much shorter and much clearer: you see straight away that the text in textView will eventually be changed to 'Done'. No need to browse the code to realise that.

Writing this much boilerplate code is very tedious, especially if postDelayed is set to a short time, such as 50ms. There are better and clearer solutions.

Cleanup All Messages onDestroy

The Handler class has an interesting feature - removeCallbacksAndMessages - which can accept null as an argument. It will remove all Runnables and Messages posted to a particular handler. Let's use it in onDestroy.

Remove callbacks code
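
The code image above boils down to one extra override in the Activity (a sketch; the handler field name is an assumption):

```java
@Override
protected void onDestroy() {
    super.onDestroy();
    // Passing null removes ALL pending Runnables and Messages
    // from this handler's queue.
    handler.removeCallbacksAndMessages(null);
}
```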

Let’s run, rotate and dump memory.

Analyse remove callbacks HPROF

Good! Only one instance.

This approach is way better than the previous one, as it keeps the code clear and readable. The only overhead is remembering to clear all messages on activity/fragment destroy.

I have one more solution which, if you’re lazy like me, you might like even more. :)

Use WeakHandler

The Badoo team came up with the interesting idea of introducing WeakHandler - a class that behaves like a Handler, but is way safer.

It takes advantage of hard and weak references to get rid of memory leaks. I will describe the idea in detail a bit later, but let’s look at the code first:

WeakHandler code
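
The WeakHandler version (shown as an image in the original post) is, roughly, the initial sample with one type changed (view IDs and the delay remain assumptions):

```java
import com.badoo.mobile.util.WeakHandler;

public class SampleActivity extends Activity {

    // Keep a hard reference to the WeakHandler: once the Activity is
    // collected, its pending messages become collectable too.
    private final WeakHandler handler = new WeakHandler();

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_sample);
        final TextView textView = (TextView) findViewById(R.id.text);
        handler.postDelayed(new Runnable() {
            @Override
            public void run() {
                textView.setText("Done");
            }
        }, 1000 * 60 * 10);
    }
}
```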

Very similar to the original code apart from one small difference - instead of using android.os.Handler, I’ve used WeakHandler. Let’s run, rotate and dump memory:

Analyse remove callbacks HPROF

Nice, isn’t it? The code is cleaner than ever, and memory is clean as well! :)

To use it, just add dependency to your build.gradle:

repositories {
    maven {
        url 'https://oss.sonatype.org/content/repositories/releases/'
    }
}

dependencies {
    compile 'com.badoo.mobile:android-weak-handler:1.0'
}

And import it in your java class:

import com.badoo.mobile.util.WeakHandler;

Visit Badoo's github page, where you can fork it or study its source code: https://github.com/badoo/android-weak-handler

WeakHandler. How it works

The main aim of WeakHandler is to keep Runnables/Messages hard-referenced only while the WeakHandler itself is hard-referenced. Once it can be GC-ed, all its messages should go away as well.

Here is a simple diagram that demonstrates differences between using normal Handler and WeakHandler to post anonymous runnables:

WeakHandler diagram

Looking at the top diagram, Activity keeps a reference to Handler, which posts a Runnable (puts it into the queue of Messages referenced from the Thread). Everything is fine except the indirect reference from the Runnable to the Activity. While the Message is in the queue, the whole graph can't be garbage-collected.

By comparison, in the bottom diagram Activity holds a WeakHandler, which keeps a Handler inside. When we ask it to post a Runnable, it is wrapped into a WeakRunnable and posted. So the Message queue keeps a reference only to the WeakRunnable. The WeakRunnable keeps a weak reference to the desired Runnable, so the Runnable can be garbage-collected.

Another little trick is that WeakHandler still keeps a hard reference to the desired Runnable, to prevent it from being garbage-collected while WeakRunnable is active.

The side-effect of using WeakHandler is that all messages and runnables may not be executed if the WeakHandler has been garbage-collected. To prevent that, just keep a reference to it from the Activity. Once the Activity is ready to be collected, the whole graph including the WeakHandler will be collected as well.

Conclusions

Using postDelayed in Android safely requires additional effort. To achieve it we came up with three different methods:

  • Use a static inner Runnable/Handler with WeakReference to owner class
  • Clear all messages from Handler in onDestroy of Activity/Fragment
  • Use WeakHandler from Badoo as a silver bullet

It's up to you to choose your preferred technique. The second seems very reasonable, but needs some extra work. The third is my favourite, obviously, but it requires some attention as well: WeakHandler should not be used without a hard reference from outside.

Building a maintainable bi-directional cross platform protocol

Today I gave a talk alongside Pavel Dovbush at #jsconfeu about our experiences building a cross-platform API abstraction based on protobuf.

I’m attaching the slides here, for those who are interested.

Keep a look out for a further post about the conference with some thoughts on some of the other awesome presentations.

Deobfuscating HPROF memory dumps

According to Crittercism 1, the second most common crash reported in Android apps is java.lang.OutOfMemoryError, so it stands to reason that analyzing these crashes should be one of the top priorities for any Android developer. If you are analyzing memory dumps from a debug build or if you are not using obfuscation this process is fairly straightforward. However, if your heap dump is coming from an app built using obfuscation (Proguard or Dexguard) you are in for quite a challenge (or at least you were, until now).

In the image below you can see a typical obfuscated instance dump in Eclipse Memory Analyzer (MAT), where most of the field names have been replaced with indecipherable one-character names.

Figure 1: Before deobfuscation

Before deobfuscation

Can we do anything about this then? Well, if you have the mapping files you could look up each symbol to figure out the name of the field and its value, but it would be an extremely time-consuming process. This article will outline a much more efficient and automated process to deobfuscate a HPROF heap dump. The end result of this process is shown in the image below. When compared to the first image it makes it much clearer what fields and values we are trying to analyze.

Figure 2: After deobfuscation

After deobfuscation

HPROF File Format

An HPROF file contains a Java heap dump taken at a given time. It is a VM-independent format (dumps can be taken from most JVMs) which means that the content of the file is not a byte-by-byte copy of the actual Java heap. The content includes (but is not limited to):

  • List of all classes loaded by the class loader
  • All strings
  • Class definitions (including constant values, static field values and instance field declarations, but no information about methods)
  • Instance dumps (containing values of all instance fields associated with the object)
  • Heap roots, sticky objects, stack frames and stack traces

As mentioned, the HPROF file does not contain an exact copy of the heap. One interesting piece of information that is omitted is the actual physical location in memory of heap objects. This means that we cannot accurately calculate how fragmented the heap is, a condition that on Android can lead to OutOfMemoryErrors even when there is memory available. The reason for this is most likely that Sun's JVM has supported compacting garbage collection2 since a very early version, while Android only plans to include this support in the upcoming Android L release.

HPROF files from Android (Dalvik) also contain several non-standard records. These records must either be converted to standard records or discarded before the file is read by any standard HPROF memory analyzer. These extensions are not documented and to be fully understood would require some digging into the Dalvik source code (comments are welcome on this topic!)

ProGuard/DexGuard Obfuscation

ProGuard and DexGuard can perform several types of obfuscations and optimizations on your app, but there are two in particular that affect memory dumps.

  • Renaming of classes and fields
  • Reuse of strings for field names

The first type of obfuscation is fairly straightforward. Class names and field names are simply replaced with a (shorter) unreadable string. The second type, though, requires a bit of background on how strings are handled in HPROF files in order to be explained clearly.

In the HPROF class definitions you won't find the actual strings of the class or field names. Instead they contain a string identifier (usually a 4-byte ID that uniquely identifies the string). If two strings have the same value they will also have the same string ID.

When fields are obfuscated, the obfuscated names are reused across classes. This means that two classes (A and B) which before obfuscation had fields with different names (say A.x and B.y) now have fields with the same name (A.q and B.q). As mentioned previously, this means that the name fields in both class definitions will have the same string ID.

As can be seen in the next part, this will complicate things when trying to deobfuscate the file.

Deobfuscating a HPROF File

The deobfuscation performed by the deobfuscator tool can be broken down into four steps:

  1. Read mapping file (generated by ProGuard or DexGuard during the build).
  2. Read HPROF file to find all strings used as class and field names.
  3. Use mapping to look up the deobfuscated names for classes and fields.
  4. Write an updated HPROF file.

The first step is done using ProGuard’s proguard-base library which reads and processes the mapping file.
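
For reference, a ProGuard/DexGuard mapping file lists original and obfuscated names side by side, along these lines (the class and field names here are invented for illustration):

```
com.badoo.example.ChatPresenter -> com.badoo.example.a:
    android.widget.TextView mTitleView -> a
    int mUnreadCount -> b
```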

In the second step we are using the hprof-lib library (part of the source) to read the input HPROF file. Of all the data contained in the file we are only concerned with these records:

  • STRING: contains the ID and string value of one string
  • LOAD_CLASS: contains a record that a class is loaded by the VM
  • CLASS_DUMP: contains the definition of a class, including the name of the class, lists of constants, named static fields and named instance fields

When reading the field declarations of the class definitions an additional step is completed: deduplication of strings. As mentioned in the previous section about ProGuard/DexGuard obfuscation, fields that previously had unique names are made to share the same name after obfuscation. This means that in order to deobfuscate each field correctly we need to create copies of the strings and then deobfuscate each one separately. The table below attempts to explain this.

Deduplication and deobfuscation

The output from the second step is a list of all strings and class definitions for all loaded classes, with any field affected by the string deduplication updated.

In the third step we first process all class names to see if they have a corresponding entry in the mapping read in the first step. If they have, the entry in the list of strings is updated to reflect the new name.

After this we proceed to process the fields of each class (the class names must be done first since the field mapping is based on the original class names). Using the same lookup as for the class names we then update the field name string entries.

In the fourth and last step we write the HPROF output file. This is done by reading the input HPROF file record by record, either copying records that are unchanged or replacing those that need to be updated (STRING and CLASS_DUMP).

Due to the increased number of strings (and increased length of them) the output file is slightly larger than the input file.

Using the deobfuscator application

Source code and builds for the deobfuscator application are available here: https://github.com/badoo/hprof-deobfuscator

First, make sure that you have downloaded the most recent release of the deobfuscator from our Github page, then execute the following command from the command line:

java -jar deobfuscator-all-x.y.jar {mapping file} {obfuscated hprof file} {output hprof file}

If everything goes well you can now open the output file in the memory analyzer of your choice.

References

  1. Crittercism presentation at Droidcon Berlin 2012 ( http://www.slideshare.net/crittercism/crittercism-droidcon-berlin-2012 )
  2. http://en.wikipedia.org/wiki/Mark-compact_algorithm
