October 18, 2019

How we code once and ship to both iOS and Android devices

While development is being feverishly completed on the next version of the app, I wanted to take a moment to take you on a stroll through our technological landscape. It's:

  • API-first, built for speed and adaptability
  • One code base, two app stores: we write it once and release it on iOS and Android
  • Locked-down user credentials, built with security in mind
  • Expands our big data insights
  • One team that can work across any part of the technical stack

I'll share how we developed that architecture and how we work in this post.

In The Beginning There Was Nothing

Knowing where to start with any piece of development is sometimes half the battle. There are many solutions and tools out there to utilise: how do you choose between them, which pieces do you start with, and how do you manage the dependencies?

One of the joys of working with a development agency, as we do with Newicon, is the benefit of leveraging an existing code base. Rather than having to review every possibility, you start with an existing system, make sure that it's suitable and then work forwards.

The Backend

Our starting point was the language the business logic would be written in: PHP.

A language on its own is not enough, so we enhance it with a framework: Yii2. Our agency Newicon chose Yii2 because:

It’s well established, quick, and intuitive to learn, and has good documentation.

The choice of specific framework isn’t as important as the decision to use one in the first place. Having an imposed structure, security standards, and a best practice approach gives you the freedom to focus on your development, rather than the hygiene factors around running a service.

We also have an additional layer of logic which our agency had added. This provides extra flexibility for common tasks and database interactions. The added layer of logic helps translate our server side into administrative screens and API endpoints.

This layer on top of Yii2, called Neon, allows rapid development of web applications by providing generic functionality which is required for the applications Newicon develop, as it isn’t provided as standard by the framework.
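Neon itself is PHP built on Yii2, but the generic-CRUD idea behind it can be sketched in a few lines of Python. Everything here, the function name, the model fields, the handler shape, is illustrative, not taken from Neon's actual API:

```python
# Hypothetical sketch of the "generic functionality" idea behind a layer
# like Neon: derive working endpoints from nothing but a model definition.

def make_crud(model_name, fields):
    """Return generic create/list handlers for a model described by its fields."""
    store = []

    def create(payload):
        # Keep only the declared fields, the way a generator derives
        # forms and endpoints from the model definition alone.
        record = {f: payload.get(f) for f in fields}
        record["id"] = len(store) + 1
        store.append(record)
        return record

    def list_all():
        return list(store)

    return {"create": create, "list": list_all}

# One definition yields both an admin-style create screen and a list endpoint.
users = make_crud("user", ["name", "email"])
users["create"]({"name": "Ada", "email": "ada@example.com"})
```

The pay-off is the same as with Neon: each new model gets its admin screens and API endpoints for free, rather than being hand-written.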

To help us deploy in a reliable fashion we use Apache as our server software. In the future we may migrate this across to Nginx but for now we have a solid setup.

Apache and PHP flow from Washington University https://classes.engineering.wustl.edu/cse330/index.php?title=File:PHP-Apache_Flowchart.png

MySQL is the go-to solution when you need a flexible, relational database. However, the software isn't as important as the storage engine, and we use a combination of engines. Our database is a mix of reference tables and raw data, so we use a mix of InnoDB and MyISAM. From Newicon:

InnoDB locks at the row level and MyISAM locks at the table level. In practice this means MyISAM is useful when doing lots of reads and only occasional writes; InnoDB is generally better in most other circumstances.
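In MySQL the engine is chosen per table via the `ENGINE` clause in the DDL. A small sketch of how that mix looks in practice, with hypothetical table and column names rather than our real schema:

```python
# Illustrative only: per-table engine choice in MySQL DDL.
# Table and column names here are made up for the example.

def create_table_sql(name, columns, engine):
    """Build a CREATE TABLE statement with an explicit storage engine."""
    cols = ", ".join(f"{c} {t}" for c, t in columns)
    return f"CREATE TABLE {name} ({cols}) ENGINE={engine};"

# Reference data: read-heavy, rarely written -> MyISAM (table-level locks are fine)
ref_sql = create_table_sql("sector", [("id", "INT"), ("label", "VARCHAR(64)")], "MyISAM")

# Frequently updated raw data -> InnoDB (row-level locking under concurrent writes)
raw_sql = create_table_sql("price", [("id", "INT"), ("value", "DECIMAL(10,2)")], "InnoDB")
```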

Finally there's the server's OS. For this we use a server flavour of Linux: Debian. As well as using a cloud hosting solution, we also virtualise this machine, giving us two different ways to scale our setup.

Everything we have discussed and covered is really focused around two key concepts: speed and adaptability.

One question that is often asked is around scale - how the code and the product will scale. While the product is still growing and we continue to make large changes, building for maximum scale can slow down development. This is why we have started with a monolithic approach, but we're doing so in the knowledge that in the future it will be broken down into microservices.

Quick diagram of what the difference is between the two approaches, by Dev.to https://dev.to/alex_barashkov/microservices-vs-monolith-architecture-4l1m

Having the team and structure in place to support microservices is a critical consideration. While the product still evolves and with a limited team, it's important we weigh up the approach and pick what the business needs, but also what can realistically be supported.

The Devices

In previous write-ups I have mentioned that our iOS and Android apps share a code base: we build a web app once and deploy it to both stores.

The web part of the app is developed using AngularJS. Angular gives us the benefit of structured JavaScript code, organised in a similar fashion to the backend code. That helps to reduce the learning curve when moving between the two languages. It also offers fantastic support regardless of browser, making it an ideal candidate for writing JavaScript for mobile applications!

For the MVP of Genuine Impact we used Vue.js. However, AngularJS's greater third-party tool support made us switch.

Quote from Newicon on switching JavaScript frameworks.

We use Cordova and Ionic to access the unique properties of iOS and Android without having to support two separate code bases. It gives us the benefit of only having to write our code in one way.

Cordova packages your HTML, JavaScript, and CSS into native apps, enabling them to run on Android, iOS, and other devices. However, this doesn't give you the native look and feel.

Here is where Ionic comes into play. It's the experience you expect from a mobile app. Having access to the native features is one part of the puzzle, making them feel like a native feature is the finishing touch.

Ionic's post about Cordova https://ionicframework.com/resources/articles/what-is-apache-cordova

Together, these tools let us write our app like a website, package it up as a native app, and make it look and feel like a native app. This means a faster experience, more of the functionality that you would expect from an app, and less code that we need to write and maintain!

The Brains

Even with a super smart app, and a very flexible backend powering it, we are still missing a key component of Genuine Impact: our ranking algorithm.

Algorithms can be an extremely tricky business. It's easy to over-complicate them, to engineer them to be inflexible, or, worst case, to leave them completely unmaintainable.

The key requirements for our algorithm, in terms of how it is written, are:

  • Easy to expand and maintain, the code should be "self documenting"
  • Easy to deploy into any environment or server
  • The only knowledge required to understand the code should be advanced mathematics or statistics
  • Separation of data collection and calculation, which is modularised

The end result is a collection of Python scripts. Python was selected because of:

  • Its flexible structure
  • The ease with which we can deploy it wherever we need to
  • Non-programmers with a mathematical background can understand the flow and contribute
  • We can easily break apart the business flow using a wide selection of storage solutions

Python also has APIs for NoSQL databases like MongoDB, and for all major providers of cloud storage.

The process starts by using API calls to work out the universe of securities we are working with. From there we download the data points we need in batches. We now have a complete set of raw data for all of the securities we are assessing.
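The batched-download step looks roughly like the sketch below. `fetch_datapoints` is a stand-in for the real third-party API call, which isn't shown here; the batching helper is the actual point of the example:

```python
# A minimal sketch of the batched-download phase. The real data provider
# API is abstracted behind `fetch_datapoints`, a hypothetical callable
# that takes a list of tickers and returns {ticker: data}.

def batches(items, size):
    """Yield successive chunks of `items`, each of at most `size` elements."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def download_all(tickers, fetch_datapoints, batch_size=100):
    """Collect raw data for the whole universe, one API call per batch."""
    raw = {}
    for chunk in batches(tickers, batch_size):
        raw.update(fetch_datapoints(chunk))
    return raw
```

Batching keeps us inside provider rate limits and means a failed call only re-fetches one chunk, not the whole universe.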

The second process then kicks in: the calculation and ranking phase. It produces the output for us, which then gets uploaded to the backend service.
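The shape of that second phase can be sketched as below. The real scoring logic is far richer and is not reproduced here; the `score` function and the sample data are purely illustrative:

```python
# Hypothetical sketch of the calculation-and-ranking phase: score each
# security from its raw data, then rank by score. Only the pipeline
# shape is real; the scoring rule here is a placeholder.

def rank(raw_data, score):
    """Map each security name to its rank (1 = best) under `score`."""
    scored = {name: score(data) for name, data in raw_data.items()}
    ordered = sorted(scored, key=scored.get, reverse=True)
    # Output: name -> rank, ready to upload to the backend service
    return {name: pos + 1 for pos, name in enumerate(ordered)}

ranks = rank(
    {"AAA": {"quality": 0.9}, "BBB": {"quality": 0.4}},
    score=lambda d: d["quality"],
)
```

Keeping collection (phase one) and calculation (phase two) as separate modules means either side can change, a new data provider, a new scoring rule, without touching the other.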

We also selected Python for its future prospects - as we enhance the algorithm and introduce more machine learning elements, we can easily hook into third-party solutions like Google's Cloud ML Engine.

The next idea in the brains category is the big one that no technical write-up is complete without a mention of: our big data solution.

Image from Google on their Data Lake page https://cloud.google.com/solutions/data-lake/

We are in the process of piping all of our data via Stitch into Google's BigQuery. This is going to leave us with a massive data lake: lots of raw, untreated data which needs to be organised and made sense of.

User Credentials?

Then there's the user database, the most securely locked vault of them all. We aren't a financial provider ourselves, unlike banks or brokers, but that doesn't mean we shouldn't take security just as seriously. Plus, who knows where the future will take us!

Firebase, run by Google, allows us to keep user passwords and credentials out of our hands entirely. It also opens the door to third-party logins via Facebook, Google, Twitter, or even a mobile number. It removes that complexity from us, and means we know from day one that our security is starting off on the right foot.
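The practical effect on the backend is that we only ever handle a decoded token, never a password. A minimal sketch of that idea, assuming the token has already been verified (in production that verification is done by Firebase tooling such as the Admin SDK, which is not shown here):

```python
# Illustrative only: the backend receives a decoded, already-verified
# ID token and keeps just the stable user id (and email, if present).
# Raw credentials never reach our systems at any point.

def user_record_from_token(decoded_token):
    """Extract only the fields we key our own data against."""
    return {
        "uid": decoded_token["uid"],          # Firebase's stable user identifier
        "email": decoded_token.get("email"),  # optional, may be absent
    }

record = user_record_from_token(
    {"uid": "abc123", "email": "a@b.com", "iat": 1571356800}
)
```

Everything else in the token (issue time, provider claims, and so on) is deliberately dropped: the less we store, the less we can leak.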

Google Developers on storing data in Firebase so that all we see is a token, and thus never get access to the secure credentials https://medium.com/google-developers/controlling-data-access-using-firebase-auth-custom-claims-88b3c2c9352a

Going Beyond

It can feel like a lot, reading through how the different pieces of our technology fit together and why we use each component. However, the setup is very straightforward and designed to adapt. I don't believe the architecture would look the same if I repeated this write-up in three years' time.

Technologies and languages get better over time as more people contribute and the community grows; the business needs, however, are far more likely to reprioritise and evolve. This is where I see the modular approach that's inherent in our design coming into play.

Peerbits post on planning mobile architecture https://www.peerbits.com/blog/all-about-app-architecture-for-efficient-mobile-app-development.html

Even without a microservice structure we can still architect our code to expect changes. Creating layers/interfaces which allow our developers to always deal with the same output regardless of what is happening behind the scenes is the secret to this.
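That layering idea can be sketched in a few lines. The class and method names below are illustrative, not taken from our code base; the point is only that callers always see the same shape of result, whatever sits behind the interface:

```python
# A minimal sketch of the layers/interfaces idea: every backing
# implementation returns the same output shape, so swapping one
# for another leaves all callers untouched. Names are hypothetical.

class SecurityStore:
    """Interface every backing implementation must satisfy."""
    def get_rank(self, ticker):
        raise NotImplementedError

class InMemoryStore(SecurityStore):
    """Simple implementation used here as a stand-in for a real data source."""
    def __init__(self, ranks):
        self._ranks = ranks

    def get_rank(self, ticker):
        # The stable contract: always {"ticker": ..., "rank": ...}
        return {"ticker": ticker, "rank": self._ranks[ticker]}

# Replacing InMemoryStore with, say, a BigQuery-backed store changes
# nothing for code that only depends on the SecurityStore interface.
store = InMemoryStore({"AAA": 1})
```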

I hope this has been an enlightening trip across our technical stack and you understand a little bit more about how we work as well as why!