Stevie Graham

As you may or may not know, banks generally do not provide third-party developers with API access. This is because giving developers easy API access to customer accounts means actual competition and ultimately compressed margins. In fairness to banks, building a new API channel costs a lot of money, and if all it does is increase competitive pressure, why would you spend any time or money on it? It’s a rational response given their incentives.

Despite this, people still want to connect their bank accounts to services they trust, and companies still want to build those services.

So, where does this leave us? Thankfully, the market has stepped in to provide solutions that enable us all to connect trusted apps to our financial accounts. Despite this, banks still actively block third-party access by blocking their traffic.

All IP Addresses Are Not Created Equal

The way we have solved this is to route our financial institution traffic onto the public internet via mobile phone carrier networks.

The great thing about carrier IP ranges is that carriers have significantly more customers than they have IP addresses, meaning public internet breakout is heavily NAT-ed, i.e. a single address is shared by many customers simultaneously. The other great thing is that there is a good chance you’re on mobile data when you use your bank’s mobile app, so your IP address is in the carrier’s range too.
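You can see the sharing for yourself by checking the public IP a device presents while on mobile data. Here’s a minimal Elixir sketch using the OTP :httpc client and the public echo service ifconfig.me; neither is part of our stack, they’re just convenient for illustration:

```elixir
# Print the egress IP this device presents to the internet. Run it on a
# machine tethered to mobile data and the address you see is typically
# shared with many other subscribers behind the carrier's NAT.
:inets.start()

{:ok, {{_, 200, _}, _headers, ip}} =
  :httpc.request(:get, {~c"http://ifconfig.me/ip", []}, [], [])

IO.puts("egress IP: #{ip}")
```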

By sending our traffic onto the internet from the same IP addresses shared by millions of a bank’s own customers using its mobile app, we make it significantly more difficult to identify and subsequently block our traffic, and we increase the collateral damage of any hostile action a bank might take against us and our users: blocking those addresses means erroneously blocking its own customers using its mobile banking app.
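In client terms, the idea boils down to “send bank-bound requests through a proxy whose egress interface is an LTE modem.” A hedged sketch of what that looks like using HTTPoison’s proxy option; the module name and the edge-node address are invented for illustration, not our actual client code:

```elixir
defmodule BankClient do
  @doc "Fetch a URL so the bank sees a carrier IP, not a cloud IP."
  def get(url) do
    {host, port} = carrier_proxy()
    HTTPoison.get(url, [], proxy: {host, port}, recv_timeout: 10_000)
  end

  # Hypothetical: a real implementation would pick a healthy edge node
  # from a pool rather than hard-coding one.
  defp carrier_proxy, do: {"edge-node.internal", 8080}
end
```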

[Diagram: cloud traffic is easily detectable and trivial to block; traffic routed via carrier networks passes through undetected]

Until recently we used a third-party provider for mobile carrier network transit, but suddenly, without warning, its performance and availability degraded to unacceptable levels. Requests occasionally took 20-30 seconds to complete. A single Teller API transaction might involve several requests to the financial institution, and even if we can parallelize some of them, it’s a disaster for us if any one of them takes 30 seconds.

Teller provides live access to financial accounts. When you request an account balance, Teller synchronously fetches that data live from the financial institution and returns it to you. Fast, reliable network access is an absolute must for us to provide that level of access. Other providers can get away with worse network performance because they never actually return live data in an API call: they poll the institution a couple of times a day and give you the most recent data they have when you make your API call.
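Because the fetch is synchronous, every millisecond of proxy overhead lands directly in our API response time. As a sketch of the constraint, assuming hypothetical fetch_accounts/0-style functions standing in for the individual institution calls, you fan them out concurrently and hold the whole batch to a hard deadline:

```elixir
# Illustrative only: one Teller API call may need several institution
# requests. Run them concurrently and enforce an overall time budget;
# a single 30-second proxy hop makes this deadline impossible to meet.
[&fetch_accounts/0, &fetch_balances/0, &fetch_transactions/0]
|> Enum.map(&Task.async/1)
|> Task.await_many(5_000)   # total budget in milliseconds
```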

We immediately began to design and build an in-house solution to solve this problem once and for all.

Introducing Telnet

Telnet is our proprietary mobile carrier proxy network. The name is a portmanteau of Teller Network but, if we’re honest, it began as an internal joke, as it’s built on top of SSH, the remote access protocol that obsoleted the original Telnet.

Telnet is composed of a large number of edge nodes: single-board Linux computers with LTE modems attached, running our own software written using Nerves. When a node boots, it reverse-SSHes into our network and registers itself as available to route API traffic. Our infrastructure then routes our financial institution traffic via the Telnet edge nodes, egressing onto the internet on carrier IP ranges.
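As a rough sketch of that boot sequence (in Elixir, since Nerves firmware is Elixir), here’s what “reverse SSH in and register” could look like using the Erlang/OTP :ssh client. The gateway hostname, key path, local proxy port, and registration step are all invented for illustration; this is not our production code:

```elixir
defmodule Telnet.EdgeNode do
  use GenServer
  require Logger

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(_opts) do
    {:ok, _apps} = Application.ensure_all_started(:ssh)

    # Dial home from the LTE side of the network.
    {:ok, conn} =
      :ssh.connect(~c"gateway.telnet.internal", 22,
        user: ~c"edge",
        user_dir: ~c"/data/ssh",        # this node's key pair lives here
        silently_accept_hosts: true
      )

    # Reverse tunnel (the `ssh -R` pattern): ask the gateway to listen on
    # a port of its choosing and forward those connections back to the
    # local proxy on this node, which egresses via the LTE modem.
    {:ok, remote_port} =
      :ssh.tcpip_tunnel_from_server(conn, ~c"127.0.0.1", 0, ~c"127.0.0.1", 8080)

    # A real node would register with a control plane here so the
    # infrastructure knows it can route traffic through this port.
    Logger.info("edge node available on gateway port #{remote_port}")

    {:ok, %{conn: conn, remote_port: remote_port}}
  end
end
```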

[Chart: request latency for Vendor A vs. Telnet vs. direct requests]

It works amazingly well. Not only have we cut the latency overhead to the bone; according to our logs, requests failing due to proxy errors have become a thing of the past too.

Credit goes to the team for shipping this so quickly. They went from a bare git repo to a production deployment of a fleet of embedded devices with OTA software updates in a matter of weeks. I’m very proud of them.

Follow @teller for a future blog post on how we built Telnet.

Think this is cool? We’re hiring.