Proxying ADB Client Connections

Creating an ADB proxy that supports port forwarding between device(s) and remote host


Since you've opened this article, I assume your development work somehow reaches into the world of Android (perhaps only peripherally, as was the case for me). So you are probably familiar with the Android Debug Bridge (ADB), a tool used for communicating with an Android device over USB or TCP. It handles everything from pulling logs and pushing binaries to port forwarding.

If you’re like me, you never thought much about how the ADB tool works. That is, until the fateful day I tried to Do The Right Thing™️.

Story Time

Pictured: A young me beginning my journey at Amazon

When I first began at Amazon, I was on a team responsible for building a browser. Now for those that do not know, browsers can be quite complicated beasts. As such, we had a large distributed build system that ran our browser builds through an enormous number of tests. These tests ran on basically every Amazon device type in existence (yes, even the infamous “Fire Phone”). One problem, though: Amazon Kindle teams did not release Android Virtual Device (AVD) images for these devices (and AVDs are not always faithful emulations of real devices anyway). So we needed real Android devices plugged into real computers.

This meant my team had to host our own mini server room inside our corporate office building. The distributed build system consisted of 150+ physical servers connected to hundreds of Amazon devices (ironically, my team fell under the “AWS” umbrella 😅). This didn’t make us the most popular team in the building, because as it turns out, stuffing hundreds of devices, all hammering through tests 24/7, into a single room can heat up an office just a bit. Suffice it to say, we were not one of the cool teams (the Amazon Twitch engineers were top of that list on account of their light-up Razer laptops and purple sweatshirts).

Maybe I’m being dramatic but I remember it looking a lot like this…

Apart from making us outcasts, the setup brought its own class of problems. It was a brave new world dealing with the array of hardware and networking failures that came from such a scrappy arrangement. Not to mention that my team at the time had dwindled to me and one other young engineer. So one day I drew up a plan to move our distributed build system off physical hardware, dreaming of the fateful day we could come running back into the good graces of AWS, having defeated our reliance on physical hardware once and for all.

Quick digression — the plan wouldn’t entirely rid us of physical hardware (after all, we still required testing against physical Android devices), but it would greatly reduce our need for beefy, physical build servers. This new setup would move the entire distributed build system to EC2, and require only a few physical hosts with as many Android devices stuffed into their USB ports as physically possible. And since these physical hosts just proxied connections from EC2 to the Android devices, they were much simpler to maintain and replace.

Anyway, I went about creating a prototype for my plan. I’ll be honest, the exact details aren’t very interesting. But that is mainly because ADB made this migration very easy to accomplish.

What Is ADB?

Basically, the Android Debug Bridge (ADB) is a tool for controlling Android devices and emulator instances from your host machine. The Android project covers this question in OVERVIEW.txt, including how ADB is really composed of three parts: the ADB server, the ADB daemon (adbd), and the ADB client.

Here is a diagram of what this looks like:

The great benefit of ADB using this client-server model is that it is trivial to run an ADB client on another server, which looks something like this:

And this is exactly what we did to give EC2 instances access to the physical devices plugged into a few servers running in our now almost-vacant server room. Essentially, all it took was forwarding a port over ssh and we were off to the races.


That is all fine and dandy. But there was one problem. When we started testing the rollout of this new architecture, my colleague noticed a blip of red. There it was again… Chromium’s network unit tests (which only ran on select devices) were failing. We scrambled to reproduce the failures on the old physical setup, but could not. After some digging, we found the culprit: ADB supports some interesting features, like forwarding ports between the host running the server and the device. And this is where things get interesting.

With ADB, you can run something like:

# forward host port 10000 to device port 10001
$ adb forward tcp:10000 tcp:10001


# reverse: forward device port 10000 to host port 10001
$ adb reverse tcp:10000 tcp:10001

And it would look something like this:

The problem is, in our case where the ADB client was running on a different server, it looked something like this:

Certain Chromium tests would forward and reverse-forward ports dynamically and shuttle packets to and from the device to test the networking stack and stuff like that. So we had to bridge the gap, so to speak. Unfortunately, we couldn’t just forward or reverse-forward a static list of ports, since the tests used dynamic port ranges which realistically would be very difficult to get changed upstream (and too unwieldy of a patch to maintain ourselves). So instead we decided to build a proxy to sit in between the ADB client and server, which would recognize forward and reverse-forward messages from the client, “bridge” that connection between the remote server and the server it ran on, and pass through the packets to the real ADB server. So this solution looked something like this:
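To make this concrete, here is a minimal sketch (in Python, purely for illustration; this is not our actual proxy code) of the kind of pattern match the proxy performs on client payloads to spot forward and reverse-forward requests. The service-string patterns follow the shapes documented in SERVICES.TXT, and the helper name `detect_forward` is mine:

```python
import re

# (Reverse-)forward service strings look roughly like:
#   host:forward:tcp:10000;tcp:10001
#   host-serial:<serial>:forward:tcp:10000;tcp:10001
#   reverse:forward:tcp:10000;tcp:10001
FORWARD_RE = re.compile(rb"(reverse:)?forward:tcp:(\d+);tcp:(\d+)")

def detect_forward(payload: bytes):
    """Return (is_reverse, local_port, remote_port), or None if the
    payload is not a forward/reverse-forward request."""
    m = FORWARD_RE.search(payload)
    if not m:
        return None
    return (m.group(1) is not None, int(m.group(2)), int(m.group(3)))
```

When the proxy spots such a request, it can set up the corresponding ssh tunnel between the two machines before passing the packet through to the real ADB server.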

That brings me to the meat of this post: the partial reverse engineering we did while attempting to proxy and “sniff” traffic between the ADB client and server to determine when ports needed to be forwarded.

ADB Client ←→ Server Protocol

Alright, in order to build the ADB proxy we first must dive into the ADB client and server protocol (note, we are not concerned with the protocol between the server and the device daemon, which is something else entirely). Let us first take a look at what the documentation has to say about this protocol:

The documentation describes a very rough overview. Let’s start with the client first.


A client sends a request using the following format:

1. A 4-byte hexadecimal string giving the length of the payload
2. Followed by the payload itself.

Simple enough, right? Here is an example packet capture where you see this in action:

NOTE: The hex string '000c' is a base-16 number that converts to 12 in decimal, which is the length of the payload 'host:version'. I explain this simple conversion here.
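This framing is easy to reproduce yourself. A quick Python sketch (the function name is mine, for illustration only):

```python
def encode_request(payload: str) -> bytes:
    """Frame an ADB client request: a 4-character ASCII hex length,
    followed by the payload itself."""
    data = payload.encode("utf-8")
    return b"%04x" % len(data) + data

# 'host:version' is 12 bytes long, and 12 in hex is 0x0c, hence '000c'.
print(encode_request("host:version"))  # b'000chost:version'
```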

In another file, SERVICES.TXT, example client messages are enumerated in more detail:

This all looks good, except you may have noticed this little tidbit:

host:transport:<serial-number> Ask to switch the connection to the device/emulator identified by <serial-number>. After the OKAY response, every client request will be sent directly to the adbd daemon running on the device. (Used to implement the -s option)

It seems the client can “break” the described protocol when it wants to stream packets directly to the adbd running on the device. Noted…


While the documentation doesn’t say much about server messages, it does describe the server’s response to client messages that carry the host: prefix:

The ‘host:’ prefix is used to indicate that the request is addressed to the server itself (we will talk about other kinds of requests later). The content length is encoded in ASCII for easier debugging. The server should answer a request with one of the following:

1. For success, the 4-byte “OKAY” string
2. For failure, the 4-byte “FAIL” string, followed by a 4-byte hex length, followed by a string giving the reason for failure.

Here is a server response in action:

Exciting. However, beyond this type of server ACK’ing, other server responses are left totally open-ended by the documentation.
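A hedged Python sketch of reading such a status reply (illustrative only; note that some services follow OKAY with additional length-prefixed data, which is omitted here, and real code must loop until each read is complete):

```python
import io

def read_reply(read) -> bytes:
    """Read an ADB server status reply from a read(n) callable.

    Returns empty payload on OKAY; raises on FAIL with the server's
    reason string. Sketch only."""
    status = read(4)
    if status == b"OKAY":
        return b""
    if status == b"FAIL":
        length = int(read(4), 16)       # 4-byte ASCII hex length
        raise RuntimeError(read(length).decode())
    raise ValueError("unexpected status: %r" % status)

# Demo with canned replies instead of a live socket:
print(read_reply(io.BytesIO(b"OKAY").read))  # b''
# read_reply(io.BytesIO(b"FAIL0007no such").read) raises RuntimeError("no such")
```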

Open Questions

Welp, that is pretty much the extent of the documentation, and it leaves many gaps. A few notable ones:

  1. Server Packet Format: The server packets don’t have any standard header information like content length, etc. Max payload length is not detailed either.
  2. Client Packet Format Inconsistency: As mentioned earlier, the client can “break” its protocol in scenarios where it wants to stream packets directly to the ADB daemon running on the device (example: adb shell). This means the first 4 bytes of a client request are not always a content length; sometimes they are arbitrary bytes not meant to be processed by the ADB server at all.
  3. Packet Sequence: TCP connections are often left open after the initial client “request” packet and server “response” packet. Depending on the scenario, the server or the client may send follow-up packet(s). The sequence is not made explicit by the protocol; it is implicitly communicated through the message contents being exchanged between the client and server.


In building the proxy, I wanted to write something as close as possible to a barebones TCP proxy, with as little understanding of the ADB protocol it was proxying as I could get away with. So below I describe how I addressed each of the above problems, always choosing the approach that kept the proxy most “agnostic” to application-protocol implementation details.

  1. Server Packet Format: Without knowing information like content length, the best the proxy could do was simply try reading up to the maximum packet length allowed. Though this number could have been gleaned from the ADB source, I went the lazy route. I did some “black box” testing with Wireshark, inspecting traffic between the client and server in scenarios where tons of data was being sent to the client (example: adb logcat), and was able to quickly determine the maximum packet length with a fair degree of confidence.
  2. Client Packet Format Inconsistency: For the scenario where clients “broke” protocol to stream packets directly to the device daemon, rather than making the ADB proxy aware of when this connection change occurred, I simply decided to parse client headers on a “best effort” basis. The proxy would read the client packet (up to the max payload length discovered in step 1) and check whether the header’s content-length bytes matched the actual payload length. If so, it was a “parsable” message meant for the ADB server, and the proxy could then parse it and figure out its request type (enumerated in SERVICES.TXT).
  3. Packet Sequence: Again, rather than making the ADB proxy aware of which client requests can lead to server follow-up packets, or vice versa, I went the simple route. Not knowing whether a client or server packet is expected next made it impractical to service a proxied connection with a single thread; but at the cost of one extra thread per connection (one blocking on the client socket and one blocking on the server socket), this problem was averted as well.
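Putting points 2 and 3 together, here is a rough Python sketch of the best-effort header check and the two-thread pump (the names and the MAX_PAYLOAD value are illustrative assumptions, not taken from our real implementation or from the ADB source):

```python
import socket
import threading

# Assumed maximum packet size, the kind of number you'd glean from
# black-box Wireshark testing (illustrative, not from the ADB source).
MAX_PAYLOAD = 64 * 1024

def parse_client_packet(data: bytes):
    """Best-effort parse of a client packet.

    Returns the payload if the first 4 bytes are an ASCII hex length
    matching the remaining bytes; returns None otherwise (in which case
    the connection is likely streaming raw bytes to the device's adbd)."""
    if len(data) < 4:
        return None
    try:
        length = int(data[:4], 16)
    except ValueError:
        return None
    return data[4:] if len(data) - 4 == length else None

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Blindly shuttle bytes in one direction until EOF. Running one
    pump thread per direction sidesteps not knowing who speaks next."""
    while True:
        chunk = src.recv(MAX_PAYLOAD)
        if not chunk:
            break
        dst.sendall(chunk)

def proxy(client: socket.socket, server: socket.socket) -> None:
    # One thread blocks on the client socket, one on the server socket.
    for a, b in ((client, server), (server, client)):
        threading.Thread(target=pump, args=(a, b), daemon=True).start()
```

A real proxy would additionally run `parse_client_packet` on each client chunk and, when it recognizes a forward or reverse-forward service request, set up the matching ssh tunnel before passing the bytes along.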

After building the ADB proxy, which dynamically forwards ports over ssh when ADB forwarding commands are executed, we were able to fix the failing tests. As a bonus, it also allowed remote machines to fully utilize ADB functionality. While our proxy implementation is not open source, I ended up rewriting the functionality in C (mainly because I needed the practice 😅) and releasing it here:

Check it out and let me know what you think!

Amazon Software Engineer in Robotics & AI
