
Custom App Licensing Security: What We Built When HTTPS Wasn't Enough


Note: This post was originally published on QBurst Blog. I'm sharing it here to keep a personal record of my work and make it easier for others to find everything I've written in one place. Edited by Raji Raman, and thumbnail illustration by Rajesh Kumar.

Our client's kiosk app had a simple licensing requirement: each customer gets a license key, each key activates one device, and that device stays licensed for a certain time period.

Sounds straightforward. But there were some hard constraints:

- The app had to keep working offline after the initial activation, with no periodic server checks.
- The APK was sideloaded rather than distributed through Google Play, so Play Integrity APIs were unavailable.
- Neither the device clock nor anything the user could intercept on the wire could be trusted.

Standard HTTPS-based API calls don't solve this. We needed protections that would operate above the transport layer.

Here’s how we approached it.

Solving Offline Licensing

Most licensing systems rely on periodic server checks. But this app had to work offline after the initial activation, which meant the app itself had to track license validity.

A common solution for this kind of problem is to have the server return the license expiry date during activation. The app then stores it securely and checks it during startup or at regular intervals.

But what if the user sets the system clock back? The license would appear valid again, and the user could repeat the trick indefinitely to bypass the expiry check.

Blocking Date Tampering

Say a device is activated on January 1, 2025, with a license valid for 365 days. By January 1, 2026, the app should stop working. But if the user manually sets the device clock back to January 2025, the app would consider the license still valid for another 365 days.

To prevent this, the app must enforce forward-only time progression. Since the app requires continuous user input (details redacted), we used that interaction to validate time locally.

On activation, the app securely stores the current time as LAST_USED.

Every time the user interacts with the app, it compares the current time to LAST_USED. If the time has gone back, it blocks usage. If it’s forward, it updates LAST_USED.
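The two steps above can be sketched in Kotlin. This is a minimal illustration, not the production code: the `store` map stands in for whatever secure persistence the real app uses, and the class and key names are invented here.

```kotlin
// Sketch of forward-only time enforcement. `store` stands in for the app's
// secure persistence layer (e.g. encrypted preferences on Android).
class LicenseClock(private val store: MutableMap<String, Long>) {

    // Called once on activation: record the current time as LAST_USED.
    fun onActivated(nowMillis: Long) {
        store["LAST_USED"] = nowMillis
    }

    // Called on every user interaction. Returns false (block usage) if the
    // clock appears to have moved backwards; otherwise advances LAST_USED.
    fun onInteraction(nowMillis: Long): Boolean {
        val lastUsed = store["LAST_USED"] ?: return false
        if (nowMillis < lastUsed) return false // clock was set back
        store["LAST_USED"] = nowMillis
        return true
    }
}
```

A real implementation would likely also allow a small grace window so that legitimate clock corrections (NTP adjustments, daylight-saving shifts) don't lock users out.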

Preventing Replay Attacks

We made backdating ineffective, but we had another issue to address: replay attacks.

A user could activate one device, intercept the API response using a tool like Charles Proxy, use the response to activate other devices, and completely undermine our client's business model.

To stop this, every activation request must be made unique. A standard approach is to include a nonce, a one-time random value. The server rejects any request that reuses a nonce, and it returns a response tied to that specific nonce, so replaying the response on another device won't work.
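As a sketch, the client-side nonce generation and the server-side replay check might look like this in Kotlin (the names are illustrative; a real server would persist seen nonces with an expiry rather than hold them in memory):

```kotlin
import java.security.SecureRandom
import java.util.Base64

// Client side: generate a cryptographically random nonce for each
// activation request.
fun newNonce(byteCount: Int = 16): String {
    val buf = ByteArray(byteCount)
    SecureRandom().nextBytes(buf)
    return Base64.getUrlEncoder().withoutPadding().encodeToString(buf)
}

// Server side: reject any nonce that has been seen before.
class NonceRegistry {
    private val seen = HashSet<String>()

    // Returns false if the nonce was already used (replay attempt).
    @Synchronized
    fun accept(nonce: String): Boolean = seen.add(nonce)
}
```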

But this only works if the app can trust that the response really came from our server.

Verifying the Authenticity of API Responses

Server responses are usually trusted through certificate pinning: the app embeds the server's certificate or public key and accepts connections only from a server holding the matching private key.

But in our case, pinning wasn’t enough.

The app wasn’t going to be distributed through Google Play, so we couldn’t verify the installation using Play Integrity APIs. A user could modify the APK, replace the pinned certificate, and resort to replay attacks. Even standard certificate rotations (using Let’s Encrypt, for instance) would require frequent app updates, which wasn’t feasible.

To harden security, we added response-level signature verification, in case HTTPS or certificate pinning fails. To enable this, we created our own public-private key pair. The server signs each activation response with its private key, and the app verifies it using the corresponding public key.

That brought us back to the now-familiar problem: How to embed the public key?

Embedding the Public Key

We can't prevent a determined hacker from extracting the public key from the app, but we can make it harder.

Our idea was simple: make the key hard to spot in decompiled code.

To do that, we built a custom algorithm that scrambles the key, spreads it across the codebase, and reassembles it only when needed.

We used a mix of Kotlin and C++ (via the NDK) to make the logic harder to follow in decompiled code. The scrambling itself happens outside the shipped code, and the resulting fragments are placed across the codebase where they're needed.

Here’s a simple implementation in Kotlin. (The real implementation is more complex and, for obvious reasons, not shared.)

// Scramble a string: split it into fixed-size chunks and reverse their order
fun scramble(input: String, chunkSize: Int): List<String> {
    return input.chunked(chunkSize).reversed()
}

// Reassemble the original string: undo the reversal and rejoin the chunks
fun assemble(scrambledChunks: List<String>): String {
    return scrambledChunks.reversed().joinToString("")
}
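A quick round trip shows the two functions invert each other. The definitions are repeated here only so the example runs standalone, and the key string is a placeholder:

```kotlin
// Repeated from above so this example is self-contained
fun scramble(input: String, chunkSize: Int): List<String> =
    input.chunked(chunkSize).reversed()

fun assemble(scrambledChunks: List<String>): String =
    scrambledChunks.reversed().joinToString("")

fun main() {
    val chunks = scramble("MY_PUBLIC_KEY", 4)
    println(chunks)           // [Y, C_KE, UBLI, MY_P]
    println(assemble(chunks)) // MY_PUBLIC_KEY
}
```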

Choosing a Signature Algorithm

Note: The following section outlines some general strategies. The actual implementation is not disclosed for security reasons.

With a method in place to embed the public key (ideally in an obfuscated form), the next step is to select a digital signature algorithm that’s secure and compatible with the tech stack.

A common choice for this kind of setup is ECDSA with SHA-256, which is used in security protocols like SSL/TLS.

Here's how you can generate a key pair (and a self-signed certificate):

# Generate an ECDSA private key with "prime192v1" curve (a stronger curve is recommended for production)
openssl ecparam -name prime192v1 -genkey -out private_key.pem

# Extract the public key from the generated private key
openssl ec -in private_key.pem -pubout -out public_key.pem

# Generate a self-signed certificate with a 1-year validity period
openssl req -x509 -new -nodes \
    -key private_key.pem \
    -sha256 \
    -days 365 \
    -out self_signed_certificate.pem \
    -subj "/C=XY/ST=State/L=Location/O=OrgName/OU=OrgUnit/CN=example.com"

The private key becomes a part of the backend application. The corresponding public key is embedded in the app, either directly or wrapped in a certificate using the scrambling method mentioned earlier.

Now the backend has to sign the response. What data to sign is up to the developer: it can be a string built from fields in a fixed order, or the whole JSON response. If you're signing ordered data, the app must know the exact format so it can reconstruct the string and verify the signature.
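For instance, if both sides agree on a fixed field order, they can rebuild the exact same string byte for byte. The field names and separator here are purely illustrative:

```kotlin
// Hypothetical field order agreed between server and app. Both sides must
// build this string identically, or signature verification will fail.
fun canonicalString(licenseKey: String, deviceId: String, expiry: String, nonce: String): String =
    listOf(licenseKey, deviceId, expiry, nonce).joinToString("|")
```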

Here’s an example in JavaScript that signs a string (for illustration purposes only):

const crypto = require('crypto');
const fs = require('fs');

function signString(dataString, privateKeyFilePath) {
    const privateKey = fs.readFileSync(privateKeyFilePath, 'utf8'); // Read private key from the PEM file

    const sign = crypto.createSign('sha256'); // Use SHA-256 for the message digest
    sign.update(dataString);
    sign.end();
    return sign.sign(privateKey, 'base64'); // Base64-encoded signature
}

The signature is included in the API response, either in an HTTP header or embedded in the signed JSON body. The Android application can verify this signature using a standard security library.

Here’s a simple example in Kotlin:

import java.security.Signature
import android.util.Base64

// publicKey: the embedded public key, re-assembled at runtime from the
// scrambled certificate or PEM
val sign = Signature.getInstance("SHA256withECDSA")
sign.initVerify(publicKey)
sign.update(dataToVerify)
val isValid = sign.verify(Base64.decode(signature, Base64.DEFAULT)) // true if verification succeeds

If the signature is valid, the app can trust that the response came from the real backend.
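The whole round trip can be exercised off-device with the JDK's `java.security` APIs. In this sketch, in-memory key generation stands in for the OpenSSL step, and the payload is invented for illustration:

```kotlin
import java.security.KeyPairGenerator
import java.security.Signature
import java.security.spec.ECGenParameterSpec

fun main() {
    // Generate an ECDSA key pair on the P-256 curve (stands in for the
    // OpenSSL-generated pair from the earlier step)
    val keyGen = KeyPairGenerator.getInstance("EC")
    keyGen.initialize(ECGenParameterSpec("secp256r1"))
    val keyPair = keyGen.generateKeyPair()

    val payload = """{"licenseKey":"ABC-123","expiry":"2026-01-01"}""".toByteArray()

    // "Server": sign the response payload with the private key
    val signer = Signature.getInstance("SHA256withECDSA")
    signer.initSign(keyPair.private)
    signer.update(payload)
    val signature = signer.sign()

    // "App": verify the payload with the public key
    val verifier = Signature.getInstance("SHA256withECDSA")
    verifier.initVerify(keyPair.public)
    verifier.update(payload)
    println(verifier.verify(signature)) // true

    // A tampered payload fails verification
    verifier.initVerify(keyPair.public)
    verifier.update("""{"licenseKey":"ABC-123","expiry":"2099-01-01"}""".toByteArray())
    println(verifier.verify(signature)) // false
}
```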

Further Hardening

No system is completely secure, but there are ways to make attacks much harder. Here are some ways to strengthen the system further:

Potential Compromises

When Our Extra Security Layer Paid Off

Right before deployment, our client hit a snag. They didn't have a domain name for the licensing server. With purchase orders pending, they had no choice but to launch using the server’s IP address.

We strongly advised against it, but agreed to temporarily allow cleartext traffic by setting android:usesCleartextTraffic="true" in the app's manifest.

In any other setup, this would have exposed everything. But the response-signing layer we built held up. Any tampering would break the signature check, and replay attempts would be blocked by the existing nonce validation.

It wasn’t how we planned to use the system, but it worked exactly as it needed to in the circumstances.

Closing Thoughts

This project is a reminder that real-world constraints shape real-world security. What worked for us was layering the solutions: server-signed responses, local time enforcement, nonce validation, and key obfuscation. Together, they formed a system that held up for our client.