
1 Million NFTs Isn't Cool

By Alex Saccardo

2019-04-28

This post is about advancements we have made to improve user experience and transaction limits when generating ERC721 tokens. It's a little technical at points, so if you'd rather watch us talk about it, check out the interview below featuring CryptoLark and our CTO, Alex Connolly.

Back in June 2018, we were about to launch the Gods Unchained presale. The model is pretty simple: users send a purchase transaction and in return, they get a certain number of randomly generated Gods Unchained cards. We had awesome artwork, an awesome pack opening experience, and 377 awesome cards. But there was one problem we couldn’t solve.

The Problem

For the three weeks before we launched, the Ethereum network was running at capacity. Miners accept transactions in order of ‘gas price’ — the price you are willing to pay for each ‘gas’ unit of computation and storage on the network. This creates a market where users compete for the right to have their transactions included, and prices gradually rise as demand increases for a fixed supply of blocks. This is the same ‘network congestion’ experienced by CryptoKitties in December 2017.

But this post isn't about scaling Ethereum — although everyone is working on really awesome stuff — this post is about how we are building a scalable dapp right now.

Back to June. Unfortunately, creating NFTs is a pretty expensive process — even the simplest OpenZeppelin implementation of ERC721 (i.e. no enumeration) requires >=30k gas per token (largely due to the need to store individual ownership information). At 1 gwei, this means that users must spend ~$2 on fees for 10 packs (worth around $15).

At 100 gwei, users needed to spend $200 to buy those packs.

For a few weeks, we watched our deadlines edge closer, hoping that the congestion would end and gas prices would dip. But they didn’t. Our core contract was deployed at 80 gwei, and cost more than $1000. As our frustration grew, we realised that we’d always be vulnerable to spikes like this, even if gas prices eventually went down this time. So we decided to find another way.

Activation

The best way to think about this is to narrow your focus to the actual problem. What did we actually want users to be able to do?

  1. User can purchase a certain number/type of packs
  2. Purchased cards/packs can be displayed to the user
  3. User is able to trade the cards

To build the experience we wanted, we needed to batch 1 and 2 into a single time-sensitive block. However, we realised that 3 wasn’t needed at the time of purchase — and it was 3 which was using up most of the gas.

So we came up with what we call activation. The basic premise of activation is that to show users their cards, it’s only necessary to store a few things on the blockchain: the number of packs purchased, the user who purchased them, and a random seed from which everything else can be calculated.

Once this is stored, our backend waits for an event and then uses a call (which doesn't cost gas) to predict the details of each card acquired. As soon as these cards are in our database, we notify the user and let them open their packs.

Whenever a user wants, they are able to call the activate function and turn their predicted cards into actual ERC721s using the information we have already saved on the blockchain.
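As a rough sketch (the contract, struct and function names here are illustrative, not our production code, and the seed derivation in particular is simplified), the purchase/activation split looks something like this:

pragma solidity ^0.8.0;

// Minimal sketch of the purchase/activation split described above. A real
// contract would also handle pricing, pack types and a safer randomness
// commitment scheme.
contract PackSaleSketch {

    struct Purchase {
        address buyer;      // who bought the packs
        uint16 packCount;   // how many packs were bought
        bytes32 randomSeed; // seed from which the card contents are derived
    }

    Purchase[] public purchases;

    // The backend listens for this event, then reads the purchase back with a
    // gas-free call and predicts the cards off-chain.
    event PacksPurchased(uint256 indexed purchaseId, address indexed buyer, uint16 packCount);

    function purchase(uint16 packCount) external payable {
        // Simplified seed: a production contract should commit to future entropy instead.
        bytes32 seed = keccak256(abi.encodePacked(blockhash(block.number - 1), msg.sender, purchases.length));
        purchases.push(Purchase(msg.sender, packCount, seed));
        emit PacksPurchased(purchases.length - 1, msg.sender, packCount);
    }

    // Called whenever the buyer chooses (e.g. when gas is cheap): mints the
    // real ERC721s from the data stored at purchase time.
    function activate(uint256 purchaseId) external {
        Purchase storage p = purchases[purchaseId];
        require(p.buyer == msg.sender, "not your purchase");
        // derive each card from p.randomSeed and mint it as an ERC721 here
    }
}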

Users could now buy and view their cards at any time, and wait for gas prices to drop before activating their cards. What’s more, the transaction cost doesn’t increase linearly with the number of packs. In fact, it doesn’t increase at all - buying 1 pack and 1000 packs uses the same amount of gas. Even at 100 gwei, purchases of any size would now have fees of less than $2.

We call this action segregation — dividing your functionality into action units which need to occur atomically, then letting users choose when they engage in each unit.

This system has another awesome property: users can wait to activate until it would be profitable for them to do so e.g. if the price of their cards skyrocketed. This means that the activation cost just ends up baked into the economy generally.

Using this model, we have sold millions of cards, including 17,505 in one transaction.

But it's not the end of the story.

The Problem: non-granularity

Unfortunately, only being able to activate cards in blocks of 50 is not exactly brilliant in terms of user experience. It's clunky, hard to visualise and harder to explain.

You also lose a lot of the clarity of the economic decision-making process described above. It’s easy to make a decision about spending 50c at 10 gwei to activate a single card; it’s a bit harder when you’re deciding to activate 50 and spend $25, unless one of those cards is a $62,000 Mythic Titan. The blocks also need to be activated in order: if you want the last card in your 2000 card purchase, you need to activate every single other card.

This is frustrating for users and makes them feel as if their ownership of cards is limited or second-class compared to other systems: exactly the opposite of what we wanted.

Activation v2

Activating cards one at a time requires us to keep a registry of which individual cards have been activated: our problem here reduces to finding the cheapest way of doing that. Well, the smallest unit available to us in the EVM is a single bit, which can only represent two values — good enough for us! When a purchase is made, we allocate a uint256 array, where every bit represents whether or not a card has been activated. When a card is activated, we flip the bit which represents that card:
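In sketch form (the activated mapping and helper name here are illustrative, not our exact contract):

// One bit per card; the bitmap for a purchase is sized when the purchase is
// made (number of cards / 256 words, rounded up).
mapping(uint256 => uint256[]) internal activated; // purchaseId => bitmap

function _setActivated(uint256 purchaseId, uint256 cardIndex) internal {
    // cardIndex / 256 selects the word, cardIndex % 256 selects the bit within it
    activated[purchaseId][cardIndex / 256] |= (uint256(1) << (cardIndex % 256));
}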

Now that we can individually track each card, we can check whether a card has been activated:
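Again as a sketch, the check is just a read of the same bit:

function _isActivated(uint256 purchaseId, uint256 cardIndex) internal view returns (bool) {
    // a non-zero result means the card's bit has been flipped
    return (activated[purchaseId][cardIndex / 256] & (uint256(1) << (cardIndex % 256))) != 0;
}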

While this comes at a slight increase in gas per activation, users have been much more receptive to activation now that they can control individual cards.

The Problem: Standardisation

Even with single card activation implemented, we still had one major problem: the cards didn’t conform to the standard NFT interface (ERC721). Unactivated cards were still bound to their owner’s address and could be activated at any time, but they didn’t have individual IDs and didn’t look like tokens to any third party services. Users wouldn’t see their cards appear in their external wallets, external marketplaces or other tools.

So, we decided to build a new system which would actually create full ERC721s at the time of purchase. But this time, we would use our knowledge of action segregation to reduce the per-token gas cost to a more acceptable level.  

Activation v3

The gas cost of creating n tokens can be defined as follows, where a is the constant overhead of the entire issuance operation and b is the amount of gas consumed per token:

cost(n) = a + bn

One of the requirements of ERC721 is that a Transfer event must be emitted whenever a token is created. The gas calculation for a log is as follows:

cost = 375 + (375 * topic_count) + (8 * data_byte_count)

The Transfer event uses 3 topics and logs 72 bytes of data, giving it a total cost of 375 + (375 * 3) + (8 * 72) = 2076 gas. With the current gas limit of ~8m, this means that we should be able to log 3853 of these events, giving us a hard upper bound on the number of tokens we can create.

There are two ‘read’ methods in ERC721: ownerOf and balanceOf. This means that we need to store at least enough data to be able to retrieve the owner of each token and track the number of tokens they own.

The most crucial realisation we made was that token IDs did not have to be consecutive. Tokens could then be claimed in ‘allocations’ — blocks of tokens which were claimed in a single transaction (e.g. creating 1000 tokens). For instance, with an allocation size of 4000, token 3995 would have been produced in the same block as token 0. Each allocation therefore only needs to store the owner and the size of the block once.

The code for this is remarkably simple:
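A sketch of the idea (using the maxAllocationSize and unactivatedBalance names discussed below; everything else is illustrative rather than our exact contract):

uint256 public constant maxAllocationSize = 4000;

struct Allocation {
    address owner; // original purchaser of every token in this block
    uint16 size;   // how many tokens were actually created in this block
}

Allocation[] internal allocations;
mapping(address => uint256) internal unactivatedBalance; // keeps balanceOf accurate

event Transfer(address indexed from, address indexed to, uint256 indexed tokenId); // standard ERC721 Transfer event

function _allocate(address to, uint16 count) internal returns (uint256 firstId) {
    require(count > 0 && count <= maxAllocationSize, "bad allocation size");
    // Token IDs are not consecutive across allocations: each block reserves a
    // full maxAllocationSize range of IDs even if fewer tokens are created.
    firstId = allocations.length * maxAllocationSize;
    allocations.push(Allocation(to, count)); // one storage write per allocation
    unactivatedBalance[to] += count;
    // The per-token cost (b) is essentially just this Transfer log.
    for (uint256 i = 0; i < count; i++) {
        emit Transfer(address(0), to, firstId + i);
    }
}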

The only important variable is maxAllocationSize which determines the size of allocation blocks. There’s no disadvantage to making this really high, except that players often collect tokens with low vanity IDs — but there’s no need to make it more than 4000 with the current gas limit. The unactivatedBalance variable is used to keep balanceOf accurate.

But hang on, what about the write methods? How are we going to track ownership after tokens have been traded? The answer, as always, involves activation. We are still maintaining the action segregation from before: some data gets stored at the time of purchase, and some at the time of first trade.

By not needing to perform the complex storage operations in _addTokenTo, we’re again segregating the act of purchase from the act of token transfer.
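Continuing the illustrative sketch from above: ownerOf falls back to the allocation's owner until a token has been individually transferred, and the first transfer is where the deferred storage work happens.

mapping(uint256 => address) internal individualOwner; // written lazily, on first transfer

function ownerOf(uint256 tokenId) public view returns (address) {
    address owner = individualOwner[tokenId];
    if (owner != address(0)) {
        return owner; // token has been transferred at least once
    }
    Allocation storage alloc = allocations[tokenId / maxAllocationSize];
    require(tokenId % maxAllocationSize < alloc.size, "token does not exist");
    return alloc.owner; // still held by the original purchaser
}

function _transfer(address from, address to, uint256 tokenId) internal {
    require(ownerOf(tokenId) == from, "not the owner");
    if (individualOwner[tokenId] == address(0)) {
        // first trade: the storage we skipped at purchase time happens here
        unactivatedBalance[from] -= 1;
    }
    individualOwner[tokenId] = to;
    // (activated balance bookkeeping omitted for brevity)
    emit Transfer(from, to, tokenId);
}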

This ‘wrapped activation’ has the added benefit of completely hiding activation as a concept — users receive NFTs instantly after the purchase action unit, and just pay fractionally more gas for their first transfer: done.

Our tokens now fully implement the ERC721 spec described above, with our final values for a and b being less than 6k and 2.2k gas respectively. This should give a theoretical upper bound on card creation of around 3700 tokens.

However, a basic implementation of this method yielded 3,892 tokens when testing locally using ganache and an 8m gas limit. A few quick hacks, some loop unrolling and other magic tricks later, we capped out at 3,969 tokens. We’re actually not sure why this performs better than the theoretical upper bound — any ideas?

It’s true that at 100 gwei, those tokens will still cost $20 in fees. But the amount you’re paying in fees will be <20% of the purchase cost as opposed to 1200%, and we think that’s a huge win for our users.

Wrapped activation and allocations have given us all the benefits of a standard NFT implementation, including being completely invisible to users, but with the ability to mint 80x the number of fully-fledged NFTs in one transaction. Coming in Gods Unchained Season One!

The Future

One of the big topics in the NFT space right now is whether existing standards are sufficiently optimised for common use cases: e.g. creating lots of tokens, transferring lots of tokens, transferring a mix of token types. In general, the ‘best’ standard is application-specific; in our case, what matters is how well a standard facilitates the use case we’re optimising for: the number of tokens issued in one transaction. Most new standards include some form of TransferAll event, which necessarily performs better than ERC721 under segregated conditions by limiting the value of b. In a single transaction, it is possible to create 6204 ERC1155 tokens, while a simple TransferAll event can produce 10422.

However, there is a further optimisation to be made, in the form of a TransferRange event:

event TransferRange(address from, address to, uint start, uint size)

Using this event in the segregated ERC721 contract described above means our b value would be 0: giving us effectively unlimited token creation in one transaction for a fixed cost of less than 65,000 gas!
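Applied to the allocation sketch above, the change is simply swapping the per-token Transfer loop for one TransferRange log (again, illustrative rather than a finished implementation):

event TransferRange(address from, address to, uint start, uint size);

function _allocateWithRange(address to, uint16 count) internal returns (uint256 firstId) {
    require(count > 0 && count <= maxAllocationSize, "bad allocation size");
    firstId = allocations.length * maxAllocationSize;
    allocations.push(Allocation(to, count));            // same per-allocation storage as before
    unactivatedBalance[to] += count;
    emit TransferRange(address(0), to, firstId, count); // one log, however many tokens
}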

We'll publish a more extensive range of benchmarks soon as well as some more detailed comparisons with these standards and some suggestions for future directions. However, incumbency is a very large hurdle for these competing token standards to overcome. Even though the addition of TransferRange to an updated version of ERC721 would require only small changes to existing marketplace and wallet backends (relative to other new standard suggestions), the fact that it requires any change at all means that it would require a new ERC, a new community consensus, and updated implementations: we'll see what happens.

A challenge

Use variable sized allocations to reduce the number of 'wasted' token ids. One starting point for this would be using a search tree to give O(log n) stores and lookups such that our 'write cost' equation from above would be:

cost(n) = a + bn + c log(n)

Different implementations could result in different values of c (and would probably increase a as well), but in most implementations it is likely that the dominance of the bn term would result in the maximum number of tokens created per transaction not being seriously impacted, while allowing more flexible allocation sizing if it was strongly desired by developers.

Finally, why does app-level scaling/action segregation matter?

After all, isn’t the whole purpose of scalability to reduce the pressure on devs to scale their own applications? Absolutely — but we’re not there yet and even when we are, we think the action segregation paradigm can help crypto projects recognise that users are often better than project developers at deciding when they want to take certain actions — particularly when those actions concern users’ financial interests.

Moreover, we’re really excited about where this process has taken us. Actually having blockchain IDs for our cards is fantastic and lets us do some really awesome stuff with storing card properties off-chain and letting users commit them back to the chain whenever they choose — we call it ‘inscription’ and you’ll hear all about it soon!

Join Us!

If you want to work on really interesting problems (like this!), we’d love to hear from you! Check out our current listings here.
