The New Decentralized The Graph Network

What the new features are and how to use them

Quite some time has passed since my last post about The Graph. If you don't know what it is and why it's useful, go and read the post. It's still relevant and explains in detail why it's needed and how to use it with the centralized hosted service. 

But the tl;dr is: events on a blockchain are a very efficient way to record data without having to keep it in expensive contract storage. Thanks to bloom filters, a client can parse blocks and transactions to quickly find the data it is looking for. But this still requires scanning through a lot of blocks, so one alternative is having a server index this data and store it in a database. Put GraphQL on top as a very convenient query language and you have The Graph, but as a centralized service.
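To make the "scan everything" approach concrete, here is a minimal sketch using ethers.js (v5). The RPC URL is a placeholder, and the BetPlaced event signature is assumed from the Game contract used later in this post:

import { ethers } from "ethers";

// Any JSON-RPC endpoint works here; the URL is a placeholder.
const provider = new ethers.providers.JsonRpcProvider("https://rpc.example.com");

async function findBetPlacedEvents(gameAddress: string): Promise<void> {
  // eth_getLogs lets the node use the per-block bloom filters to skip blocks
  // that cannot contain our event, but we still sweep over a large block range.
  const logs = await provider.getLogs({
    address: gameAddress,
    topics: [ethers.utils.id("BetPlaced(address,uint256,bool)")],
    fromBlock: 0,
    toBlock: "latest",
  });
  console.log(`Found ${logs.length} BetPlaced logs`);
}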

In conclusion, The Graph allows querying data from a blockchain in a much more efficient way. This is extremely important when building a front-end that shows what happened on the blockchain without having to store the data directly in a smart contract.
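For comparison, here is roughly what querying an indexed subgraph looks like. The endpoint URL is a placeholder for a hosted-service subgraph, and the bets entity and its fields are assumed from the Bet example used later in this post:

// The endpoint URL is a placeholder; use your own subgraph's query URL.
const SUBGRAPH_URL =
  "https://api.thegraph.com/subgraphs/name/<github-user>/<subgraph-name>";

async function latestBets(): Promise<void> {
  const response = await fetch(SUBGRAPH_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: `{
        bets(first: 5, orderBy: time, orderDirection: desc) {
          id
          player
          playerHasWon
          time
        }
      }`,
    }),
  });
  const { data } = await response.json();
  console.log(data.bets);
}

One HTTP request replaces scanning blocks yourself; the indexer has already done that work.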

Since then The Graph has started a new decentralized network and also added more features. The hosted service will end in Q1 of 2023, so now it's time to learn how the decentralized network works, how to use it and what new features there are for you as a developer.

The Transition Towards Decentralization

Originally The Graph only had a centralized hosted service, but that of course is not what we want in the long term. After all, what's the point of a Dapp if you fully rely on a centralized server for querying data? Admittedly, that is still better than a fully centralized infrastructure, but we can do better!

To get around the issue, The Graph has its own decentralized network with its own GRT ERC-20 token.

1. Protocol Roles

  • Consumers: Consumers are the ones sending queries to indexers and paying for this service. They could be end users directly, e.g. in the context of using a Dapp, or middleware services.

  • Indexers: Indexers are the ones actually providing the service of running servers, indexing events and storing them in a database. As a reward they receive query fees from the consumers.

  • Curators: Curators identify subgraphs that are worth indexing and signal on them by putting up their own GRT. They earn a share of the query fees based on a bonding curve. These might be the developers of a subgraph who want to get it indexed.

  • Delegators: Delegators can stake GRT with indexers to earn a share of their query fees.

Consumers can freely choose which indexer to use based on things like up-time and price. And of course a Dapp could even use multiple indexers for the highest security.

2. GRT Token

The GRT token itself is used for staking:

  • Indexers staking GRT serves the purpose of Sybil resistance and also makes it possible to slash them for bad behavior. Indexers are expected to stake a proportion of the total staked GRT equal to the proportion of work they perform for the network. This is not directly enforced, but rather comes from the design of collecting query fees and rebating them to participants as a function of their proportional stake and fees, inspired by the 0x protocol's design (see the rough sketch after this list).
  • Delegators loan their GRT to indexers and receive a share of the query fees in return. They cannot get their GRT slashed due to indexer misbehavior, and there is a limit on how much GRT each indexer can receive through delegation.
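The exact on-chain rebate math has its own parameters, but the intuition is that rewards grow with both the fees an indexer collected and the stake it allocated. This can be sketched roughly as follows; the Cobb-Douglas style split and the alpha parameter below are an illustration, not The Graph's exact implementation:

interface IndexerContribution {
  stake: number; // GRT the indexer has allocated
  fees: number;  // query fees the indexer collected this period
}

// Rough sketch of a Cobb-Douglas style rebate split: an indexer's share of the
// fee pool depends on both its share of stake and its share of collected fees.
// Illustrative only, not The Graph's exact on-chain formula.
function rebates(indexers: IndexerContribution[], alpha: number = 0.5): number[] {
  const totalStake = indexers.reduce((sum, i) => sum + i.stake, 0);
  const totalFees = indexers.reduce((sum, i) => sum + i.fees, 0);
  return indexers.map(
    (i) =>
      totalFees *
      Math.pow(i.stake / totalStake, alpha) *
      Math.pow(i.fees / totalFees, 1 - alpha)
  );
}

An indexer that collects many fees but stakes almost nothing (or vice versa) gets a smaller rebate than one whose stake matches its workload, which is exactly the incentive described above.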

3. Bootstrapping New Subgraphs

To help bootstrap new subgraphs that don't have query demand yet, GRT staking is inflationary and newly minted tokens are given to indexers that index subgraphs with very low query demand. To make sure indexers are actually doing the work of indexing these low-demand subgraphs, there is an extra mechanism called Proof of Indexing (POI). A POI is a signature over the subgraph state hash and is required to receive indexer rewards. POIs are accepted optimistically, but can later be used to slash an indexer if they turn out to be incorrect. In the first version of the network, an arbitrator set through governance decides these disputes.

Subgraph configurations (called manifests) are typically uploaded to IPFS. But what happens when a manifest is simply not available? Then it would be impossible to verify the POIs. For this, there is a Subgraph Availability Oracle: it checks several prominent IPFS endpoints, and if a manifest is unavailable, the subgraph becomes ineligible for indexer rewards.

4. Payment Channels

Payment channels are a great way to scale payments, and since in The Graph you pay for every single query, we obviously need something like this. In The Graph there is an extra layer of security for payment channels: WAVE.

WAVE:

  1. Work. Locked micropayment with a description of the work to be performed.

  2. Attestation. Work + signed attestation which unlocks the micropayment optimistically.

  3. Verification. Verify attestation out-of-channel which may lead to penalties.

  4. Expiration. Locked micropayments can expire.
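To make the lifecycle concrete, here is a rough sketch of a WAVE-locked micropayment as a small state machine. The types and field names are invented for illustration and are not the actual protocol messages:

type MicropaymentState = "locked" | "unlocked" | "penalized" | "expired";

// Illustrative shape of a locked micropayment in a WAVE-style channel.
// Field names are invented for this sketch, not the real protocol types.
interface LockedMicropayment {
  amount: bigint;             // GRT owed if the work checks out
  workDescription: string;    // W: which query/work the payment is for
  attestation: string | null; // A: indexer's signature over the response
  expiresAt: number;          // E: timestamp after which the lock expires
  state: MicropaymentState;
}

function onAttestation(payment: LockedMicropayment, signature: string): void {
  // A: a signed attestation optimistically unlocks the micropayment.
  payment.attestation = signature;
  payment.state = "unlocked";
}

function onFailedVerification(payment: LockedMicropayment): void {
  // V: out-of-channel verification found the attested response incorrect,
  // which may lead to penalties.
  payment.state = "penalized";
}

function onTimeout(payment: LockedMicropayment, now: number): void {
  // E: a payment that was never attested to simply expires.
  if (payment.state === "locked" && now > payment.expiresAt) {
    payment.state = "expired";
  }
}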

Query Verification

Now the question is: how can you verify The Graph queries for correctness? Initially, this is handled through an on-chain dispute resolution process decided by arbitration. Fishermen look for incorrect query responses and submit the corresponding attestation to arbitration along with a bond.

In the future, rather than relying on arbitration to settle query disputes, the validity of queries could be guaranteed with cryptographic proofs based on techniques such as polynomial commitments or Merkle trees. Similarly, even POIs could be verified automatically with an optimistic mechanism much like Optimistic Rollups. Combined, the two would completely remove the need for the more centralized arbitration role currently in the network.
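As a flavor of what a Merkle-tree based check could look like, here is a minimal proof verification sketch. It is purely illustrative, not The Graph's actual verification scheme: the idea is that an indexer commits to a Merkle root of its data, returns a query result together with sibling hashes, and the client recomputes the root.

import { createHash } from "crypto";

function sha256(data: Buffer): Buffer {
  return createHash("sha256").update(data).digest();
}

// Recompute the Merkle root from a leaf and its sibling hashes; if it matches
// the committed root, the returned data is part of the committed state.
// Purely illustrative, not The Graph's actual scheme.
function verifyMerkleProof(
  leaf: Buffer,
  proof: { sibling: Buffer; currentOnLeft: boolean }[],
  root: Buffer
): boolean {
  let hash = sha256(leaf);
  for (const step of proof) {
    hash = step.currentOnLeft
      ? sha256(Buffer.concat([hash, step.sibling]))
      : sha256(Buffer.concat([step.sibling, hash]));
  }
  return hash.equals(root);
}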

How to deploy to The Decentralized Graph

The Subgraph Studio is the first step towards The Decentralized Graph, but it still contains centralized components. In its current design, payments are handled by The Graph team directly and you create an API key to query data. Later, once verifiable queries are implemented, users will pay directly.

If you have a subgraph already on the hosted service, take a look at the migration guide.

  1. Go to https://thegraph.com/studio/ and click create subgraph.
  2. $ npm install -g @graphprotocol/graph-cli
  3. Inside an existing project deploy a contract, e.g. $ npx hardhat run scripts/deploy.ts --network rinkeby
  4. $ graph init --studio <created-subgraph-name>
  5. If the contract is not verified on Etherscan, you can also pass the ABI file directly, e.g. artifacts/contracts/Game.sol/Game.json
  6. Write the correct schema.graphql and the mapping in src/mapping.ts.
  7. $ graph auth --studio <deploy-key> (copy the deploy key from the Subgraph Studio UI)
  8. $ graph codegen && graph build
  9. $ graph deploy --studio <my-subgraph-id>


Now you can click 'Publish' in the Subgraph Studio. To get Rinkeby testnet GRT, head over to The Graph Discord #roles, click 'T', then go to #testnet-faucet and type !grt <my-eth-address>. You can immediately start curating the subgraph. Note that at the time of writing, actual queries via the decentralized network are not yet supported on Rinkeby, but they are on mainnet.
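Once queries on the decentralized network are supported for your target network, they go through the gateway with the API key from the Subgraph Studio. A minimal sketch, where the URL pattern and IDs are placeholders (copy the exact query URL from your subgraph's page in the Studio):

// Placeholders: take the real query URL and API key from the Subgraph Studio.
const GATEWAY_URL =
  "https://gateway.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>";

async function queryViaGateway(): Promise<void> {
  const response = await fetch(GATEWAY_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: "{ bets(first: 5) { id player playerHasWon time } }",
    }),
  });
  console.log(await response.json());
}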

New Features for Developers

  • New AssemblyScript version: The Graph has updated the AssemblyScript version used for writing the mappings. If you are used to older versions, you can follow the migration guide.
  • Debug Forking: A new feature called debug forking lets you fork a deployed subgraph and debug its mappings locally from a specific block. So if a newly deployed subgraph fails, you can debug it from that block rather than re-syncing from scratch, which takes a lot of time.
  • New Blockchains: While not yet available on the decentralized network, The Graph has added indexing support for completely new blockchains, most notably Cosmos, NEAR and Arweave.
  • Subgraph Mapping Unit-testing: A new unit-testing feature called Matchstick allows testing subgraph mappings; see below for details.

Unit-testing Subgraph Mappings

To get started with unit-testing in an existing Subgraph:

  1. Install Matchstick:
    1. $ yarn add --dev matchstick-as
  2. Install PostgreSQL:
    1. $ brew install postgresql (Mac) or
    2. $ sudo apt install postgresql (Linux)
    3. or for Windows see the docs
  3. Create a test file inside the /tests folder.
  4. Run the tests: $ graph test

Below you can see an example unit-test for our Bet mapping from our previous The Graph tutorial.

  1. First create a mocked event using newMockEvent.
  2. Then modify the mocked event however you like.
  3. Then call your handle event function from the mapping.
  4. Assert that the new state is as expected.
  5. Clean up the store afterwards.

import {
  assert,
  describe,
  clearStore,
  test,
  newMockEvent,
} from "matchstick-as/assembly/index";

import { BetPlaced } from "../generated/Game/Game";
import { Address, BigInt, ethereum } from "@graphprotocol/graph-ts";
import { handleBetPlaced } from "../src/mapping";

function createBetPlacedEvent(
  player: string,
  value: BigInt,
  hasWon: boolean
): BetPlaced {
  const mockEvent = newMockEvent();
  const BetPlacedEvent = new BetPlaced(
    mockEvent.address,
    mockEvent.logIndex,
    mockEvent.transactionLogIndex,
    mockEvent.logType,
    mockEvent.block,
    mockEvent.transaction,
    mockEvent.parameters,
    null
  );
  BetPlacedEvent.parameters = new Array();
  const playerParam = new ethereum.EventParam(
    "player",
    ethereum.Value.fromAddress(Address.fromString(player))
  );
  const valueParam = new ethereum.EventParam(
    "value",
    ethereum.Value.fromUnsignedBigInt(value)
  );
  const hasWonParam = new ethereum.EventParam(
    "hasWon",
    ethereum.Value.fromBoolean(hasWon)
  );

  BetPlacedEvent.parameters.push(playerParam);
  BetPlacedEvent.parameters.push(valueParam);
  BetPlacedEvent.parameters.push(hasWonParam);

  BetPlacedEvent.transaction.from = Address.fromString(player);

  return BetPlacedEvent;
}

describe("handleBetPlaced()", () => {
  test("Should create a new Bet entity", () => {
    const player = "0x7c812f921954680af410d86ab3856f8d6565fc69";
    const hasWon = true;
    const mockedBetPlacedEvent = createBetPlacedEvent(
      player,
      BigInt.fromI32(100),
      hasWon
    );

    handleBetPlaced(mockedBetPlacedEvent);

    const betId =
      mockedBetPlacedEvent.transaction.hash.toHex() +
      "-" +
      mockedBetPlacedEvent.logIndex.toString();

    // fieldEquals(entityType: string, id: string, fieldName: string, expectedVal: string)
    assert.fieldEquals("Bet", betId, "id", betId);
    assert.fieldEquals("Bet", betId, "player", player);
    assert.fieldEquals("Bet", betId, "playerHasWon", "true");
    assert.fieldEquals(
      "Bet",
      betId,
      "time",
      mockedBetPlacedEvent.block.timestamp.toString()
    );
    clearStore();
  });
});

You can find the full example at https://github.com/soliditylabs/the-graph-studio-example.


What do you think of the plans of The Graph? Have you used the hosted service or even the decentralized network already?


Markus Waas

Solidity Developer
