Sudograph

Sudograph is a GraphQL database for the Internet Computer (IC).

Its goal is to become the simplest way to develop applications for the IC. Developers start by defining a GraphQL schema using the GraphQL SDL. Once the schema is defined, it can be included within a canister and deployed to the IC. An entire relational database is generated from the schema, with GraphQL queries and mutations enabling a variety of CRUD operations, including advanced querying over relational data.

Sudograph should be considered somewhere between alpha and beta software.

Examples

Multiple examples are located in the examples directory in the Sudograph repository.

Here's a list of possibly useful examples:

Quickest of quick starts (new project)

This section is designed to get you going completely from scratch. It assumes you want to have a frontend, a GraphQL playground, and the graphql canister. If you instead wish to integrate Sudograph into an existing project, see the Existing project section.

If you've already got Node.js, npm, Rust, the wasm32-unknown-unknown Rust compilation target, and dfx 0.7.2 installed then just run the following commands:

mkdir my-new-project
cd my-new-project
npx sudograph
dfx start --background
dfx deploy

Once deployed, you can visit the following canisters from a Chromium browser:

If the above did not work, try the full installation steps in the actual quick start.

More information is available for local deployment and IC deployment.

Quick start (new project)

This section is designed to get you going completely from scratch. It assumes you want to have a frontend, a GraphQL playground, and the graphql canister. If you instead wish to integrate Sudograph into an existing project, see the Existing project section.

Prerequisites

You should have the following installed on your system:

  • Node.js
  • npm
  • Rust
  • wasm32-unknown-unknown Rust compilation target
  • dfx 0.7.2

If you already have the above installed, you can skip to Sudograph generate.

Run the following commands to install Node.js and npm. nvm is highly recommended and its use is shown below:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.38.0/install.sh | bash

# restart your terminal

nvm install 14

Run the following command to install Rust and the wasm32-unknown-unknown target:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

rustup target add wasm32-unknown-unknown

Run the following command to install dfx 0.7.2:

# Sudograph has been tested against version 0.7.2, so it is safest to install that specific version for now
DFX_VERSION=0.7.2 sh -ci "$(curl -fsSL https://sdk.dfinity.org/install.sh)"

Sudograph generate

Start by making a new directory for your project. You then simply run the sudograph generate command:

mkdir my-new-project

cd my-new-project

npx sudograph

Deployment

Use the following links for more information about local deployment and IC deployment.

Existing project

The quickest of quick starts and quick start are both designed to get you started with an entire example project from scratch. If instead you wish to integrate Sudograph into an existing project, this section will help you to achieve that.

Basically you need to add a new Rust canister to your project and import and call the graphql_database procedural macro. If you're new to developing for the Internet Computer, you might want to check the documentation to get familiar with canister development. The detailed steps are listed out below, but looking at examples might also help a lot.

Make sure you at least have Rust, the wasm32-unknown-unknown Rust compilation target, and dfx 0.7.2 installed on your system. If you need help setting all of that up, look at the prerequisites section of the quick start.

There are a few basic steps to integrate Sudograph into an existing project:

  • Edit dfx.json in root directory
  • Add Cargo.toml to root directory
  • Create graphql canister crate
  • Create GraphQL schema
  • Import and call the graphql_database procedural macro
  • Create candid file
  • Deploy

Edit dfx.json in root directory

Add a new canister to your dfx.json in the root directory of your project. You can name the canister whatever you'd like, but to keep things simple we'll call it graphql. If you have other canisters already defined, just add the graphql canister alongside them. The canister defined below assumes a directory structure with a directory called canisters containing each canister. You can change the directory structure if you'd like; just update all of the paths appropriately:

{
    "canisters": {
        "graphql": {
            "type": "custom",
            "build": "cargo build --target wasm32-unknown-unknown --package graphql --release",
            "candid": "canisters/graphql/src/graphql.did",
            "wasm": "target/wasm32-unknown-unknown/release/graphql.wasm"
        }
    }
}

Add Cargo.toml to root directory

In the root directory of your project create a Cargo.toml file with the following contents:

[workspace]
members = [
    "canisters/graphql",
]

[profile.release]
lto = true
opt-level = 'z'

Again, this assumes your project has a canisters directory where the graphql canister will be defined. You can change the directory structure if you wish; just make sure to update this Cargo.toml file.

Create graphql canister crate

Create a new directory within canisters called graphql, and add a Cargo.toml file. It should look like the following:

[package]
name = "graphql"
version = "0.0.0"
edition = "2018"

[lib]
path = "src/graphql.rs"
crate-type = ["cdylib"]

[dependencies]
sudograph = "0.3.0"
ic-cdk = "0.3.0" # TODO this will go away once https://github.com/dfinity/candid/pull/249 is released

Within the canisters/graphql directory, now create a src directory. The canisters/graphql/src directory will contain your GraphQL schema, the Rust entrypoint to your graphql canister, and your candid file.

Create GraphQL schema

Within the canisters/graphql/src directory, create your schema.graphql file. The following is just an example:

type User {
    id: ID!
    username: String!
    blogPosts: [BlogPost!]! @relation(name: "User:blogPosts::BlogPost:author")
}

type BlogPost {
    id: ID!
    publishedAt: Date
    title: String!
    author: User! @relation(name: "User:blogPosts::BlogPost:author")
}

Import and call the graphql_database procedural macro

Within the canisters/graphql/src directory, create your graphql.rs file. The file should look like this:


use sudograph::graphql_database;

graphql_database!("canisters/graphql/src/schema.graphql");

This simply imports the graphql_database procedural macro from sudograph and then invokes it with the path to your schema.graphql file. This is where the magic happens: the database, along with its CRUD queries and mutations, is generated.

Create candid file

Within the canisters/graphql/src directory, create your graphql.did file. The file should look like this:

service : {
    "graphql_query": (text, text) -> (text) query;
    "graphql_mutation": (text, text) -> (text);
}

The generated canister code will have created the two functions defined in graphql.did, but for now you'll need to create the candid file manually. Hopefully in the future it can be generated for you or abstracted away somehow.

graphql_query and graphql_mutation both take two parameters. The first parameter is the query or mutation string. The second parameter is a JSON string containing any variables for the query or mutation. Currently the second parameter is required, so just send an empty JSON object string "{}" if no variables are required for the query or mutation.

graphql_query and graphql_mutation both return the result of the query or mutation as a JSON string. Whatever client is consuming the query or mutation will then need to parse the JSON string to turn it into a language-level object. The Sudograph Client will do this for you in a JavaScript frontend.
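Putting those two pieces together, here is a minimal sketch of the round trip. The `actor` below is a hypothetical stand-in for a deployed graphql canister (a mock, not part of Sudograph); it only exists to show the serialization on the way in and the parsing on the way out:

```javascript
// Hypothetical helper: serialize variables to the required JSON string,
// call graphql_query, and parse the returned JSON string back into an object.
async function executeQuery(actor, queryString, variables = {}) {
    const resultJSON = await actor.graphql_query(queryString, JSON.stringify(variables));
    return JSON.parse(resultJSON);
}

// Mock actor standing in for a deployed graphql canister:
const mockActor = {
    graphql_query: async (queryString, variablesJSON) => {
        // A real canister would execute the query; here we echo a canned result.
        return JSON.stringify({ data: { readUser: [{ id: '0' }] } });
    }
};

executeQuery(mockActor, 'query { readUser { id } }').then((result) => {
    console.log(result.data.readUser[0].id); // '0'
});
```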

Deploy

Use the following links for more information about local deployment and IC deployment.

Local deployment

Start up an IC replica and deploy:

# Open a terminal and run the following command to start a local IC replica
dfx start

# Alternatively to the above command, you can run the replica in the background
dfx start --background

# If you are running the replica in the background, you can run this command within the same terminal as the dfx start --background command
# If you are not running the replica in the background, then open another terminal and run this command from the root directory of your project
dfx deploy

Make sure to run dfx deploy for your first deploy. For quicker deployments after the first, you can run dfx deploy graphql if you've only changed your schema or the Rust code within the graphql canister. dfx deploy graphql will only deploy the graphql canister, which contains the generated database.

playground canister

Start executing GraphQL queries and mutations against your database by going to the following URL in a Chromium browser: http://r7inp-6aaaa-aaaaa-aaabq-cai.localhost:8000.

frontend canister

View a simple frontend application that communicates with the graphql canister by going to the following URL in a Chromium browser: http://rrkah-fqaaa-aaaaa-aaaaq-cai.localhost:8000.

command line

You can execute queries against the graphql canister from the command line if you wish:

# send a query to the graphql canister
dfx canister call graphql graphql_query '("query { readUser { id } }", "{}")'

# send a mutation to the graphql canister
dfx canister call graphql graphql_mutation '("mutation { createUser(input: { username: \"lastmjs\" }) { id } }", "{}")'

Sudograph Client

Still needs to be documented.

Rust canister

If you want to call into the graphql canister from another Rust canister:


// TODO fill out this example, show queries and mutations

Motoko canister

If you want to call into the graphql canister from a Motoko canister:

// TODO fill out this example, show queries and mutations

Wasm binary optimization

If the replica rejects deployment of your canister because the payload is too large, you may need to optimize your Wasm binary.

IC deployment

Before deploying to the Internet Computer you should understand that Sudograph is alpha/beta software. There are missing features and potential bugs. There is also no way to easily migrate data (if you change your schema, you'll need to either delete your state and start over or manually make changes to the Sudograph data structures). But if you must deploy to the IC, here is the command:

dfx deploy --network ic

Wasm binary optimization

If the replica rejects deployment of your canister because the payload is too large, you may need to optimize your Wasm binary.

Wasm binary optimization

At some point your compiled Rust Wasm binary will grow too large and will be rejected by the replica on deploy. This could happen because the Rust source code that you've written has grown too large, or because your schema has grown too large. A large schema leads to a large amount of generated Rust code.

To temporarily overcome this issue (optimization only goes so far; eventually the binary will be too big and the Internet Computer will need to address that), you can optimize your Rust Wasm binary.

Manual optimization

To do this manually, run the following command once from the root directory of your project to install the optimizer:

cargo install ic-cdk-optimizer --root target

You should also change your dfx.json file from:

{
    "canisters": {
        "graphql": {
            "type": "custom",
            "build": "cargo build --target wasm32-unknown-unknown --package graphql --release",
            "candid": "canisters/graphql/src/graphql.did",
            "wasm": "target/wasm32-unknown-unknown/release/graphql.wasm"
        }
    }
}

to:

{
    "canisters": {
        "graphql": {
            "type": "custom",
            "build": "cargo build --target wasm32-unknown-unknown --package graphql --release",
            "candid": "canisters/graphql/src/graphql.did",
            "wasm": "target/wasm32-unknown-unknown/release/graphql-optimized.wasm"
        }
    }
}

The only thing that changed was the wasm property of the graphql canister object, and it changed from "wasm": "target/wasm32-unknown-unknown/release/graphql.wasm" to "wasm": "target/wasm32-unknown-unknown/release/graphql-optimized.wasm".

Each time you run dfx deploy or dfx deploy graphql, you will need to run the following command after:

./target/bin/ic-cdk-optimizer ./target/wasm32-unknown-unknown/release/graphql.wasm -o ./target/wasm32-unknown-unknown/release/graphql-optimized.wasm

Automatic optimization

It can be tedious to run the above command manually after each dfx deploy. You could automate it with cargo scripts of some kind, or with make, bash, or some other build process or scripting system.

Another way is to adopt npm scripts. Your package.json could look something like this:

{
    "scripts": {
        "build": "cd canisters/playground && npm install && npm run build && cd ../frontend && npm install && npm run build",
        "dfx-deploy": "npm run dfx-build-graphql && npm run dfx-optimize-graphql && dfx deploy",
        "dfx-deploy-graphql": "npm run dfx-build-graphql && npm run dfx-optimize-graphql && dfx deploy graphql",
        "dfx-build-graphql": "cargo build --target wasm32-unknown-unknown --package graphql --release",
        "dfx-optimize-graphql": "./target/bin/ic-cdk-optimizer ./target/wasm32-unknown-unknown/release/graphql.wasm -o ./target/wasm32-unknown-unknown/release/graphql-optimized.wasm"
    }
}

Then instead of running dfx deploy or dfx deploy graphql you would run npm run dfx-deploy or npm run dfx-deploy-graphql.

In the future it would be nice for dfx.json to allow for some sort of build scripts, which would make this process less messy. There is an open forum post about this here.

Sudograph Client

The Sudograph Client is a frontend JavaScript/TypeScript library that provides a convenient API for interacting with your deployed graphql canister. It is an alternative to using agent-js directly, and currently works only for the frontend (Node.js support will come later).

Installation

Install Sudograph Client into your frontend project with npm install sudograph.

Use

In addition to the code on this page, many of the examples have frontend projects that show Sudograph Client in use.

For our example, let's imagine we have some sort of frontend UI component defined in a JavaScript file called component.js. You could import and prepare Sudograph Client for use as follows:

// component.js

import {
    gql,
    sudograph
} from 'sudograph';

const {
    query,
    mutation
} = sudograph({
    canisterId: 'ryjl3-tyaaa-aaaaa-aaaba-cai'
});

Above we import the gql tag and the sudograph function. The gql tag will be used for queries later on. To prepare for query or mutation execution, we call the sudograph function and pass in an options object. In this case, we simply put in the canister id of our graphql canister. The options object looks like this in TypeScript:

import { Identity } from '@dfinity/agent';

export type Options = Readonly<{
    canisterId: string;
    identity?: Identity;
    queryFunctionName?: string;
    mutationFunctionName?: string;
}>;

query

If we want to execute a query, we would do so as follows. Imagine defining a function to return all user ids:

// component.js

async function getUserIds() {
    const result = await query(gql`
        query {
            readUser {
                id
            }
        }
    `);

    const users = result.data.readUser;

    return users;
}

By the way, the gql tag is just a nice way to integrate with existing editor tools, such as syntax highlighting and type checking. You can remove it if you'd like.
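To illustrate why the tag is removable, here is a minimal identity-style template tag. This is only a sketch of the passthrough behavior, not Sudograph Client's actual gql implementation:

```javascript
// A template tag that just stitches the template back into a plain string.
// Editor tooling keys off the `gql` name for syntax highlighting and checking.
function gql(strings, ...values) {
    return strings.reduce((result, str, i) => result + str + (values[i] ?? ''), '');
}

const queryString = gql`query { readUser { id } }`;
console.log(queryString === 'query { readUser { id } }'); // true
```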

mutation

If we want to execute a mutation, we would do so as follows. Imagine defining a function to create a user:

// component.js

async function createUser(username) {
    const result = await mutation(gql`
        mutation ($username: String!) {
            createUser(input: {
                username: $username
            }) {
                id
            }
        }
    `, {
        username
    });

    const user = result.data.createUser;

    return user;
}

Changing query and mutation canister function names

The queryFunctionName and mutationFunctionName properties of the options object that we pass into the sudograph function allow us to specify the names of the canister functions that are exposed by our graphql canister. By default the generated query and mutation function names are graphql_query and graphql_mutation. Sudograph Client will assume those names should be used unless queryFunctionName and mutationFunctionName are supplied by the developer.
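For example, if your canister exported differently named functions, the options object might look like the following sketch (the overridden function names here are hypothetical):

```javascript
// Options for sudograph() overriding the default canister function names
// (graphql_query / graphql_mutation). The canister id matches earlier examples;
// the function names below are hypothetical.
const options = {
    canisterId: 'ryjl3-tyaaa-aaaaa-aaaba-cai',
    queryFunctionName: 'my_graphql_query',
    mutationFunctionName: 'my_graphql_mutation'
};
console.log(options.mutationFunctionName); // 'my_graphql_mutation'
```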

Authentication

The identity property of the options object that we pass into the sudograph function helps us out with authentication, and its type is defined by @dfinity/agent. If we pass in an identity object, it will be passed into the constructor of the @dfinity/agent HttpAgent that Sudograph Client is creating for you under the hood. This identity will be used to sign query and mutation requests, allowing you to implement authorization inside of your graphql canister.

The files example shows how to use Internet Identity with a graphql canister.

agent-js

If you don't wish to use Sudograph Client, you can reach for the lower-level agent-js library.

Installation

Install agent-js into your frontend project with npm install @dfinity/agent.

Use

In addition to the code on this page, the Sudograph Client implementation is a very good example of how to use agent-js directly to interact with a graphql canister.

For our example, let's imagine we have some sort of frontend UI component defined in a JavaScript file called component.js. You could import and prepare agent-js for use as follows:

// component.js

import {
    Actor,
    HttpAgent
} from '@dfinity/agent';

const idlFactory = ({ IDL }) => {
    return IDL.Service({
        graphql_query: IDL.Func([IDL.Text, IDL.Text], [IDL.Text], ['query']),
        graphql_mutation: IDL.Func([IDL.Text, IDL.Text], [IDL.Text], [])
    });
};

const agent = new HttpAgent();

const actor = Actor.createActor(idlFactory, {
    agent,
    canisterId: 'ryjl3-tyaaa-aaaaa-aaaba-cai'
});

Above we manually construct an IDL Factory describing the graphql_query and graphql_mutation functions exported from our canister. We then create an agent and use that agent with the canister id of our graphql canister to create an actor.

query

If we want to execute a query, we would do so as follows. Imagine defining a function to return all user ids:

// component.js

async function getUserIds() {
    const result = await actor.graphql_query(`
        query {
            readUser {
                id
            }
        }
    `, JSON.stringify({}));

    const resultJSON = JSON.parse(result);

    const users = resultJSON.data.readUser;

    return users;
}

mutation

If we want to execute a mutation, we would do so as follows. Imagine defining a function to create a user:

// component.js

async function createUser(username) {
    const result = await actor.graphql_mutation(`
        mutation ($username: String!) {
            createUser(input: {
                username: $username
            }) {
                id
            }
        }
    `, JSON.stringify({
        username
    }));

    const resultJSON = JSON.parse(result);

    const user = resultJSON.data.createUser;

    return user;
}

Authentication

The HttpAgent from @dfinity/agent takes an object as a parameter to its constructor. That object has a property called identity of type Identity, which can be found in @dfinity/agent. This identity will be used to sign requests made by the actor object that we create, allowing you to implement authorization inside of your graphql canister.

The files example shows how to use Internet Identity with a graphql canister.

Schema

The schema is where you define all of the data types of your application, including relations between types. It is also where you will eventually define many other settings, possibly including authentication, authorization, subnet, and Sudograph-specific settings.

An example schema might look like this:

type User {
    id: ID!
    username: String!
    blogPosts: [BlogPost!]! @relation(name: "User:blogPosts::BlogPost:author")
}

type BlogPost {
    id: ID!
    publishedAt: Date
    title: String!
    author: User! @relation(name: "User:blogPosts::BlogPost:author")
}

We have told Sudograph that we have two object types, User and BlogPost. We've described the fields of each type, using some included scalar types such as ID, Date, and String. We have also described one relation between our two types, a one-to-many relationship from User to BlogPost on the fields User:blogPosts and BlogPost:author.

The schema is an incredibly powerful yet simple tool for defining the complex data types of your application. Get to know the possibilities of your schema:

Scalars

Scalar types are not divisible; they have no fields of their own. The scalar types automatically available to you in a Sudograph schema are:

Blob

A Blob value maps to a Rust Vec<u8>.

type File {
    id: ID!
    contents: Blob!
}

Query or mutation inputs of type Blob should be strings or arrays of numbers that can be converted into Rust u8 numbers. Blob types in selection sets are always returned as JSON arrays of numbers.

An example in JavaScript of inputting a string for a Blob:

async function createSmallFile() {
    const result = await mutation(gql`
        mutation ($contents: Blob!) {
            createFile(input: {
                contents: $contents
            }) {
                contents
            }
        }
    `, {
        contents: 'hello'
    });

    const file = result.data.createFile;

    console.log(file);
}

The logged contents of the file would be this: [104, 101, 108, 108, 111].

You can convert the array of numbers back to a string like so:

[104, 101, 108, 108, 111].map(x => String.fromCharCode(x)).join('')

An example in JavaScript of inputting an array of numbers for a Blob:

async function createSmallFile() {
    const result = await mutation(gql`
        mutation ($contents: Blob!) {
            createFile(input: {
                contents: $contents
            }) {
                contents
            }
        }
    `, {
        contents: 'hello'.split('').map(x => x.charCodeAt())
    });

    const file = result.data.createFile;

    console.log(file);
}

The logged contents of the file would be this: [104, 101, 108, 108, 111].

You can convert the array of numbers back to a string like so:

[104, 101, 108, 108, 111].map(x => String.fromCharCode(x)).join('')

Blob types in selection sets can use offset and limit to grab specific bytes:

async function createSmallFile() {
    const result = await mutation(gql`
        mutation ($contents: Blob!) {
            createFile(input: {
                contents: $contents
            }) {
                contents(offset: 1, limit: 3)
            }
        }
    `, {
        contents: 'hello'
    });

    const file = result.data.createFile;

    console.log(file);
}

The logged contents of the file would be this: [101, 108, 108].

You can convert the array of numbers back to a string like so:

[101, 108, 108].map(x => String.fromCharCode(x)).join('')
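The per-character conversions shown above can also be written with the standard TextEncoder and TextDecoder APIs. This is a sketch using ordinary web/Node standard APIs; nothing here is Sudograph-specific:

```javascript
// String -> array of UTF-8 byte values, suitable as a Blob input:
const bytes = Array.from(new TextEncoder().encode('hello'));
console.log(bytes); // [104, 101, 108, 108, 111]

// Array of byte values (as returned in a selection set) -> string:
const text = new TextDecoder().decode(new Uint8Array([104, 101, 108, 108, 111]));
console.log(text); // 'hello'
```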

Boolean

A Boolean value maps to a Rust bool.

type User {
    id: ID!
    verified: Boolean!
}

Date

A Date value maps to a Rust String for storage and a chrono::DateTime for filtering.

type User {
    id: ID!
    dateOfBirth: Date!
}

Query or mutation inputs of type Date should be strings that can be parsed by chrono::DateTime. For example, in JavaScript new Date().toISOString() would be an acceptable format.

An example in JavaScript:

async function getUsersInInterval() {
    const result = await query(gql`
        query ($startDate: Date!, $endDate: Date!) {
            readUser(search: {
                dateOfBirth: {
                    gte: $startDate
                    lt: $endDate
                }
            }) {
                id
            }
        }
    `, {
        startDate: new Date('2021-07-01').toISOString(),
        endDate: new Date('2021-07-02').toISOString()
    });

    const users = result.data.readUser;

    return users;
}

Float

A Float value maps to a Rust f32.

type User {
    id: ID!
    height: Float!
}

ID

An ID value maps to a Rust String. All Sudograph object types must have a field called id of type ID.

type User {
    id: ID!
}

Int

An Int value maps to a Rust i32.

type User {
    id: ID!
    age: Int!
}

JSON

A JSON value maps to a Rust String.

type User {
    id: ID!
    meta: JSON!
}

Query or mutation inputs of type JSON should be any valid JSON value. JSON types in selection sets are always returned as JSON values.
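A quick sketch of what travels over the wire for a JSON input, given that query and mutation variables are sent as a JSON string (the meta field follows the User example above):

```javascript
// Any JSON-serializable value works as a JSON input; it is serialized along
// with the rest of the variables (see graphql_mutation's second parameter).
const variables = { meta: { theme: 'dark', tags: ['a', 'b'] } };
const wire = JSON.stringify(variables);
console.log(JSON.parse(wire).meta.tags.length); // 2
```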

String

A String value maps to a Rust String.

type User {
    id: ID!
    username: String!
}

Objects

Object types have fields that may be other object types or scalar types. Object types allow you to define the truly custom data types and relations that make up your application.

You could model a user with blog posts like so:

type User {
    id: ID!
    username: String!
    blogPosts: [BlogPost!]! @relation(name: "User:blogPosts::BlogPost:author")
}

type BlogPost {
    id: ID!
    publishedAt: Date
    title: String!
    author: User! @relation(name: "User:blogPosts::BlogPost:author")
}

You could model a family tree like so:

# TODO this example will not work yet
# TODO the self-referencing has some issues and multiple @relation directives per field is not yet supported
type Person {
    id: ID!
    firstName: String!
    lastName: String!
    father: Person @relation(name: "Person:father::Person:children")
    mother: Person @relation(name: "Person:mother::Person:children")
    children: [Person!]!
        @relation(name: "Person:father::Person:children")
        @relation(name: "Person:mother::Person:children")
}

You could model Ethereum block data like so:

type Block {
    id: ID!
    number: Int!
    hash: String!
    parent: Block
    transactionsRoot: String!
    transactionCount: Int!
    stateRoot: String!
    gasLimit: String!
    gasUsed: String!
    timestamp: Date!
    transactions: [Transaction!]! @relation(name: "Block:transactions::Transaction:block")
}

type Transaction {
    id: ID!
    hash: String!
    index: Int!
    from: String!
    to: String!
    value: String!
    gasPrice: String!
    gas: String!
    inputData: String!
    block: Block! @relation(name: "Block:transactions::Transaction:block")
    gasUsed: String!
}

Relations

Relations allow you to describe the relationships between object types and their fields. Sudograph has a variety of relation capabilities.

Please note that the name argument of the @relation directive is just an arbitrary string; there is no DSL required. The only requirement is that the name argument be the same on both sides of the relation.

Also note that you can only have one @relation directive per field for now.

One-to-one relations

One-to-one relations allow you to connect one object with another object.

One-sided

If you only care about retrieving relation information from one side of the relation, you don't need a @relation directive:

type Foot {
    id: ID!
    shoe: Shoe
}

type Shoe {
    id: ID!
}

In the above example, you will be able to select the shoe of a foot, like so:

query {
    readFoot(search: {
        id: {
            eq: "7c3nrr-6jhf3-2gozt-hh37a-d6nvf-lsdwv-d7bhp-uk5nt-r42y"
        }
    }) {
        id
        shoe {
            id
        }
    }
}

You will not be able to select the foot of a shoe.

Two-sided

If you care about retrieving relation information from both sides of the relation, add a @relation directive. The name argument of the @relation directive can be arbitrary, but it must be the same on both sides of the relation.

type Foot {
    id: ID!
    shoe: Shoe @relation(name: "Foot:shoe::Shoe:foot")
}

type Shoe {
    id: ID!
    foot: Foot @relation(name: "Foot:shoe::Shoe:foot")
}

One-to-many relations

One-to-many relations allow you to connect one object with multiple other objects.

One-sided

If you only care about retrieving relation information from one side of the relation, you don't need a @relation directive:

type Monkey {
    id: ID!
    name: String!
    bananas: [Banana!]!
}

type Banana {
    id: ID!
    color: String!
    size: Int!
}

In the above example, you will be able to select the bananas of a monkey, like so:

query {
    readMonkey(search: {
        id: {
            eq: "7c3nrr-6jhf3-2gozt-hh37a-d6nvf-lsdwv-d7bhp-uk5nt-r42y"
        }
    }) {
        id
        name
        bananas {
            id
            color
            size
        }
    }
}

You will not be able to select the monkey of a banana.

Two-sided

If you care about retrieving relation information from both sides of the relation, add a @relation directive. The name argument of the @relation directive can be arbitrary, but it must be the same on both sides of the relation.

type Monkey {
    id: ID!
    name: String!
    bananas: [Banana!]! @relation(name: "Monkey:bananas::Banana:monkey")
}

type Banana {
    id: ID!
    color: String!
    size: Int!
    monkey: Monkey @relation(name: "Monkey:bananas::Banana:monkey")
}

Many-to-many relations

Many-to-many relations allow you to connect multiple objects with multiple other objects. Many-to-many relations must have a @relation directive. The name argument of the @relation directive can be arbitrary, but it must be the same on both sides of the relation.

type Author {
    id: ID!
    documents: [Document!]! @relation(name: "Author:documents::Document:authors")
}

type Document {
    id: ID!
    text: String!
    authors: [Author!]! @relation(name: "Author:documents::Document:authors")
}

Custom scalars

Custom scalars (scalars that you define) are not yet supported. For now you'll have to work with the included scalars described above.

Custom resolvers

DISCLAIMER: Custom resolvers have only been minimally tested. Information presented here may not be entirely accurate. If you find issues please get in contact with @lastmjs or open issues on the repository.

Though Sudograph generates many powerful CRUD operations for you, it will not be able to cover every conceivable requirement of your application. Custom resolvers provide a way for you to create your own functionality that is accessible through the same GraphQL API as Sudograph's generated functionality. There are two main places a resolver can be written: within the graphql canister or in a separate canister.

Resolvers within the graphql canister

You can see a full example of Rust custom resolvers here.

To write resolvers within your graphql canister, start by augmenting your schema, for example in canisters/graphql/src/schema.graphql:

type Query {
    custom_get(id: ID!): Message
}

type Mutation {
    custom_set(id: ID!, text: String): Boolean!
}

type Message {
    id: ID!
    text: String!
}

We've added one custom query and one custom mutation to the schema. Next we need to implement the resolvers in code.

To implement a resolver, we add an asynchronous function to the Rust file that contains our graphql_database macro invocation. The function should have the same name as the query or mutation in the schema, and should use parameter and return types that match the types in the schema. The return type should be a Result with the Ok variant matching the return type in the schema, and you should use sudograph::async_graphql::Error as the Err variant. Object types generated from your schema are automatically in scope in Rust, because they are generated by the graphql_database macro.

Type conversions between GraphQL and Rust can be found here.

Now we'll implement the custom resolvers for the query and mutation in canisters/graphql/src/graphql.rs:


use sudograph::graphql_database;
use std::collections::HashMap;

graphql_database!("canisters/graphql/src/schema.graphql");

type PrimaryKey = String;
type MessageStore = HashMap<PrimaryKey, Option<Message>>;

async fn custom_get(id: ID) -> Result<Option<Message>, sudograph::async_graphql::Error> {
    let message_store = sudograph::ic_cdk::storage::get::<MessageStore>();

    let message_option = message_store.get(&id.to_string());

    match message_option {
        Some(message) => {
            return Ok(message.clone());
        },
        None => {
            return Ok(None);
        }
    };
}

async fn custom_set(id: ID, text: Option<String>) -> Result<bool, sudograph::async_graphql::Error> {
    let message_store = sudograph::ic_cdk::storage::get_mut::<MessageStore>();

    let message = match text {
        Some(text_value) => Some(Message {
            id: id.clone(),
            text: text_value
        }),
        None => None
    };

    message_store.insert(id.to_string(), message);

    return Ok(true);
}

Resolvers within a different canister

You can also write resolvers that are deployed to other canisters, using any language supported by the Internet Computer. For now you'll most likely be using Rust or Motoko, so examples are included below.

The process is similar to what you've just seen above, but in your GraphQL schema the custom queries and mutations have the addition of a @canister directive with the canister id of the canister that implements your resolver function.

Rust

In a Rust canister, start by augmenting your schema, for example in canisters/graphql/src/schema.graphql:

type Query {
    custom_get(id: ID!): Message @canister(id: "ryjl3-tyaaa-aaaaa-aaaba-cai")
}

type Mutation {
    custom_set(id: ID!, text: String): Boolean! @canister(id: "ryjl3-tyaaa-aaaaa-aaaba-cai")
}

type Message {
    id: ID!
    text: String!
}

Notice we've added @canister(id: "ryjl3-tyaaa-aaaaa-aaaba-cai") to the custom query and mutation.

Now we need to implement the Rust canister. Let's imagine we've created another Rust canister in canisters/another-rust-canister. We might have a file called canisters/another-rust-canister/src/lib.rs, and it would look like this:


use sudograph;
use std::collections::HashMap;

// TODO This hasn't been tested, might need some derive macros
struct ID(String);

impl ID {
    fn to_string(&self) -> String {
        return String::from(&self.0);
    }
}

// TODO This hasn't been tested, might need some derive macros
#[derive(Clone)]
struct Message {
    id: String,
    text: String
}

type PrimaryKey = String;
type MessageStore = HashMap<PrimaryKey, Option<Message>>;

#[sudograph::ic_cdk_macros::query]
async fn custom_get(id: ID) -> Option<Message> {
    let message_store = sudograph::ic_cdk::storage::get::<MessageStore>();

    let message_option = message_store.get(&id.to_string());

    match message_option {
        Some(message) => {
            return message.clone();
        },
        None => {
            return None;
        }
    };
}

#[sudograph::ic_cdk_macros::update]
async fn custom_set(id: ID, text: Option<String>) -> bool {
    let message_store = sudograph::ic_cdk::storage::get_mut::<MessageStore>();

    let message = match text {
        Some(text_value) => Some(Message {
            id: id.to_string(),
            text: text_value
        }),
        None => None
    };

    message_store.insert(id.to_string(), message);

    return true;
}

Notice that these functions do not return a Result; they directly return the Rust types that correspond to the GraphQL types. This may change in the future, as returning a Result may end up being more appropriate.

Also notice that we had to implement the ID and Message types ourselves. We do not have all of the generated types available because we are not using the graphql_database macro in this canister. In the future Sudograph may provide a simple way to generate these types for you without generating the entire database, but for now you'll have to implement them yourself or figure out an appropriate way to induce proper serialization and deserialization. For example, Candid might serialize and deserialize ID to and from strings for you, but for now you'll have to work that out on your own.

Motoko

You can see a full example of Motoko custom resolvers here.

In a Motoko canister, start by augmenting your schema, for example in canisters/graphql/src/schema.graphql:

type Query {
    customGet(id: ID!): Message @canister(id: "ryjl3-tyaaa-aaaaa-aaaba-cai")
}

type Mutation {
    customSet(id: ID!, text: String): Boolean! @canister(id: "ryjl3-tyaaa-aaaaa-aaaba-cai")
}

type Message {
    id: ID!
    text: String!
}

Notice we've added @canister(id: "ryjl3-tyaaa-aaaaa-aaaba-cai") to the custom query and mutation.

Now we need to implement the Motoko canister. Let's imagine we've created a Motoko canister in canisters/motoko. We might have a file called canisters/motoko/main.mo, and it would look like this:

import Text "mo:base/Text";
import Map "mo:base/HashMap";
import Option "mo:base/Option";

actor Motoko {
    let message_store = Map.HashMap<Text, ?Message>(10, Text.equal, Text.hash);

    type Message = {
        id: Text;
        text: Text;
    };

    public query func customGet(id: Text): async ?Message {
        return Option.flatten(message_store.get(id));
    };

    public func customSet(id: Text, text: ?Text): async Bool {
        let message: ?Message = switch (text) {
            case null null;
            case (?text_value) Option.make({
                id;
                text = text_value;
            });
        };
        
        message_store.put(id, message);

        return true;
    };
}

Implementing the Motoko resolvers is very similar to implementing the Rust resolvers; the biggest difference, besides the language itself, is the type conversions. We've implemented the Message type, and we've excluded the ID type, using the native Motoko Text type instead. Again, you might have to experiment with the serialization and deserialization of values between canisters; much of that comes down to Candid.

Other languages

Other languages are somewhat possible to use now (C, C++, AssemblyScript), and many more will come in the future as WebAssembly matures. Writing resolvers in each of these languages will be similar to writing them in Rust or Motoko. Once your schema is set up and correctly pointing to a canister, you simply implement the resolver in the language of your choice and ensure that the types align correctly.

Type conversions

GraphQL -> Rust

Object, ID, and Date types must be created in Rust canisters if the graphql_database macro is not invoked. ID and Date types might work as String in Rust.

  • Blob -> Vec<u8>
  • Boolean -> bool
  • Date -> Date
  • Float -> f32
  • ID -> ID
  • Int -> i32
  • JSON -> serde_json::Value
  • String -> String

Creating a custom ID type:


// TODO This hasn't been tested, might need some derive macros
struct ID(String);

impl ID {
    fn to_string(&self) -> String {
        return String::from(&self.0);
    }
}

Creating a custom Date type:


// TODO This hasn't been tested, might need some derive macros
struct Date(String);

impl Date {
    fn to_string(&self) -> String {
        return String::from(&self.0);
    }
}

GraphQL -> Motoko

Object types must be manually created in Motoko.

  • Blob -> Blob
  • Boolean -> Bool
  • Date -> Text
  • Float -> Float
  • ID -> Text
  • Int -> Int32
  • JSON -> Text (it's unclear if this will work)
  • String -> Text

Custom directives

Custom directives (directives that you define) are not yet supported. You'll have to work with the Sudograph directives.

Sudograph directives

Sudograph provides a number of directives for use within your GraphQL schema. Directives can be applied to object types or fields within your schema. The following are available for use:

@relation

  • name: relation
  • arguments: name
  • application: field
  • description: Indicates a two-sided relationship, where both sides of the relationship need to be updated during relation mutations. The name argument is an arbitrary string, but must be the same on both fields representing each side of the relationship.
type Foot {
    id: ID!
    shoe: Shoe @relation(name: "Foot:shoe::Shoe:foot")
}

type Shoe {
    id: ID!
    foot: Foot @relation(name: "Foot:shoe::Shoe:foot")
}

@canister

  • name: canister
  • arguments: id
  • application: field
  • description: Indicates the canister that implements the resolver function. The id argument is used to perform a cross-canister function call under the hood.
type Query {
    customGet(id: ID!): Message @canister(id: "ryjl3-tyaaa-aaaaa-aaaba-cai")
}

type Mutation {
    customSet(id: ID!, text: String): Boolean! @canister(id: "ryjl3-tyaaa-aaaaa-aaaba-cai")
}

type Message {
    id: ID!
    text: String!
}

Possible future relations

Just let your imagination run wild with what some of these could do:

  • @ignore
  • @auth
  • @token
  • @subnet

Sudograph settings

Sudograph will eventually allow the developer to customize many settings. Settings are configured in your GraphQL schema using the SudographSettings object type. The following are supported now:

type SudographSettings {
    exportGeneratedQueryFunction: true
    exportGeneratedMutationFunction: true
    exportGeneratedInitFunction: true
    exportGeneratedPostUpgradeFunction: true
}

exportGeneratedQueryFunction

Defaults to true. If set to false, the graphql_query function generated by Sudograph will not be exported as a publicly available canister function. This would allow you to implement your own logic before executing a query, for example as part of an authorization flow.

Here's an example of overriding the generated graphql_query function with some basic authorization. You would create the following GraphQL schema in canisters/graphql/src/schema.graphql:

type SudographSettings {
    exportGeneratedQueryFunction: false
}

type User {
    id: ID!
}

You would write the following in canisters/graphql/src/graphql.rs:


use sudograph::graphql_database;

graphql_database!("canisters/graphql/src/schema.graphql");

#[sudograph::ic_cdk_macros::query]
async fn graphql_query_custom(query_string: String, variables_json_string: String) -> String {
    let authorized_principal = sudograph::ic_cdk::export::Principal::from_text("y6lgw-chi3g-2ok7i-75s5h-k34kj-ybcke-oq4nb-u4i7z-vclk4-hcpxa-hqe").expect("should be able to decode");

    if sudograph::ic_cdk::caller() != authorized_principal {
        panic!("Not authorized");
    }

    return graphql_query(query_string, variables_json_string).await;
}

You would update canisters/graphql/src/graphql.did:

service : {
    "graphql_query_custom": (text, text) -> (text) query;
    "graphql_mutation": (text, text) -> (text);
}

exportGeneratedMutationFunction

Defaults to true. If set to false, the graphql_mutation function generated by Sudograph will not be exported as a publicly available canister function. This would allow you to implement your own logic before executing a mutation, for example as part of an authorization flow.

Here's an example of overriding the generated graphql_mutation function with some basic authorization. You would create the following GraphQL schema in canisters/graphql/src/schema.graphql:

type SudographSettings {
    exportGeneratedMutationFunction: false
}

type User {
    id: ID!
}

You would write the following in canisters/graphql/src/graphql.rs:


use sudograph::graphql_database;

graphql_database!("canisters/graphql/src/schema.graphql");

#[sudograph::ic_cdk_macros::update]
async fn graphql_mutation_custom(mutation_string: String, variables_json_string: String) -> String {
    let authorized_principal = sudograph::ic_cdk::export::Principal::from_text("y6lgw-chi3g-2ok7i-75s5h-k34kj-ybcke-oq4nb-u4i7z-vclk4-hcpxa-hqe").expect("should be able to decode");

    if sudograph::ic_cdk::caller() != authorized_principal {
        panic!("Not authorized");
    }

    return graphql_mutation(mutation_string, variables_json_string).await;
}

You would update canisters/graphql/src/graphql.did:

service : {
    "graphql_query": (text, text) -> (text) query;
    "graphql_mutation_custom": (text, text) -> (text);
}

exportGeneratedInitFunction

Defaults to true. If set to false, the init function generated by Sudograph will not be exported as a publicly available canister function. This would allow you to implement your own logic during canister initialization. You'll want to make sure to call the generated init function after your functionality is complete, as it executes all of the init mutations that initialize the database.

Here's an example of overriding the generated init function. You would create the following GraphQL schema in canisters/graphql/src/schema.graphql:

type SudographSettings {
    exportGeneratedInitFunction: false
}

type User {
    id: ID!
}

You would write the following in canisters/graphql/src/graphql.rs:


use sudograph::graphql_database;

graphql_database!("canisters/graphql/src/schema.graphql");

#[sudograph::ic_cdk_macros::init]
async fn init_custom() {
    init().await;
}

exportGeneratedPostUpgradeFunction

Defaults to true. If set to false, the post_upgrade function generated by Sudograph will not be exported as a publicly available canister function. This would allow you to implement your own logic during canister post upgrade. You'll want to make sure to call the generated post_upgrade function after your functionality is complete, as it executes all of the init mutations that initialize the database (unless you are keeping your state through stable memory, then you would not want to initialize the database again).

Here's an example of overriding the generated post_upgrade function. You would create the following GraphQL schema in canisters/graphql/src/schema.graphql:

type SudographSettings {
    exportGeneratedPostUpgradeFunction: false
}

type User {
    id: ID!
}

You would write the following in canisters/graphql/src/graphql.rs:


use sudograph::graphql_database;

graphql_database!("canisters/graphql/src/schema.graphql");

#[sudograph::ic_cdk_macros::post_upgrade]
async fn post_upgrade_custom() {
    post_upgrade().await;
}

Generated Schema

Sudograph takes your schema and generates a much more powerful schema along with the resolvers for that schema.

In addition to this documentation, assuming you've generated an example project with npx sudograph and deployed your canisters, then navigate to the playground at http://r7inp-6aaaa-aaaaa-aaabq-cai.localhost:8000 in a Chromium browser and click the Docs button in the top right corner. That documentation explains everything that you can do with your newly generated schema.

As an example, given the following simple schema:

type User {
    id: ID!
}

type BlogPost {
    id: ID!
}

Sudograph will generate the following schema along with its resolvers:

type Query {
  readUser(
    search: ReadUserInput,
    limit: Int,
    offset: Int,
    order: OrderUserInput
  ): [User!]!
	
  readBlogPost(
    search: ReadBlogPostInput,
    limit: Int,
    offset: Int,
    order: OrderBlogPostInput
  ): [BlogPost!]!
}

input DeleteUserInput {
	id: ID
	ids: [ID!]
}

input UpdateBlogPostInput {
	id: ID!
}

input DeleteBlogPostInput {
	id: ID
	ids: [ID!]
}

input ReadUserInput {
	id: ReadIDInput
	and: [ReadUserInput!]
	or: [ReadUserInput!]
}

input ReadIDInput {
	eq: ID
	gt: ID
	gte: ID
	lt: ID
	lte: ID
	contains: ID
}

input OrderUserInput {
	id: OrderDirection
}

enum OrderDirection {
	ASC
	DESC
}

type User {
	id: ID!
}

input ReadBlogPostInput {
	id: ReadIDInput
	and: [ReadBlogPostInput!]
	or: [ReadBlogPostInput!]
}

input OrderBlogPostInput {
	id: OrderDirection
}

type BlogPost {
	id: ID!
}

type Mutation {
	createUser(input: CreateUserInput): [User!]!
	createBlogPost(input: CreateBlogPostInput): [BlogPost!]!
	updateUser(input: UpdateUserInput!): [User!]!
	updateBlogPost(input: UpdateBlogPostInput!): [BlogPost!]!
	deleteUser(input: DeleteUserInput!): [User!]!
	deleteBlogPost(input: DeleteBlogPostInput!): [BlogPost!]!
	initUser: Boolean!
	initBlogPost: Boolean!
}

input UpdateUserInput {
	id: ID!
}

input CreateBlogPostInput {
	id: ID
}

input CreateUserInput {
	id: ID
}

Query

Sudograph will generate the equivalent of the Query object type based on your GraphQL schema. If you have specified your own Query object type, the two object types will be combined into the final Query object type.

The fields in the Query object type generated by Sudograph are:

read

The read query is the main way to read data from your GraphQL database.

Per object type defined in your GraphQL schema, Sudograph generates one read field on the Query object type. We'll focus in on what happens with one object type defined. Imagine your schema looks like this:

type User {
    id: ID!
}

Sudograph will generate the following (we're focusing on just one part of the generated schema):

type Query {
    readUser(
        search: ReadUserInput,
        limit: Int
        offset: Int
        order: OrderUserInput
    ): [User!]!
}

input ReadUserInput {
	id: ReadIDInput
	and: [ReadUserInput!]
	or: [ReadUserInput!]
}

input OrderUserInput {
	id: OrderDirection
}

enum OrderDirection {
	ASC
	DESC
}

Each read query has the ability to search, limit, offset, and order. Each read query returns an array of its corresponding object types.

It's important to remember that within read selection sets you also have the ability to search, limit, offset, and order on any many-relation.

For example if you had this schema:

type User {
    id: ID!
    blogPosts: [BlogPost!]!
}

type BlogPost {
    id: ID!
    title: String!
}

You could write a query like this:

query {
    readUser {
        id
        blogPosts(
            search: {
                title: {
                    contains: "The"
                }
            }
            offset: 0
            limit: 10
            order: {
                title: ASC
            }
        ) {
            id
            title
        }
    }
}

Mutation

Sudograph will generate the equivalent of the Mutation object type based on your GraphQL schema. If you have specified your own Mutation object type, the two object types will be combined into the final Mutation object type.

The fields in the Mutation object type generated by Sudograph are:

create

The create mutation is the main way to create data in your GraphQL database.

Per object type defined in your GraphQL schema, Sudograph generates one create field on the Mutation object type. We'll focus in on what happens with one object type defined. Imagine your schema looks like this:

type User {
    id: ID!
}

Sudograph will generate the following (we're focusing on just one part of the generated schema):

type Mutation {
	createUser(input: CreateUserInput): [User!]!
}

input CreateUserInput {
	id: ID
}

It's important to remember that within create selection sets you also have the ability to search, limit, offset, and order on any many-relation.

For example if you had this schema:

type User {
    id: ID!
    blogPosts: [BlogPost!]!
}

type BlogPost {
    id: ID!
    title: String!
}

You could write a mutation like this:

mutation {
    createUser(input: {
        blogPosts: {
            connect: ["7c3nrr-6jhf3-2gozt-hh37a-d6nvf-lsdwv-d7bhp-uk5nt-r42y"]
        }
    }) {
        id
        blogPosts(
            search: {
                title: {
                    contains: "The"
                }
            }
            offset: 0
            limit: 10
            order: {
                title: ASC
            }
        ) {
            id
            title
        }
    }
}

update

The update mutation is the main way to update data in your GraphQL database.

Per object type defined in your GraphQL schema, Sudograph generates one update field on the Mutation object type. We'll focus in on what happens with one object type defined. Imagine your schema looks like this:

type User {
    id: ID!
}

Sudograph will generate the following (we're focusing on just one part of the generated schema):

type Mutation {
	updateUser(input: UpdateUserInput!): [User!]!
}

input UpdateUserInput {
	id: ID!
}

It's important to remember that within update selection sets you also have the ability to search, limit, offset, and order on any many-relation.

For example if you had this schema:

type User {
    id: ID!
    blogPosts: [BlogPost!]!
}

type BlogPost {
    id: ID!
    title: String!
}

You could write a mutation like this:

mutation {
    updateUser(input: {
        id: "7c3nrr-6jhf3-2gozt-hh37a-d6nvf-lsdwv-d7bhp-uk5nt-r42y"
        blogPosts: {
            connect: ["2c3nrr-4jhf3-2gozt-hj37a-d6nvf-lsdwv-d7bhp-uk5nt-r42y"]
        }
    }) {
        id
        blogPosts(
            search: {
                title: {
                    contains: "The"
                }
            }
            offset: 0
            limit: 10
            order: {
                title: ASC
            }
        ) {
            id
            title
        }
    }
}

delete

The delete mutation is the main way to delete data in your GraphQL database.

Per object type defined in your GraphQL schema, Sudograph generates one delete field on the Mutation object type. We'll focus in on what happens with one object type defined. Imagine your schema looks like this:

type User {
    id: ID!
}

Sudograph will generate the following (we're focusing on just one part of the generated schema):

type Mutation {
	deleteUser(input: DeleteUserInput!): [User!]!
}

input DeleteUserInput {
	id: ID
	ids: [ID!]
}

It's important to remember that within delete selection sets you also have the ability to search, limit, offset, and order on any many-relation.

For example if you had this schema:

type User {
    id: ID!
    blogPosts: [BlogPost!]!
}

type BlogPost {
    id: ID!
    title: String!
}

You could write a mutation like this:

mutation {
    deleteUser(input: {
        id: "7c3nrr-6jhf3-2gozt-hh37a-d6nvf-lsdwv-d7bhp-uk5nt-r42y"
    }) {
        id
        blogPosts(
            search: {
                title: {
                    contains: "The"
                }
            }
            offset: 0
            limit: 10
            order: {
                title: ASC
            }
        ) {
            id
            title
        }
    }
}

init

The init mutation initializes the underlying Rust data structures in your GraphQL database. This mutation must be run before other queries or mutations can be executed for an object type. Sudograph will automatically run all init mutations for all of your object types in the graphql canister's init and post_upgrade functions, unless you override them.

Per object type defined in your GraphQL schema, Sudograph generates one init field on the Mutation object type. We'll focus in on what happens with one object type defined. Imagine your schema looks like this:

type User {
    id: ID!
}

Sudograph will generate the following (we're focusing on just one part of the generated schema):

type Mutation {
	initUser: Boolean!
}

Subscription

Subscriptions are not currently supported by Sudograph.

Because the Internet Computer itself does not have any push mechanisms exposed, it will be difficult to provide subscription capabilities in the normal ways e.g. web sockets.

For now you will have to implement your own polling solutions to know when data has been updated.
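Such a polling solution boils down to repeatedly querying and comparing successive results. Here's a minimal change-detection sketch in plain Rust; the `fetch` closure stands in for whatever canister query call your client makes (it is not a Sudograph API):

```rust
// Minimal polling sketch: call a data source, report whether the
// latest snapshot differs from the previous one. The caller owns
// the `previous` snapshot between polls.
fn poll_once(previous: &mut Option<String>, fetch: impl Fn() -> String) -> bool {
    let latest = fetch();
    let changed = previous.as_deref() != Some(latest.as_str());
    *previous = Some(latest);
    changed
}

fn main() {
    let mut last: Option<String> = None;

    // First poll always counts as a change (there is no previous snapshot).
    assert!(poll_once(&mut last, || "v1".to_string()));
    // Same payload again: no change detected.
    assert!(!poll_once(&mut last, || "v1".to_string()));
    // New payload: change detected.
    assert!(poll_once(&mut last, || "v2".to_string()));
}
```

In a real client you would run this on a timer and re-render whenever `poll_once` returns true.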

Search

The search input allows for flexible querying of records. You can query by scalars and relations to arbitrary depths (assuming performance allows). You can also use arbitrary combinations of and and or in your searches.

You can search by scalar fields using the inputs generated for each scalar type.

Blob

Generated input:

input ReadBlobInput {
	eq: Blob
	contains: Blob
	startsWith: Blob
	endsWith: Blob
}

Examples:

query {
    readFile(search: {
        contents: {
            eq: [101, 108, 108]
        }
    }) {
        id
        contents
    }
}

query {
    readFile(search: {
        contents: {
            contains: [108, 108]
        }
    }) {
        id
        contents
    }
}

query {
    readFile(search: {
        contents: {
            startsWith: [101]
        }
    }) {
        id
        contents
    }
}

query {
    readFile(search: {
        contents: {
            endsWith: [108]
        }
    }) {
        id
        contents
    }
}
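The Blob operators above correspond to plain byte-sequence checks. Here's an illustrative sketch of those semantics in Rust over byte slices (this mirrors the queries above but is not Sudograph's actual implementation):

```rust
// Illustrative semantics for the Blob search operators over byte slices.

fn blob_eq(value: &[u8], target: &[u8]) -> bool {
    value == target
}

fn blob_contains(value: &[u8], needle: &[u8]) -> bool {
    // An empty needle matches everything; guard because windows(0) panics.
    needle.is_empty() || value.windows(needle.len()).any(|w| w == needle)
}

fn blob_starts_with(value: &[u8], prefix: &[u8]) -> bool {
    value.starts_with(prefix)
}

fn blob_ends_with(value: &[u8], suffix: &[u8]) -> bool {
    value.ends_with(suffix)
}

fn main() {
    // "hello" as bytes, standing in for a File's contents field.
    let contents = vec![104, 101, 108, 108, 111];

    assert!(blob_eq(&contents, b"hello"));
    assert!(blob_contains(&contents, &[108, 108]));
    assert!(blob_starts_with(&contents, &[104]));
    assert!(blob_ends_with(&contents, &[111]));
}
```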

Boolean

Generated input:

input ReadBooleanInput {
	eq: Boolean
}

Examples:

query {
    readUser(search: {
        living: {
            eq: true
        }
    }) {
        id
        living
    }
}

Date

Generated input:

input ReadDateInput {
	eq: Date
	gt: Date
	gte: Date
	lt: Date
	lte: Date
}

Examples:

query {
    readBlogPost(search: {
        createdAt: {
            eq: "2021-07-02T22:45:44.001Z"
        }
    }) {
        id
        title
    }
}

query {
    readBlogPost(search: {
        createdAt: {
            gt: "2021-07-02T22:45:44.001Z"
        }
    }) {
        id
        title
    }
}

query {
    readBlogPost(search: {
        createdAt: {
            gte: "2021-07-02T22:45:44.001Z"
        }
    }) {
        id
        title
    }
}

query {
    readBlogPost(search: {
        createdAt: {
            lt: "2021-07-02T22:45:44.001Z"
        }
    }) {
        id
        title
    }
}

query {
    readBlogPost(search: {
        createdAt: {
            lte: "2021-07-02T22:45:44.001Z"
        }
    }) {
        id
        title
    }
}

Float

Generated input:

input ReadFloatInput {
	eq: Float
	gt: Float
	gte: Float
	lt: Float
	lte: Float
}

Examples:

query {
    readUser(search: {
        height: {
            eq: 5.8
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        height: {
            gt: 5.8
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        height: {
            gte: 5.8
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        height: {
            lt: 5.8
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        height: {
            lte: 5.8
        }
    }) {
        id
    }
}

ID

Generated input:

input ReadIDInput {
	eq: ID
	gt: ID
	gte: ID
	lt: ID
	lte: ID
	contains: ID
}

Examples:

query {
    readUser(search: {
        id: {
            eq: "7c3nrr-6jhf3-2gozt-hh37a-d6nvf-lsdwv-d7bhp-uk5nt-r42y"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        id: {
            gt: "1"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        id: {
            gte: "1"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        id: {
            lt: "100"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        id: {
            lte: "100"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        id: {
            contains: "7c3nrr"
        }
    }) {
        id
    }
}

Int

Generated input:

input ReadIntInput {
	eq: Int
	gt: Int
	gte: Int
	lt: Int
	lte: Int
}

Examples:

query {
    readUser(search: {
        age: {
            eq: 25
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        age: {
            gt: 20
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        age: {
            gte: 30
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        age: {
            lt: 45
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        age: {
            lte: 70
        }
    }) {
        id
    }
}

JSON

Generated input:

input ReadJSONInput {
	eq: String
	gt: String
	gte: String
	lt: String
	lte: String
	contains: String
}

Examples:

query {
    readUser(search: {
        meta: {
            eq: "{ \"zone\": 5 }"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        meta: {
            gt: "{ \"zone\": 5 }"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        meta: {
            gte: "{ \"zone\": 5 }"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        meta: {
            lt: "{ \"zone\": 5 }"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        meta: {
            lte: "{ \"zone\": 5 }"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        meta: {
            contains: "zone"
        }
    }) {
        id
    }
}

String

Generated input:

input ReadStringInput {
    eq: String
	gt: String
	gte: String
	lt: String
	lte: String
	contains: String
}

Examples:

query {
    readUser(search: {
        username: {
            eq: "lastmjs"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        username: {
            gt: "lastmjs"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        username: {
            gte: "lastmjs"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        username: {
            lt: "lastmjs"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        username: {
            lte: "lastmjs"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        username: {
            contains: "mjs"
        }
    }) {
        id
    }
}

and

The search input for each object type, in addition to all scalar and relation fields, contains an and field. If you want to and together multiple searches of the same field, there are two ways to do so:

query {
    readUser(search: {
        age: {
            gte: 5
            lte: 10
        }
    }) {
        id
        age
    }
}

This can also be achieved like so:

query {
    readUser(search: {
        and: [
            {
                age: {
                    gte: 5
                }
            },
            {
                age: {
                    lte: 10
                }
            }
        ]
    }) {
        id
        age
    }
}

or

The search input for each object type, in addition to all scalar and relation fields, contains an or field. If you want to or together multiple searches of the same field, you can do so:

query {
    readUser(search: {
        or: [
            {
                age: {
                    eq: 5
                }
            },
            {
                age: {
                    eq: 6
                }
            }
        ]
    }) {
        id
        age
    }
}
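The two operators compose like ordinary boolean logic: and requires every sub-search to match, while or requires at least one. Here's an illustrative sketch of that evaluation in plain Rust over an in-memory list (not Sudograph's actual implementation):

```rust
// Illustrative and/or search semantics over an in-memory record list.

struct User {
    id: u32,
    age: i32,
}

// In this sketch, a search input is just a predicate over a record.
type Search = Box<dyn Fn(&User) -> bool>;

// and: every sub-search must match.
fn and(conditions: Vec<Search>) -> Search {
    Box::new(move |user: &User| conditions.iter().all(|c| c(user)))
}

// or: at least one sub-search must match.
fn or(conditions: Vec<Search>) -> Search {
    Box::new(move |user: &User| conditions.iter().any(|c| c(user)))
}

fn read_user(users: &[User], search: &Search) -> Vec<u32> {
    users.iter().filter(|&user| search(user)).map(|user| user.id).collect()
}

fn main() {
    let users = vec![
        User { id: 0, age: 4 },
        User { id: 1, age: 5 },
        User { id: 2, age: 6 },
        User { id: 3, age: 11 },
    ];

    // and: [{ age: { gte: 5 } }, { age: { lte: 10 } }]
    let between = and(vec![
        Box::new(|u: &User| u.age >= 5),
        Box::new(|u: &User| u.age <= 10),
    ]);
    assert_eq!(read_user(&users, &between), vec![1, 2]);

    // or: [{ age: { eq: 5 } }, { age: { eq: 6 } }]
    let either = or(vec![
        Box::new(|u: &User| u.age == 5),
        Box::new(|u: &User| u.age == 6),
    ]);
    assert_eq!(read_user(&users, &either), vec![1, 2]);
}
```

Because and and or inputs are themselves full search inputs, they nest to arbitrary depth, just like the predicate combinators in the sketch.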

You can search by relation fields using the search inputs generated for each object type.

Imagine the following schema:

type User {
    id: ID!
    username: String!
    blogPosts: [BlogPost!]! @relation(name: "User:blogPosts::BlogPost:author")
}

type BlogPost {
    id: ID!
    publishedAt: Date
    title: String!
    author: User! @relation(name: "User:blogPosts::BlogPost:author")
}

The search inputs generated for each object type would be:

input ReadUserInput {
	id: ReadIDInput
	username: ReadStringInput
	blogPosts: ReadBlogPostInput
	and: [ReadUserInput!]
	or: [ReadUserInput!]
}

input ReadBlogPostInput {
	id: ReadIDInput
	publishedAt: ReadDateInput
	title: ReadStringInput
	author: ReadUserInput
	and: [ReadBlogPostInput!]
	or: [ReadBlogPostInput!]
}

You can search across relations like so:

query {
    readUser(search: {
        blogPosts: {
            title: {
                contains: "The"
            }
        }
    }) {
        id
        username
        blogPosts {
            id
            title
        }
    }
}

Limit

The limit input argument is an Int that allows you to specify how many records to return for a selection. For example, a limit of 0 would always return 0 records, and a limit of 10 would return no more than 10 records.

If the limit specified is greater than the number of records available based on the query inputs, then all of the available records will be returned.

Combining limit with offset allows for flexible paging capabilities. A good example of paging can be found in the frontend of the files example.
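As a sketch of how paging falls out of these two arguments, a client can derive limit and offset from a page number (pageArgs, page, and pageSize are hypothetical client-side names, not part of Sudograph):

```typescript
// Hypothetical client-side helper: turn a zero-based page number into
// the limit/offset arguments described in this section.
function pageArgs(page: number, pageSize: number) {
    return { limit: pageSize, offset: page * pageSize };
}

// With a pageSize of 10, page 0 covers records 0..9,
// page 2 covers records 20..29, and so on.
```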

Assuming there are 10 User records in the database:

query {
    readUser(limit: 10) {
        id
    }
}

# The readUser property in the selection set would be:
# [{ id: 0 }, { id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }, { id: 6 }, { id: 7 }, { id: 8 }, { id: 9 }]

query {
    readUser(limit: 5) {
        id
    }
}

# The readUser property in the selection set would be:
# [{ id: 0 }, { id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }]

query {
    readUser(limit: 0) {
        id
    }
}

# The readUser property in the selection set would be:
# []

It's important to remember that within any selection set you can apply limit to any many-relation:

query {
    readUser {
        id
        blogPosts(limit: 5) {
            title
        }
    }
}

mutation {
    createUser(input: {
        username: "lastmjs"
    }) {
        id
        blogPosts(limit: 5) {
            title
        }
    }
}

mutation {
    updateUser(input: {
        id: "0"
        username: "lastmjs"
    }) {
        id
        blogPosts(limit: 5) {
            title
        }
    }
}

mutation {
    deleteUser(input: {
        id: "0"
    }) {
        id
        blogPosts(limit: 5) {
            title
        }
    }
}

Offset

The offset input argument is an Int that allows you to specify the starting index in the selection of records. For example, imagine there are 10 User records in the database. An offset of 0 would return all 10 records starting at index 0, which is the first record (assuming they are already ordered in the database):

query {
    readUser(offset: 0) {
        id
    }
}

# The readUser property in the selection set would be:
# [{ id: 0 }, { id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }, { id: 6 }, { id: 7 }, { id: 8 }, { id: 9 }]

An offset of 1 would return 9 records starting at index 1, which is the second record:

query {
    readUser(offset: 1) {
        id
    }
}

# The readUser property in the selection set would be:
# [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }, { id: 6 }, { id: 7 }, { id: 8 }, { id: 9 }]

If the offset specified is greater than or equal to the number of records available based on the query inputs, Sudograph will panic, causing the call to trap. Essentially, at this point the offset has gone beyond the end of the selection array. If you disagree with this choice, let me know @lastmjs or open an issue in the repository.
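Because an out-of-range offset traps, a client may want to guard the offset before issuing the query. A hypothetical sketch (safeOffset and totalRecords are illustrative names; the total would come from your own count query or application state):

```typescript
// Hypothetical guard: keep an offset strictly inside the available
// records, since an offset >= the record count traps the call.
function safeOffset(offset: number, totalRecords: number): number | null {
    if (totalRecords === 0) {
        return null; // no records at all: skip the query instead of trapping
    }
    return Math.min(offset, totalRecords - 1);
}
```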

Combining offset with limit allows for flexible paging capabilities. A good example of paging can be found in the frontend of the files example.

It's important to remember that within any selection set you can apply offset to any many-relation:

query {
    readUser {
        id
        blogPosts(offset: 5) {
            title
        }
    }
}

mutation {
    createUser(input: {
        username: "lastmjs"
    }) {
        id
        blogPosts(offset: 5) {
            title
        }
    }
}

mutation {
    updateUser(input: {
        id: "0"
        username: "lastmjs"
    }) {
        id
        blogPosts(offset: 5) {
            title
        }
    }
}

mutation {
    deleteUser(input: {
        id: "0"
    }) {
        id
        blogPosts(offset: 5) {
            title
        }
    }
}

Order

The order input allows you to order by any one scalar field of an object type. In the future it may be possible to order by multiple fields. There are two possible orderings: DESC and ASC.

Here are some examples assuming the following schema:

type User {
    id: ID!
    age: Int!
    username: String!
}

query {
    readUser(order: {
        id: DESC
    }) {
        id
    }
}

query {
    readUser(order: {
        age: ASC
    }) {
        id
    }
}

query {
    readUser(order: {
        username: DESC
    }) {
        id
    }
}

It's important to remember that within any selection set you can apply order to any many-relation:

query {
    readUser {
        id
        blogPosts(order: {
            title: DESC
        }) {
            title
        }
    }
}

mutation {
    createUser(input: {
        username: "lastmjs"
    }) {
        id
        blogPosts(order: {
            title: DESC
        }) {
            title
        }
    }
}

mutation {
    updateUser(input: {
        id: "0"
        username: "lastmjs"
    }) {
        id
        blogPosts(order: {
            title: DESC
        }) {
            title
        }
    }
}

mutation {
    deleteUser(input: {
        id: "0"
    }) {
        id
        blogPosts(order: {
            title: DESC
        }) {
            title
        }
    }
}

Custom database operations

Sudograph is built on top of Sudodb, the underlying database. If you wish, you can use Sudodb directly, or even the database data structures themselves. The automatically generated documentation should be sufficient for now.

Migrations

Automated migrations are not currently supported. If you wish to update a Sudograph canister that has already been deployed, you will lose all of your saved data unless you implement your own migrations.

You can implement your own migrations by implementing and exporting pre_upgrade and post_upgrade canister functions. You can save your state to stable memory in the pre_upgrade function, and restore that state in the post_upgrade function. See here and here for more information.

The plan is to eventually automate migrations as much as possible. If you change your schema and wish to update it on a live canister, Sudograph will generate migrations written in Rust to accomplish the migration for you. If a migration cannot be performed automatically, Sudograph will allow you to easily define your own migration code in Rust. That's the rough plan for now.

But until then, you're on your own. Good luck, soldier.

Transactions

Sudograph does not have a strong guarantee of atomicity (transactions) at this time. Read on for more information.

Single canister mutations

Within a single update call, transactions are automatically handled by the Internet Computer itself! If there are any errors (technically Wasm traps), all state changes are undone and thus not persisted.

This is a very nice feature of single canister development. If you don't already know, the schema that Sudograph generates for you is limited to a single canister by default. If you need to scale across canisters, you will need to write custom code.

Unfortunately, Sudograph does not currently guarantee that all errors will lead to traps that undo all state changes. It should not be too difficult to add, but currently you do not have a guarantee that all mutations within a single update call will be executed atomically.

Once Sudograph ensures all errors will lead to traps, you will be able to ensure atomicity by executing many mutations within a single update call like this:

mutation {
    createUser1: createUser(input: {
        username: "user1"
    }) {
        id
    }

    createUser2: createUser(input: {
        username: "user2"
    }) {
        id
    }

    createUser3: createUser(input: {
        username: "user3"
    }) {
        id
    }
}

All of the mutations above will either all succeed or all fail.
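The aliased batch above can also be generated programmatically on the client. A hypothetical sketch (batchCreateUsers is an illustrative name, not part of Sudograph):

```typescript
// Hypothetical sketch: build one mutation document with aliased
// createUser fields so all of the creates run in a single update call.
function batchCreateUsers(usernames: string[]): string {
    const fields = usernames
        .map((name, i) =>
            `createUser${i + 1}: createUser(input: { username: "${name}" }) { id }`)
        .join('\n    ');
    return `mutation {\n    ${fields}\n}`;
}
```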

Multi-canister mutations

Even if you batch many mutations into one update call, if any of your mutations are custom and call into other canisters, the atomic guarantees are gone. This will be more difficult for Sudograph to implement because the Internet Computer does not provide atomicity when doing multi-canister updates.

If you need transactions across multiple canisters, you will need to write custom code that undoes state changes across all canisters in a chain of mutations.

Authorization

Vision

Not yet well-documented, sheesh!

Limitations

  • No custom scalars, only Int, Float, String, ID, Boolean, and Date are available
  • Filtering is limited to the top level selection set
  • Limited to a single canister ~4GB of storage
  • Very inefficient querying, be careful once you get into the 100,000s or 1,000,000s of records
  • No automatic migrations; once you deploy, the schema is final unless you implement your own migrations
  • No authorization at the schema level, deal with it through your own custom authorization at the canister function level
  • No automated tests
  • No subscriptions
  • No transactions

Multi-canister scaling

Sudograph will not scale a single schema across multiple canisters automatically. The goal is to eventually provide this functionality, but the timeline and feasibility of this goal are unknown.

Currently each schema that you deploy into a canister is limited to ~4GB of data. This should be sufficient for prototyping and small amounts of storage and usage. There are also multiple scaling techniques that could be used to scale out, for example by storing large files (video, audio, images, documents) in a separate set of canisters that has automatic scaling built-in, and storing references to that data in your Sudograph canister.

One of the main problems Sudograph will have scaling across multiple canisters is ensuring efficient and complex querying. Indexes and filters will need to work across multiple canisters.

One reason Sudograph is waiting to implement scaling, is to lock down an amazing single canister development experience first. This should be sufficient for many new developers and young projects.

Future IC capabilities such as wasm64, multiple memories, and infinite virtual memory may also change how much a single canister can store.

You now have everything you need to deploy a simple graphql canister. Boot up a node with dfx start and then deploy with dfx deploy. It's important to note that Sudograph currently only works within a single canister. You can deploy as many Sudograph canisters as you'd like, with as many schemas as you'd like, but the generated querying and mutations will only know about data that has been created within the same canister. Querying between canisters would require you to write your own custom code. Sudograph will hopefully address scaling in the future so that you only ever have to deal with thinking about one schema per application.