Sudograph
Sudograph is a GraphQL database for the Internet Computer (IC).
Its goal is to become the simplest way to develop applications for the IC. Developers start by defining a GraphQL schema using the GraphQL SDL. Once the schema is defined, it can be included within a canister and deployed to the IC. An entire relational database is generated from the schema, with GraphQL queries and mutations enabling a variety of CRUD operations, including advanced querying over relational data.
Sudograph should be considered somewhere between alpha and beta software.
Quickest of quick starts
If you've already got Node.js, npm, Rust, the wasm32-unknown-unknown Rust compilation target, and dfx 0.7.0 installed then just run the following commands:
mkdir my-new-project
cd my-new-project
npx sudograph
dfx start --background
dfx deploy
Once deployed, you can visit the following canisters from a Chromium browser:
- playground: http://r7inp-6aaaa-aaaaa-aaabq-cai.localhost:8000
- frontend: http://rrkah-fqaaa-aaaaa-aaaaq-cai.localhost:8000
If the above did not work, try the full installation steps in the actual quick start.
Quick start
Prerequisites
You should have the following installed on your system:
- Node.js
- npm
- Rust
- wasm32-unknown-unknown Rust compilation target
- dfx 0.7.0
If you already have the above installed, you can skip to Sudograph generate.
Run the following commands to install Node.js and npm. nvm is highly recommended and its use is shown below:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.38.0/install.sh | bash
# restart your terminal
nvm install 14
Run the following command to install Rust and the wasm32-unknown-unknown target:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
rustup target add wasm32-unknown-unknown
Run the following command to install dfx 0.7.0:
# Sudograph has been tested against version 0.7.0, so it is safest to install that specific version for now
DFX_VERSION=0.7.0 sh -ci "$(curl -fsSL https://sdk.dfinity.org/install.sh)"
Sudograph generate
Start by making a new directory for your project. You then simply run the sudograph generate command:
mkdir my-new-project
cd my-new-project
npx sudograph
Local deployment
Start up an IC replica and deploy:
# Open a terminal and run the following command to start a local IC replica
dfx start
# Alternatively, you can run the replica in the background
dfx start --background
# If you are running the replica in the background, you can run this command within the same terminal as the dfx start --background command
# If you are not running the replica in the background, then open another terminal and run this command from the root directory of your project
dfx deploy
Make sure to run dfx deploy for your first deployment. For quicker subsequent deployments, you can run dfx deploy graphql if you've only changed your schema or the Rust code within the graphql canister. dfx deploy graphql deploys only the graphql canister, which contains the generated database.
playground canister
Start executing GraphQL queries and mutations against your database by going to the following URL in a Chromium browser: http://r7inp-6aaaa-aaaaa-aaabq-cai.localhost:8000.
frontend canister
View a simple frontend application that communicates with the graphql canister by going to the following URL in a Chromium browser: http://rrkah-fqaaa-aaaaa-aaaaq-cai.localhost:8000.
graphql canister
You can execute queries against the graphql canister from the command line if you wish:
# send a query to the graphql canister
dfx canister call graphql graphql_query '("query { readUser(input: {}) { id } }", "{}")'
# send a mutation to the graphql canister
dfx canister call graphql graphql_mutation '("mutation { createUser(input: { username: \"lastmjs\" }) { id } }", "{}")'
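If you build query or mutation strings programmatically, string values must be escaped just as in the shell example above. A minimal sketch in Rust (the helper name is hypothetical, not part of Sudograph):

```rust
// Hypothetical helper: build a createUser mutation string with the username
// escaped as a valid GraphQL string literal.
fn create_user_mutation(username: &str) -> String {
    // Escape backslashes first, then double quotes.
    let escaped = username.replace('\\', "\\\\").replace('"', "\\\"");
    format!(
        "mutation {{ createUser(input: {{ username: \"{}\" }}) {{ id }} }}",
        escaped
    )
}

fn main() {
    println!("{}", create_user_mutation("lastmjs"));
}
```

The resulting string is what you would pass as the first parameter to graphql_mutation.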
Production deployment
Before deploying to production you should understand that Sudograph is alpha/beta software. There are missing features and potential bugs. There is also no way to easily migrate data (if you change your schema, you'll need to delete your state and start over). But if you must deploy to production, here is the command:
dfx deploy --network ic
GraphQL database canister
Sudograph provides one main entrypoint for creating your GraphQL database: the graphql_database Rust procedural macro. graphql_database takes your GraphQL schema file and generates all of the code required to turn the types defined in your schema into a functioning CRUD database.
To use graphql_database, first create a new Rust canister. If you're new to developing for the Internet Computer, you might want to check the documentation to get familiar with canister development.
Add a new canister to your dfx.json. You can name the canister whatever you'd like, but to keep things simple the canister defined below is named graphql. The contents of your dfx.json should look like the following. If you have other canisters already defined, just add the graphql canister:
{
"canisters": {
"graphql": {
"type": "custom",
"build": "cargo build --target wasm32-unknown-unknown --package graphql --release",
"candid": "canisters/graphql/src/graphql.did",
"wasm": "target/wasm32-unknown-unknown/release/graphql.wasm"
}
}
}
The canister defined above assumes a directory structure where dfx.json is in the root of your project, and there is a directory called canisters to contain each canister. You can change up the directory structure if you'd like, just change all of the paths appropriately. Create a new directory within canisters called graphql, and add a Cargo.toml file. It should look something like the following:
[package]
name = "graphql"
version = "0.0.0"
edition = "2018"
[lib]
path = "src/graphql.rs"
crate-type = ["cdylib"]
[dependencies]
sudograph = "0.2.0"
Within the canisters/graphql directory, now create a src directory. The canisters/graphql/src directory will contain the entrypoint to your graphql canister, graphql.rs, along with your schema.graphql file and your graphql.did file.
The graphql.rs file should look like this:
use sudograph::graphql_database;

graphql_database!("canisters/graphql/src/schema.graphql");
This simply imports the graphql_database procedural macro from sudograph and then invokes it with the path to your schema.graphql file.
You must also create a custom candid file graphql.did:
service : {
"graphql_query": (text, text) -> (text) query;
"graphql_mutation": (text, text) -> (text);
}
The generated canister code implements the two functions declared in graphql.did, but for now you'll need to write the candid file manually. In the future it may be generated for you or abstracted away entirely.
graphql_query and graphql_mutation both take two parameters. The first parameter is the query or mutation string. The second parameter is a JSON string containing any variables for the query or mutation. Currently the second parameter is required, so just send an empty JSON object string "{}" if no variables are needed for the query or mutation.
graphql_query and graphql_mutation both return the result of the query or mutation as a JSON string. Whatever client is consuming the query or mutation will then need to parse the JSON string to turn it into a language-level object. The Sudograph Client will do this for you in a JavaScript frontend.
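As a sketch of the two parameters working together, assuming a schema with a User type, the first parameter could declare a GraphQL variable:

```graphql
# First parameter: a mutation string using a variable
mutation ($username: String!) {
  createUser(input: { username: $username }) {
    id
  }
}
```

The second parameter would then be the JSON string {"username": "lastmjs"}.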
Finally create your schema.graphql file:
type User {
id: ID!
username: String!
blogPosts: [BlogPost!]! @relation(name: "User:blogPosts and BlogPost:author")
}
type BlogPost {
id: ID!
publishedAt: Date
title: String!
author: User! @relation(name: "User:blogPosts and BlogPost:author")
}
You now have everything you need to deploy a simple graphql canister. Boot up a replica with dfx start and then deploy with dfx deploy. Note that Sudograph currently only works within a single canister. You can deploy as many Sudograph canisters as you'd like, with as many schemas as you'd like, but the generated queries and mutations will only know about data created within the same canister; querying between canisters would require you to write your own custom code. Sudograph will hopefully address scaling in the future so that you only ever have to think about one schema per application.
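Once deployed, a relational query against the schema above might look like the following (a sketch using the generated readUser query; check the playground Docs for the exact generated API):

```graphql
query {
  readUser(input: {}) {
    id
    username
    blogPosts {
      id
      title
      publishedAt
    }
  }
}
```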
Sudograph Client
This is not yet well-documented. You can look at the examples to see how to use it in a JavaScript frontend. Run npm install sudograph and go from there.
Schema
The schema is where you define all of the data types of your application, including relations between types. It is also where you will eventually define many other settings, possibly including authentication, authorization, subnet, and Sudograph-specific settings.
An example schema might look like this:
type User {
id: ID!
username: String!
blogPosts: [BlogPost!]! @relation(name: "User:blogPosts and BlogPost:author")
}
type BlogPost {
id: ID!
publishedAt: Date
title: String!
author: User! @relation(name: "User:blogPosts and BlogPost:author")
}
We have told Sudograph that we have two object types, User and BlogPost. We've described the fields of each type, using some included scalar types such as ID, Date, and String. We have also described one relation between our two types, a one-to-many relationship from User to BlogPost on the fields User:blogPosts and BlogPost:author.
The schema is an incredibly powerful yet simple tool for defining the complex data types of your application. Get to know the possibilities of your schema:
Scalars
Scalar types are indivisible; they have no fields of their own. The scalar types automatically available to you in a Sudograph schema are:
Boolean
A Boolean value maps to a Rust bool.
type User {
id: ID!
verified: Boolean!
}
Date
A Date value maps to a Rust String for storage and a chrono::DateTime for filtering.
type User {
id: ID!
dateOfBirth: Date!
}
Float
A Float value maps to a Rust f32.
type User {
id: ID!
height: Float!
}
ID
An ID value maps to a Rust String. All Sudograph object types must have a field called id of type ID.
type User {
id: ID!
}
Int
An Int value maps to a Rust i32.
type User {
id: ID!
age: Int!
}
String
A String value maps to a Rust String.
type User {
id: ID!
username: String!
}
Objects
Object types have fields that may be other object types or scalar types. Object types allow you to define the truly custom data types and relations that make up your application.
You could model a user with blog posts like so:
type User {
id: ID!
username: String!
blogPosts: [BlogPost!]! @relation(name: "User:blogPosts and BlogPost:author")
}
type BlogPost {
id: ID!
publishedAt: Date
title: String!
author: User! @relation(name: "User:blogPosts and BlogPost:author")
}
You could model a family tree like so:
type Person {
id: ID!
firstName: String!
lastName: String!
father: Person @relation(name: "Person:father and Person:children")
mother: Person @relation(name: "Person:mother and Person:children")
children: [Person!]!
@relation(name: "Person:father and Person:children")
@relation(name: "Person:mother and Person:children")
}
TODO: the example above will not work yet; self-referencing has some issues, and multiple @relation directives per field are not yet supported.
You could model Ethereum block data like so:
type Block {
id: ID!
number: Int!
hash: String!
parent: Block
transactionsRoot: String!
transactionCount: Int!
stateRoot: String!
gasLimit: String!
gasUsed: String!
timestamp: Date!
transactions: [Transaction!]! @relation(name: "Block:transactions and Transaction:block")
}
type Transaction {
id: ID!
hash: String!
index: Int!
from: String!
to: String!
value: String!
gasPrice: String!
gas: String!
inputData: String!
block: Block! @relation(name: "Block:transactions and Transaction:block")
gasUsed: String!
}
Relations
Relations allow you to describe the relationships between object types and their fields. Sudograph has a variety of relation capabilities:
One-to-one relations
One-to-one relations allow you to connect one object with another object.
One-sided
If you only care about retrieving relation information from one side of the relation, you don't need a @relation directive:
type Foot {
id: ID!
shoe: Shoe
}
type Shoe {
id: ID!
}
In the above example, you will be able to select the shoe of a foot, like so:
query {
readFoot(input: {
id: {
eq: "1"
}
}) {
id
shoe {
id
}
}
}
You will not be able to select the foot of a shoe.
Two-sided
If you care about retrieving relation information from both sides of the relation, add a @relation directive. The name argument of the @relation directive can be arbitrary, but it must be the same on both sides of the relation.
type Foot {
id: ID!
shoe: Shoe @relation(name: "Foot:shoe and Shoe:foot")
}
type Shoe {
id: ID!
foot: Foot @relation(name: "Foot:shoe and Shoe:foot")
}
One-to-many relations
One-to-many relations allow you to connect one object with multiple other objects.
One-sided
If you only care about retrieving relation information from one side of the relation, you don't need a @relation directive:
type Monkey {
id: ID!
name: String!
bananas: [Banana!]!
}
type Banana {
id: ID!
color: String!
size: Int!
}
In the above example, you will be able to select the bananas of a monkey, like so:
query {
readMonkey(input: {
id: {
eq: "1"
}
}) {
id
name
bananas {
id
color
size
}
}
}
You will not be able to select the monkey of a banana.
Two-sided
If you care about retrieving relation information from both sides of the relation, add a @relation directive. The name argument of the @relation directive can be arbitrary, but it must be the same on both sides of the relation.
type Monkey {
id: ID!
name: String!
bananas: [Banana!]! @relation(name: "Monkey:bananas and Banana:monkey")
}
type Banana {
id: ID!
color: String!
size: Int!
monkey: Monkey @relation(name: "Monkey:bananas and Banana:monkey")
}
Many-to-many relations
Many-to-many relations allow you to connect multiple objects with multiple other objects. Many-to-many relations must have a @relation directive. The name argument of the @relation directive can be arbitrary, but it must be the same on both sides of the relation.
type Author {
id: ID!
documents: [Document!]! @relation(name: "Author:documents and Document:authors")
}
type Document {
id: ID!
text: String!
authors: [Author!]! @relation(name: "Author:documents and Document:authors")
}
Custom scalars
Custom scalars (scalars that you define) are not yet supported. You'll have to work with the included scalars described in the Scalars section.
Custom resolvers
Custom resolvers where the resolver function is defined within the same Rust canister as the graphql_database macro are possible now, but not well-documented. See the intermediate example for a very rough overview of how to achieve this.
Custom resolvers where the resolver function is defined in a separate canister (be it written in Rust, Motoko, AssemblyScript, or any other language) are also supported. For now you will need to write a custom resolver in Rust that performs a cross-canister call to the resolver in the separate canister. This is not well-documented but is possible. In the future, the plan is for these types of custom resolvers to be defined entirely in the schema so that no Rust code is required. It might look like this:
# schema.graphql
type Query {
myCustomQueryResolver(param1: Int!): Boolean! @canister(id: "renrk-eyaaa-aaaaa-aaada-cai")
}
type Mutation {
myCustomMutationResolver(param1: Float!): String! @canister(id: "rdmx6-jaaaa-aaaaa-aaadq-cai")
}
Defining the above Query and Mutation object types in your schema would generate two custom resolver functions in the Rust canister where the graphql_database macro is used. These resolver functions would perform cross-canister calls to a canister with the id defined in the @canister directive. You would then need to define myCustomQueryResolver with the appropriate parameters and return type in the renrk-eyaaa-aaaaa-aaada-cai canister, and myCustomMutationResolver with the appropriate parameters and return type in the rdmx6-jaaaa-aaaaa-aaadq-cai canister.
Sudograph settings
Sudograph will eventually allow the developer to customize many settings. Currently, however, none of these settings can be changed. Here is how changing settings might eventually look:
type SudographSettings {
exportGeneratedGraphQLQueryFunction: false
exportGeneratedGraphQLMutationFunction: true
exportGeneratedInitFunction: true
exportGeneratedPostUpgradeFunction: false
}
Generated Schema
Sudograph takes your schema and generates a much more powerful schema.
This section is not yet well-documented, but assuming you've generated an example project with npx sudograph and deployed your canisters, then navigate to the playground at http://r7inp-6aaaa-aaaaa-aaabq-cai.localhost:8000 in a Chromium browser and click the Docs button in the top right corner. That documentation explains everything that you can do with your newly generated schema.
Query
This section is not yet well-documented; see the playground Docs as described under Generated Schema.
read
This section is not yet well-documented; see the playground Docs as described under Generated Schema.
Mutation
This section is not yet well-documented; see the playground Docs as described under Generated Schema.
create
This section is not yet well-documented; see the playground Docs as described under Generated Schema.
update
This section is not yet well-documented; see the playground Docs as described under Generated Schema.
delete
This section is not yet well-documented; see the playground Docs as described under Generated Schema.
Subscription
This is not yet implemented
Filtering
This section is not yet well-documented; see the playground Docs as described under Generated Schema.
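As a sketch, filtering follows the pattern shown in the relations examples: each field in a read input takes comparison operators such as eq. For instance, assuming the User type from earlier (check the playground Docs for the full set of generated operators):

```graphql
query {
  readUser(input: {
    username: {
      eq: "lastmjs"
    }
  }) {
    id
    username
  }
}
```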
Paging
Not yet implemented
Ordering
Not yet implemented
Examples
This section is not yet well-documented. See the examples in the Sudograph repository.
Migrations
Automated migrations are not currently supported. If you wish to update a Sudograph canister that has already been deployed, you will lose all of your saved data unless you implement your own migrations.
You can implement your own migrations by implementing and exporting pre_upgrade and post_upgrade canister functions. You can save your state to stable memory in the pre_upgrade function and restore that state in the post_upgrade function. See the Internet Computer documentation on canister upgrades and stable memory for more information.
The plan is to eventually automate migrations as much as possible. If you change your schema and wish to update it on a live canister, Sudograph will generate migrations written in Rust to accomplish the migration for you. If a migration cannot be performed automatically, Sudograph will allow you to easily define your own migration code in Rust. That's the rough plan for now.
But until then, you're on your own. Good luck, soldier.
Transactions
Sudograph does not have a strong guarantee of atomicity (transactions) at this time. Read on for more information.
Single canister mutations
Within a single update call, transactions are automatically handled by the Internet Computer itself! If there are any errors (technically Wasm traps) all state changes are undone and thus not persisted.
This is a very nice feature of single canister development. Keep in mind that the schema Sudograph generates for you is limited to a single canister by default; if you need to scale across canisters, you will need to write custom code.
Unfortunately, Sudograph does not currently guarantee that all errors will lead to traps that undo all state changes. It should not be too difficult to add, but currently you do not have a guarantee that all mutations within a single update call will be executed atomically.
Once Sudograph ensures all errors will lead to traps, you will be able to ensure atomicity by executing many mutations within a single update call like this:
mutation {
createUser1: createUser(input: {
username: "user1"
}) {
id
}
createUser2: createUser(input: {
username: "user2"
}) {
id
}
createUser3: createUser(input: {
username: "user3"
}) {
id
}
}
All of the mutations above will either all succeed or all fail.
Multi-canister mutations
Even if you batch many mutations into one update call, if any of your mutations are custom and call into other canisters, the atomic guarantees are gone. This will be more difficult for Sudograph to implement because the Internet Computer does not provide atomicity when doing multi-canister updates.
If you need transactions across multiple canisters, you will need to write custom code that undoes state changes across all canisters in a chain of mutations.
Vision
Not yet well-documented, sheesh!
Motivation
Not yet well-documented, sheesh!
Limitations
- No paging or ordering of records
- No custom scalars, only Int, Float, String, ID, Boolean, and Date are available
- Filtering is limited to the top level selection set
- Limited to a single canister with ~4GB of storage
- Very inefficient querying, so be careful once you get into the 100,000s or 1,000,000s of records
- No automatic migrations, once you deploy the schema is final unless you implement your own migrations
- No authorization at the schema level, deal with it through your own custom authorization at the canister function level
- No automated tests
- No subscriptions
- No transactions
Multi-canister scaling
Sudograph will not scale a single schema across multiple canisters automatically. The goal is to eventually provide this functionality, but the timeline and feasibility of this goal are unknown.
Currently each schema that you deploy into a canister is limited to ~4GB of data. This should be sufficient for prototyping and small amounts of storage and usage. There are also multiple scaling techniques that could be used to scale out, for example by storing large files (video, audio, images, documents) in a separate set of canisters that has automatic scaling built-in, and storing references to that data in your Sudograph canister.
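For example, a hypothetical schema might store only a reference to a large asset kept in a separate storage canister (the field names below are illustrative, not part of Sudograph):

```graphql
type Video {
  id: ID!
  title: String!
  # id of the canister that actually stores the video bytes
  storageCanisterId: String!
  # key of the asset within that storage canister
  assetKey: String!
}
```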
One of the main problems Sudograph will have scaling across multiple canisters is ensuring efficient and complex querying. Indexes and filters will need to work across multiple canisters.
One reason Sudograph is waiting to implement scaling is to lock down an amazing single canister development experience first. This should be sufficient for many new developers and young projects.
Future Wasm and Internet Computer capabilities that may help relax these limits include:
- wasm64
- multiple memories
- infinite virtual memory