# Sudograph
This book is hosted on the Internet Computer (Sodium Test Network).
Sudograph is a GraphQL generator for the Internet Computer. It is similar to projects like Prisma, Graphback, and Hasura, though it is designed to run on and thus inherit the capabilities of the Internet Computer.
Sudograph aims to greatly simplify the hardest part of GraphQL development, which is the actual implementation of the resolvers. From a types-only GraphQL schema written in the GraphQL SDL, Sudograph will generate a far more capable CRUD schema along with the implementation of its resolvers.
Basically, a GraphQL schema goes in and a generated CRUD backend comes out. This creates a highly declarative developer experience: you are free to think in terms of the shape of your data, leaving the implementation of the more capable schema and its resolvers to the generator.
As Sudograph will inherit the capabilities of the Internet Computer, its aim is to become the simplest, most flexible, secure, and scalable way to use GraphQL. It also aims to be the best way to build CRUD apps on the Internet Computer.
These are lofty goals, and there is a long road ahead.
## Roadmap
This roadmap should give you an idea of what Sudograph is currently capable of, and where it is headed. Keep in mind that the roadmap is a rough sketch and subject to change.
### Database

The Internet Computer does not have an efficient and scalable relational data store yet. A prerequisite to this project's success may be to create one of these data stores.

- Single canister scaling
  - Efficient field-level search
  - Relational joins
- Multiple canister scaling
  - Efficient field-level search
  - Relational joins
### Query

Arbitrary-depth joins in selection sets; all basic relation types, including one-to-one, one-to-many, many-to-one, and many-to-many.

- get (retrieve a single record by id)
- find (retrieve multiple records by filter, with paging and ordering)
  - top-level filtering as described in Selection Sets
  - top-level paging as described in Selection Sets
### Mutation

A single level of scalar inputs per entity; relations connected or disconnected by id only; arbitrary-depth joins in selection sets, with the same selection set capabilities as queries.

- create
- update
- delete
### Selection Sets

- filtering
  - applied at arbitrary depths in selection sets on relations
  - scalar values and relation ids only
  - no cross-relational filters
  - basic operations: eq, gt, lt, contains, startsWith, etc.
- paging
  - applied at arbitrary depths in selection sets on relations
  - limit and offset
- order by
  - applied at arbitrary depths in selection sets on relations
  - field name and order direction
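To make these selection set capabilities concrete, here is a sketch of what such a query could look like against the example schema from the Quick Start below. The argument names (`limit`, `offset`, `order`) and the filter and ordering shapes are illustrative assumptions based on this roadmap, not confirmed generator output:

```graphql
query {
  readUser(input: { username: { contains: "sudo" } }) {
    id
    username
    blog_posts(limit: 5, offset: 0, order: { field: "created_at", direction: DESC }) {
      title
      created_at
    }
  }
}
```

Note how the paging and ordering arguments are applied on the `blog_posts` relation field, one level deep in the selection set, while the filter is applied at the top level.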
### Possible Future Capabilities

- create, update, delete many
- create, update, delete, update/upsert within mutation inputs
- cross-relational filters
- order by multiple fields
- statistics within relation results (for example total counts, averages, sums, etc.)
- migrations
- subscriptions
- transactions
- unique constraints and capabilities
## Quick Start

### Install Sudograph

Navigate to the root directory of your project in your terminal and run the following command:

```sh
cargo install sudograph
```
### Create your GraphQL schema

Create a file called `schema.graphql` in the root directory of your project. For example, it might look like the following:

```graphql
type User {
    id: String!
    blog_posts: [BlogPost!]!
    username: String!
}

type BlogPost {
    id: String!
    author: User!
    body: String!
    created_at: Date!
    live: Boolean!
    title: String!
}
```
Your schema should define all of the types of your application, including the relationships between them. You can think of each GraphQL type as an object, document, or table.
### Generate

Run the following command in your terminal:

```sh
cargo sudograph generate
```

You should now have a new directory called `sudograph_generated` in the root directory of your project. It will contain a much more capable schema file called `schema-generated.graphql`. For example, given the simple schema we defined above, the following will be generated:
```graphql
type Query {
    readUser(input: ReadUserInput): [User!]!
    readBlogPost(input: ReadBlogPostInput): [BlogPost!]!
}

type Mutation {
    createUser(input: CreateUserInput, inputs: [CreateUserInput!]): [User!]!
    createBlogPost(input: CreateBlogPostInput, inputs: [CreateBlogPostInput!]): [BlogPost!]!
    updateUser(input: UpdateUserInput, inputs: [UpdateUserInput!]): [User!]!
    updateBlogPost(input: UpdateBlogPostInput, inputs: [UpdateBlogPostInput!]): [BlogPost!]!
    deleteUser(input: DeleteUserInput, inputs: [DeleteUserInput!]): [User!]!
    deleteBlogPost(input: DeleteBlogPostInput, inputs: [DeleteBlogPostInput!]): [BlogPost!]!
}

type User {
    id: String!
    blog_posts: [BlogPost!]!
    username: String!
}

type BlogPost {
    id: String!
    author: User!
    body: String!
    created_at: Date!
    live: Boolean!
    title: String!
}

input ReadUserInput {
    id: ReadStringInput
    blog_posts: ReadBlogPostInput # TODO perhaps an annotation here will help us distinguish the type of result, singular or multiple
    username: ReadStringInput
}

input ReadBlogPostInput {
    id: ReadStringInput
    author: ReadUserInput
    body: ReadStringInput
    created_at: ReadDateInput
    live: ReadBooleanInput
    title: ReadStringInput
}

input ReadStringInput {
    eq: String
    gt: String
    gte: String
    lt: String
    lte: String
    contains: String
}

input ReadDateInput {
    eq: String
    gt: String
    gte: String
    lt: String
    lte: String
}

input ReadBooleanInput {
    eq: Boolean
}

input CreateUserInput {
}

input CreateBlogPostInput {
}

input UpdateUserInput {
}

input UpdateBlogPostInput {
}

input DeleteUserInput {
}

input DeleteBlogPostInput {
}
```
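As an illustrative example only (the generated resolvers' exact behavior is not specified here, and the id value is made up), retrieving a single user by id through this schema might look like:

```graphql
query {
  readUser(input: { id: { eq: "0" } }) {
    id
    username
    blog_posts {
      title
      live
    }
  }
}
```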
In addition to the generated schema file, there is a directory called `canister`. This has all of the code necessary to be deployed to the IC. You will need to update your `dfx.json` file to include this new canister, or you can simply run `dfx deploy` from the `canister` directory.
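For reference, a `dfx.json` entry for the generated canister might look something like the following sketch. The canister name, build command, and paths here are assumptions; adapt them to your project layout, Rust package name, and dfx version:

```json
{
  "canisters": {
    "graphql": {
      "type": "custom",
      "build": "cargo build --target wasm32-unknown-unknown --package graphql --release",
      "candid": "canister/graphql.did",
      "wasm": "target/wasm32-unknown-unknown/release/graphql.wasm"
    }
  }
}
```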
It is very likely that you'll need to customize this canister, so you may wish to move it into a directory with your other canisters.

You'll also need to update the argument being passed to the `sudograph_generate` procedural macro. Make sure the argument is the correct path to your `schema.graphql` file.
In addition to a much more capable schema than the simple one we've created, Sudograph will generate resolvers that read and write data using Sudodb.
Here's what the generated resolvers for the above would look like:

```rust
#![allow(unused)]
fn main() {
    // TODO put in some Rust code here
}
```
## TODO

Perhaps Sudograph should update your `dfx.json` file for you? It could read the file in, find out where you're storing your canisters, and write the new canister entry there. This might be tricky and dangerous, so perhaps that should come later.