Warpgrapher
Warpgrapher is a framework for developing graph-based API services. Describe the data model for which you want to run a web service, and Warpgrapher automatically generates a GraphQL schema from that data model, as well as a set of resolvers for basic create, read, update, and delete (CRUD) operations on the data.
If you need more sophisticated, custom queries and endpoints, you can supply your own custom resolvers. Warpgrapher will automatically generate the GraphQL configuration and invoke your custom resolvers when appropriate.
The project is currently in development. Prior to reaching v1.0.0:
- Minor versions represent breaking changes.
- Patch versions represent fixes and features.
- There are no deprecation warnings between releases.
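Given that policy, note that Cargo's default caret requirement already treats a 0.x minor bump as breaking, so a dependency line like the one below accepts 0.9.x patch releases but never 0.10.0:

```toml
[dependencies]
# "0.9.0" under Cargo's caret semantics means >=0.9.0, <0.10.0
warpgrapher = { version = "0.9.0", features = ["neo4j"] }
```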
Quickstart
This guide will walk you through creating a brand new project that uses the Warpgrapher engine to execute a GraphQL query. The back-end graph database in this example is Neo4J.
Source
cargo.toml
[dependencies]
warpgrapher = { version = "0.9.0", features = ["neo4j"] }
src/main.rs
use std::collections::HashMap;
use std::convert::TryFrom;
use warpgrapher::engine::config::Configuration;
use warpgrapher::engine::context::RequestContext;
use warpgrapher::engine::database::neo4j::Neo4jEndpoint;
use warpgrapher::engine::database::DatabaseEndpoint;
use warpgrapher::Engine;
static CONFIG: &str = "
version: 1
model:
  - name: User
    props:
      - name: email
        type: String
";
#[derive(Clone, Debug)]
struct AppRequestContext {}

impl RequestContext for AppRequestContext {
    type DBEndpointType = Neo4jEndpoint;
    fn new() -> AppRequestContext {
        AppRequestContext {}
    }
}

#[tokio::main]
async fn main() {
    // parse warpgrapher config
    let config = Configuration::try_from(CONFIG.to_string()).expect("Failed to parse CONFIG");

    // define database endpoint
    let db = Neo4jEndpoint::from_env()
        .expect("Failed to parse neo4j endpoint from environment")
        .pool()
        .await
        .expect("Failed to create neo4j database pool");

    // create warpgrapher engine
    let engine: Engine<AppRequestContext> = Engine::new(config, db)
        .build()
        .expect("Failed to build engine");

    // execute graphql mutation to create new user
    let query = "
        mutation {
            UserCreate(input: {
                email: \"a@b.com\"
            }) {
                id
                email
            }
        }
    "
    .to_string();
    let metadata = HashMap::new();
    let result = engine.execute(query, None, metadata).await.unwrap();

    // display result
    println!("result: {:#?}", result);
}
Database
Configure database settings:
export WG_NEO4J_HOST=127.0.0.1
export WG_NEO4J_PORT=7687
export WG_NEO4J_USER=neo4j
export WG_NEO4J_PASS=*MY-DB-PASSWORD*
Start a Neo4j 4.1 database:
docker run --rm -p 7687:7687 -e NEO4J_AUTH="${WG_NEO4J_USER}/${WG_NEO4J_PASS}" neo4j:4.1
Run
cargo run
Databases
Warpgrapher supports several database back-ends for graph data:
- Apache Tinkerpop
- AWS Neptune (Gremlin variant)
- Azure Cosmos DB (Gremlin variant)
- Neo4J
Using each of the databases requires correctly selecting the appropriate crate feature and setting up environment variables to allow Warpgrapher to connect with the database.
Regardless of database, export an environment variable to control the size of the database connection pool:
export WG_POOL_SIZE=8
If the WG_POOL_SIZE variable is not set, Warpgrapher defaults to a pool the same size as the
number of CPUs detected. If the number of CPUs cannot be detected, Warpgrapher defaults to a pool
of 8 connections.
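The selection rule above can be sketched as a small pure function (an illustration of the documented behavior, not Warpgrapher's internal code):

```rust
/// Pool size rule: WG_POOL_SIZE if set and parseable, else the detected
/// CPU count, else a default of 8.
fn pool_size_from(wg_pool_size: Option<&str>, detected_cpus: Option<usize>) -> usize {
    wg_pool_size
        .and_then(|s| s.parse::<usize>().ok()) // explicit setting wins
        .or(detected_cpus) // otherwise match the CPU count
        .unwrap_or(8) // final fallback
}

fn main() {
    assert_eq!(pool_size_from(Some("16"), Some(4)), 16); // WG_POOL_SIZE set
    assert_eq!(pool_size_from(None, Some(4)), 4); // CPUs detected
    assert_eq!(pool_size_from(None, None), 8); // neither available
}
```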
Gremlin-Based Databases
For all gremlin-based databases -- Apache Tinkerpop, AWS Neptune, and Azure Cosmos DB -- the following environment variables control Warpgrapher behavior:
- WG_GREMLIN_HOST is the host name for the database to which to connect.
- WG_GREMLIN_READ_REPLICAS provides a separate host name for read-only replica nodes, if being used for additional scalability. If not set, the read pool connects to the same host as the read/write connection pool.
- WG_GREMLIN_PORT provides the port to which Warpgrapher should connect.
- WG_GREMLIN_USER is the username to use to authenticate to the database, if required.
- WG_GREMLIN_PASS is the password to use to authenticate to the database, if required.
- WG_GREMLIN_USE_TLS is set to true if Warpgrapher should connect to the database over a TLS connection, and false if not using TLS. Defaults to true.
- WG_GREMLIN_VALIDATE_CERTS is set to true if Warpgrapher should validate the certificate used for a TLS connection, and false if not. Defaults to true. Should only be set to false in non-production environments.
- WG_GREMLIN_BINDINGS is set to true if Warpgrapher should use Gremlin bindings to send values in queries (effectively query parameterization), and false if values should be sanitized and sent inline in the query string itself. Defaults to true.
- WG_GREMLIN_LONG_IDS is set to true if Warpgrapher should use long integers for vertex and edge identifiers. If false, Warpgrapher uses strings. Defaults to false.
- WG_GREMLIN_PARTITIONS is set to true if Warpgrapher should require a partition ID, and false if Warpgrapher should ignore or omit partition IDs. Defaults to false.
- WG_GREMLIN_SESSIONS is set to true if Warpgrapher mutations should be conducted within a single Gremlin session, which in some databases provides transactional semantics, and false if sessions should not be used. Defaults to false.
- WG_GREMLIN_VERSION may be set to 1, 2, or 3 to indicate the version of GraphSON serialization that should be used in communicating with the database. Defaults to 3.
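These flags follow a common convention: the string "true" enables the behavior, and an unset variable falls back to the documented default. A minimal sketch of that convention (illustrative only; Warpgrapher's actual parsing may differ):

```rust
/// Interpret an optional environment-variable value as a boolean flag,
/// using the documented default when the variable is unset.
/// (Sketch of the convention only, not Warpgrapher's real parsing code.)
fn env_flag(raw: Option<&str>, default: bool) -> bool {
    match raw {
        Some(s) => s.trim().eq_ignore_ascii_case("true"),
        None => default,
    }
}

fn main() {
    assert_eq!(env_flag(None, true), true); // e.g. WG_GREMLIN_USE_TLS unset: defaults to true
    assert_eq!(env_flag(Some("false"), true), false); // explicitly disabled
    assert_eq!(env_flag(Some("true"), false), true); // explicitly enabled
}
```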
Example configurations for supported databases are shown below. In many cases, environment variables are omitted where the defaults are correct.
Apache Tinkerpop
Add Warpgrapher to your project config:
cargo.toml
[dependencies]
warpgrapher = { version = "0.9.0", features = ["gremlin"] }
Then set up environment variables to contact your Gremlin-based DB:
export WG_GREMLIN_HOST=localhost
export WG_GREMLIN_PORT=8182
export WG_GREMLIN_USER=username
export WG_GREMLIN_PASS=password
export WG_GREMLIN_USE_TLS=true
export WG_GREMLIN_VALIDATE_CERTS=true
export WG_GREMLIN_LONG_IDS=true
The WG_GREMLIN_VALIDATE_CERTS environment variable is set to false if Warpgrapher should ignore the validity of certificates. This may be necessary in a development or test environment, but it should always be set to true in production.
If you do not already have a Gremlin-based database running, you can run one using Docker:
docker run -it --rm -p 8182:8182 tinkerpop/gremlin-server:latest
To use an interactive gremlin console to manually inspect test instances, run
docker build -t gremlin-console -f tests/fixtures/gremlin-console/Dockerfile tests/fixtures/gremlin-console
docker run -i --net=host --rm gremlin-console:latest
In the console, connect to the remote graph:
:remote connect tinkerpop.server conf/remote.yaml
:remote console
AWS Neptune
Add Warpgrapher to your project config:
cargo.toml
[dependencies]
warpgrapher = { version = "0.9.0", features = ["gremlin"] }
Then set up environment variables to contact your Neptune DB:
export WG_GREMLIN_HOST=[neptune-rw-hostname].[region].neptune.amazonaws.com
export WG_GREMLIN_READ_REPLICAS=[neptune-ro-hostname].[region].neptune.amazonaws.com
export WG_GREMLIN_PORT=443
export WG_GREMLIN_USE_TLS=true
export WG_GREMLIN_VALIDATE_CERTS=true
export WG_GREMLIN_BINDINGS=false
export WG_GREMLIN_SESSIONS=true
The WG_GREMLIN_VALIDATE_CERTS environment variable is set to false if Warpgrapher should ignore the validity of certificates. This may be necessary in a development or test environment, but it should always be set to true in production.
Azure Cosmos DB
Add Warpgrapher to your project config:
cargo.toml
[dependencies]
warpgrapher = { version = "0.9.0", features = ["gremlin"] }
Then set up environment variables to contact your Cosmos DB:
export WG_GREMLIN_HOST=*MY-COSMOS-DB*.gremlin.cosmos.azure.com
export WG_GREMLIN_PORT=443
export WG_GREMLIN_USER=/dbs/*MY-COSMOS-DB*/colls/*MY-COSMOS-COLLECTION*
export WG_GREMLIN_PASS=*MY-COSMOS-KEY*
export WG_GREMLIN_USE_TLS=true
export WG_GREMLIN_VALIDATE_CERTS=true
export WG_GREMLIN_PARTITIONS=true
export WG_GREMLIN_VERSION=1
Note that when setting up your Cosmos database, you must configure it to offer a Gremlin graph API.
Note also that you must set your partition key to be named partitionKey.
Be advised that Gremlin traversals are not executed atomically within Cosmos DB. A traversal may fail part way through if, for example, one reaches the read unit capacity limit. See this article for details. The workaround proposed in the article helps, but even idempotent queries do not guarantee atomicity. Warpgrapher does not use idempotent queries with automated retries to overcome this shortcoming of Cosmos DB, so note that if using Cosmos, there is a risk that a failed query could leave partially applied results behind.
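Because a failed traversal can leave partial results behind, callers that need stronger guarantees sometimes wrap idempotent operations in an application-level retry. A generic sketch of that pattern (illustrative only; as noted above, Warpgrapher itself does not retry):

```rust
/// Run an idempotent operation up to `max_attempts` times, returning the
/// first success or the last error. This is NOT something Warpgrapher does
/// internally; it only sketches the retry workaround described above.
fn retry<T, E>(max_attempts: usize, mut op: impl FnMut() -> Result<T, E>) -> Result<T, E> {
    let mut last_err = None;
    for _ in 0..max_attempts {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) => last_err = Some(e),
        }
    }
    Err(last_err.expect("max_attempts must be greater than zero"))
}

fn main() {
    // an operation that is throttled twice (e.g. hits the read-unit cap), then succeeds
    let mut calls = 0;
    let result = retry(5, || {
        calls += 1;
        if calls < 3 { Err("throttled") } else { Ok(calls) }
    });
    assert_eq!(result, Ok(3));

    // an operation that never succeeds surfaces the last error
    let exhausted: Result<i32, &str> = retry(2, || Err("throttled"));
    assert_eq!(exhausted, Err("throttled"));
}
```

Note that a retry only helps for idempotent operations; retrying a non-idempotent mutation can compound the partial-write problem rather than fix it.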
Neo4J
Add Warpgrapher to your project config:
[dependencies]
warpgrapher = { version = "0.9.0", features = ["neo4j"] }
Then set up environment variables to contact your Neo4J DB:
export WG_NEO4J_HOST=127.0.0.1
export WG_NEO4J_READ_REPLICAS=127.0.0.1
export WG_NEO4J_PORT=7687
export WG_NEO4J_USER=neo4j
export WG_NEO4J_PASS=*MY-DB-PASSWORD*
Note that the WG_NEO4J_READ_REPLICAS variable is optional. It is used for Neo4J cluster
configurations in which there are both read/write nodes and read-only replicas. If the
WG_NEO4J_READ_REPLICAS variable is set, read-only queries will be directed to the read replicas,
whereas mutations will be sent to the instance(s) at WG_NEO4J_HOST.
If you do not already have a Neo4J database running, you can run one using Docker:
docker run --rm -p 7687:7687 -e NEO4J_AUTH="${WG_NEO4J_USER}/${WG_NEO4J_PASS}" neo4j:4.1
Warpgrapher Config
In the Quickstart, we demonstrated how to initialize a Config from a configuration file read at run time. A Warpgrapher Config can also be generated programmatically. The resulting configs from all the examples below are equivalent.
Text Config
version: 1
model:
  # User
  - name: User
    props:
      - name: username
        type: String
      - name: email
        type: String
  # Team
  - name: Team
    props:
      - name: teamname
        type: String
    rels:
      - name: members
        nodes: [User]
        list: true
Programmatic Config
#![allow(unused)]
fn main() {
    use warpgrapher::engine::config::{Config, EndpointsFilter, Property, Relationship, Type};

    let config = Config::new(
        1,
        vec![
            // User
            Type::new(
                "User".to_string(),
                vec![
                    Property::new(
                        "username".to_string(),
                        "String".to_string(),
                        false,
                        false,
                        None,
                        None,
                    ),
                    Property::new(
                        "email".to_string(),
                        "String".to_string(),
                        false,
                        false,
                        None,
                        None,
                    ),
                ],
                Vec::new(),
                EndpointsFilter::all(),
            ),
            // Team
            Type::new(
                "Team".to_string(),
                vec![Property::new(
                    "teamname".to_string(),
                    "String".to_string(),
                    false,
                    false,
                    None,
                    None,
                )],
                vec![Relationship::new(
                    "members".to_string(),
                    true,
                    vec!["User".to_string()],
                    Vec::new(),
                    EndpointsFilter::default(),
                    None,
                )],
                EndpointsFilter::all(),
            ),
        ],
        vec![],
    );
}
Warpgrapher CRUD API
One of the primary features of Warpgrapher is the auto-generation of CRUD operations for all Types. This includes basic and advanced queries that support nested operations and graph traversals.
Warpgrapher automatically creates a set of CRUD (Create, Read, Update, Delete) operations for each Type and each rel in a Type. This page describes the format of those automatically generated GraphQL operations.
For more details on general GraphQL syntax, see: https://graphql.org/learn/.
Node Read
- Read All Nodes
- Read nodes with matching props
- Return nodes with relationships to other nodes with matching props
Return all nodes
To return all Project nodes:
query {
Project {
id
name
status
}
}
Return nodes with matching props
To return all Project nodes with status == "ACTIVE":
query {
Project(
input: {
status: "ACTIVE"
}
)
{
id
name
}
}
Return nodes with relationships to other nodes with matching props
To return all Project nodes containing an owner relationship to a User node with name="Halsey":
query {
Project(
input: {
owner: {
dst: {
User: {
name: "Halsey"
}
}
}
}
)
{ ... }
}
Node Create
- Create node with no relationships
- Create node with relationship to a NEW node
- Create node with relationship to an EXISTING node
Create node with no relationships
To create a new Project node by itself:
mutation {
ProjectCreate(
input: {
name: "Project 1",
status: "ACTIVE"
}
)
{
id
name
}
}
Output:
{
"data": {
"ProjectCreate": {
"id": "a55e1dae-a87f-4ca1-8724-b967a678b942",
"name": "Project 1"
}
}
}
Create node with relationship to a NEW node
To create a new Project node with an owner relationship to a new User node:
mutation {
ProjectCreate(
input: {
name: "Project 2",
status: "ACTIVE",
owner: {
dst: {
User: {
NEW: {
name: "Smith"
}
}
}
}
}
)
{ ... }
}
Create node with relationship to an EXISTING node
To create a new Project node with an owner relationship to an existing User node:
mutation {
ProjectCreate(
input: {
name: "Project 3",
status: "ACTIVE",
owner: {
dst: {
User: {
EXISTING: {
name: "Oneill"
}
}
}
}
}
)
{ ... }
}
Node Update
- Update props on node with matching props
- Update props on node with a relationship to a node containing matching props
- Update node to add (multi-node) relationships
Update props on node with matching props
To update the status of the Project node with name == "Project 2":
mutation {
ProjectUpdate(
input: {
MATCH: {
name: "Project 2"
},
SET: {
status: "INACTIVE"
}
}
)
{ ... }
}
Update props on node with a relationship to a node containing matching props
To update the status of any Project nodes containing an owner relationship to the User with name == "Smith":
mutation {
ProjectUpdate(
input: {
MATCH: {
owner: {
dst: {
User: {
name: "Smith"
}
}
}
},
SET: {
status: "INACTIVE"
}
}
)
{ ... }
}
Update node to add (multi-node) relationships
To update a specific Project node to add several issue relationships to new Feature and Bug nodes, call ProjectUpdate with:
mutation {
ProjectUpdate(
input: {
MATCH: {
id: "<ID>"
},
SET: {
issues: [
{
ADD: {
dst: {
Feature: {
NEW: {
name: "Add login page"
}
}
}
}
},
{
ADD: {
dst: {
Bug: {
NEW: {
name: "Fix auth bug"
}
}
}
}
}
]
}
}
)
{ ... }
}
Node Delete
Delete node with matching props
To delete Project nodes with status "INACTIVE", call ProjectDelete with:
mutation {
ProjectDelete(
input: {
MATCH: {
status: "INACTIVE"
},
DELETE: {
}
}
)
{ ... }
}
Relationship Read
Find relationships by src node
query {
ProjectIssues(input: {
src: {
Project: {
name: "Project 0"
}
}
})
{
id
}
}
Find relationships by dst node
query {
ProjectIssues(input: {
dst: {
Feature: {
name: "Add new button"
}
}
})
{...}
}
Find relationships by rel props
query {
ProjectOwner(input: {
props: {
since: "2015"
}
})
{...}
}
Relationship Create
- Create relationship between pair of existing nodes
- Create relationship from existing node to new node
Create relationship between pair of existing nodes
mutation {
ProjectOwnerCreate(input: {
MATCH: {
name: "Project0"
},
CREATE: {
dst: {
User: {
EXISTING: {
name: "Jane"
}
}
}
}
})
{...}
}
Create relationship from existing node to new node
mutation {
ProjectOwnerCreate(input: {
MATCH: {
name: "Project0"
},
CREATE: {
dst: {
User: {
NEW: {
name: "Joe"
}
}
}
}
})
{...}
}
Relationship Update
Update relationship props
mutation {
ProjectOwnerUpdate(input: {
MATCH: {
props: {
since: "2015"
}
},
SET: {
props: {
since: "2021"
}
}
})
{
id
}
}
Relationship Delete
Delete relationships
mutation {
ProjectOwnerDelete(input: {
MATCH: {
src: {
Project: {
name: "Project0"
}
}
}
})
}
Output:
{
"data": {
"ProjectOwnerDelete": 1
}
}
Engine Features
Static Endpoints
Warpgrapher includes built-in static endpoints that provide useful information or functionality. Built-in static endpoint names are prefixed with an underscore (_).
Version
If the Engine is built with an explicit version:
#![allow(unused)]
fn main() {
    let mut server: Engine<()> = Engine::new(config, db)
        .with_version("0.1.0".to_string())
        .build();
}
the version value can be accessed via the _version endpoint:
query {
_version
}
{
"data": {
"_version": "0.1.0"
}
}
If the server is not configured with an explicit version, the _version endpoint will return null:
{
"data": {
"_version": null
}
}
Defined Endpoints
In addition to the CRUD endpoints auto-generated for each type, Warpgrapher provides the ability to define additional endpoints.
Usage
1. Add Endpoints to Config
The following config defines a Team type in the model section, as well as several custom endpoints of varying inputs and outputs.
version: 1
model:
  # Team
  - name: Team
    props:
      - name: name
        type: String
      - name: size
        type: Int
endpoints:
  # GetAppName
  - name: GetAppName
    class: Query
    input: null
    output:
      type: String
  # GetLargestTeam
  - name: GetLargestTeam
    class: Query
    input: null
    output:
      type: Team
2. Implement endpoint resolver logic
#![allow(unused)]
fn main() {
    use std::collections::HashMap;
    use warpgrapher::engine::resolvers::{ExecutionResult, ResolverFacade};
    use warpgrapher::value::Value;

    // resolver that returns a Scalar (String)
    fn resolve_getappname(facade: ResolverFacade<()>) -> ExecutionResult {
        facade.resolve_scalar("MyAppName")
    }

    // resolver that returns a Node (Team)
    fn resolve_getlargestteam(facade: ResolverFacade<()>) -> ExecutionResult {
        // query database to get team
        // ...
        let mut hm = HashMap::new();
        hm.insert("name".to_string(), Value::String("Blue Team".to_string()));
        hm.insert("size".to_string(), Value::Int64(5));
        let largest_team_node = facade.create_node("Team", &hm);
        facade.resolve_node(&largest_team_node)
    }
}
3. Add resolvers when building Engine
#![allow(unused)]
fn main() {
    use warpgrapher::engine::resolvers::Resolvers;
    use warpgrapher::Engine;

    let mut resolvers = Resolvers::<()>::new();
    resolvers.insert("GetAppName".to_string(), Box::new(resolve_getappname));
    resolvers.insert("GetLargestTeam".to_string(), Box::new(resolve_getlargestteam));

    let engine = Engine::<()>::new(config, db)
        .with_resolvers(resolvers)
        .build();
}
4. Call Defined Endpoints
query {
GetAppName
}
{
"data": {
"GetAppName": "MyAppName"
}
}
query {
GetLargestTeam {
id
name
size
}
}
{
"data": {
"GetLargestTeam": {
"id": "123456789012345670",
"name": "Blue Team",
"size": 5
}
}
}
Dynamic Props
When Warpgrapher auto-generates a CRUD endpoint, the values of Node and Relationship properties are retrieved from the database and returned in a query. In some cases, however, it may be necessary to perform real-time computations to derive the value of a prop. We call these "dynamic properties", and Warpgrapher provides a mechanism to execute custom logic to resolve their values.
Usage
1. Mark a property as dynamic by setting the resolver field
model:
  - name: Project
    props:
      - name: name
        type: String
      - name: points
        type: Int
        resolver: resolve_project_points
2. Define custom logic that resolves the prop value
fn resolve_project_points(facade: ResolverFacade<AppRequestContext>) -> BoxFuture<ExecutionResult> {
    Box::pin(async move {
        // compute value
        let points = 5;
        facade.resolve_scalar(points)
    })
}
3. Add prop resolver when building Engine
let mut resolvers = Resolvers::<AppRequestContext>::new();
resolvers.insert(
    "resolve_project_points".to_string(),
    Box::new(resolve_project_points),
);

// create warpgrapher engine
let engine: Engine<AppRequestContext> = Engine::new(config, db)
    .with_resolvers(resolvers)
    .build()
    .expect("Failed to build engine");
Dynamic Relationships
Dynamic relationships are similar to dynamic props. Instead of returning values contained in the database, dynamic relationships allow relationship values to be computed at request time.
Usage
1. Mark rel as dynamic by setting the resolver field
model:
  - name: User
    props:
      - name: name
        type: String
  - name: Project
    props:
      - name: name
        type: String
    rels:
      - name: top_contributor
        nodes: [User]
        resolver: resolve_project_top_contributor
2. Define custom logic that resolves the relationship value
fn resolve_project_top_contributor(
    facade: ResolverFacade<AppRequestContext>,
) -> BoxFuture<ExecutionResult> {
    Box::pin(async move {
        // create dynamic dst node
        let mut top_contributor_props = HashMap::<String, Value>::new();
        top_contributor_props.insert(
            "id".to_string(),
            Value::from(Uuid::new_v4().to_hyphenated().to_string()),
        );
        top_contributor_props.insert("name".to_string(), Value::from("user0".to_string()));
        let top_contributor = facade.node("User", top_contributor_props);

        // create dynamic rel
        let rel_id = "1234567890".to_string();
        let top_contributor_rel =
            facade.create_rel_with_dst_node(Value::from(rel_id), None, top_contributor)?;
        facade.resolve_rel(&top_contributor_rel).await
    })
}
3. Add the custom relationship resolver to the engine
let mut resolvers = Resolvers::<AppRequestContext>::new();
resolvers.insert(
    "resolve_project_top_contributor".to_string(),
    Box::new(resolve_project_top_contributor),
);

// create warpgrapher engine
let engine: Engine<AppRequestContext> = Engine::new(config, db)
    .with_resolvers(resolvers)
    .build()
    .expect("Failed to build engine");
Request Context
The Request Context feature enables the creation of mutable state through the lifecycle of a request.
Usage
1. Define RequestContext struct
Define a struct that contains mutable information to be available for the lifetime of a request. The request context struct must implement Clone, Debug, Sync, Send, and Warpgrapher's RequestContext trait.
#[derive(Clone, Debug)]
struct AppRequestContext {
    request_id: String,
}

impl RequestContext for AppRequestContext {
    type DBEndpointType = Neo4jEndpoint;
    fn new() -> AppRequestContext {
        // generate random request id
        let request_id = "1234".to_string();
        AppRequestContext { request_id }
    }
}
fn resolve_echo_request(facade: ResolverFacade<AppRequestContext>) -> BoxFuture<ExecutionResult> {
    Box::pin(async move {
        // ... resolver body (see step 3 below)
    })
}
2. Create Engine with RequestContext type parameter
The RequestContext is specified in the type parameter of Engine.
let engine: Engine<AppRequestContext> = Engine::new(config, db)
    .with_resolvers(resolvers)
    .build()
    .expect("Failed to build engine");
3. Access Context inside resolver
fn resolve_echo_request(facade: ResolverFacade<AppRequestContext>) -> BoxFuture<ExecutionResult> {
    Box::pin(async move {
        let request_context = facade.request_context().unwrap();
        let request_id = request_context.request_id.clone();
        facade.resolve_scalar(format!("echo! (request_id: {})", request_id))
    })
}
Full Example
View on Github.