Intro: Hypi High-Performance Distributed In-Memory Computing with Apache Ignite

This is part 1 in the series on how Hypi is implemented.

One of the core features of the Hypi platform is that it allows a publisher to go instantly from a GraphQL model to a scalable, production-ready GraphQL API.

Imagine you wanted to build an app like Facebook; an oversimplified data model to start with may look like this.
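For illustration, a hypothetical sketch of such a model (the type and field names below are our own examples, not Hypi's actual sample model) might define accounts, posts, and comments with relationships between them:

```graphql
# Hypothetical, oversimplified social-network model (illustrative only)
type Account {
  username: String!
  friends: [Account!]
  posts: [Post!]
}

type Post {
  author: Account!
  content: String!
  comments: [Comment!]
}

type Comment {
  author: Account!
  content: String!
}
```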

Hypi will generate a CRUD API; for example, from this model, some of the GraphQL functions that become available would include:
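As a rough illustration (the operation names below are guesses at the pattern, not necessarily Hypi's exact generated names), the generated API for an `Account` type might expose operations such as:

```graphql
# Hypothetical examples of generated CRUD operations (names are illustrative)
input AccountInput {
  username: String!
}

type Query {
  getAccount(id: ID!): Account
  findAccounts(filter: String): [Account!]   # filter uses a HypiQL expression
}

type Mutation {
  createAccount(value: AccountInput!): Account
  updateAccount(id: ID!, value: AccountInput!): Account
  deleteAccount(id: ID!): Boolean
}
```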

The `filter` parameter in the API refers to a HypiQL filter.

Hypi focuses on the relationships in the schema and depends heavily on them to understand what it should generate for your app's API.

At the November 2018 London In-Memory Computing meetup, we spoke about how Hypi implements these under the hood.

In particular, we introduced two original optimisation techniques created as part of our CTO's PhD research (Wormhole traversals and Vertex cascading) and discussed how these two techniques are used in combination with a custom FM-Index (emphasising the importance of the Burrows-Wheeler Transform).
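We won't cover the FM-Index itself here, but to give a flavour of why the Burrows-Wheeler Transform matters: it permutes a string so that characters with similar contexts cluster into runs, which is what makes FM-Index backward search (and compression) effective. A minimal Python sketch of the transform and its inverse (illustrative only, not Hypi's implementation):

```python
def bwt(s: str) -> str:
    """Burrows-Wheeler Transform: last column of the sorted rotations.

    Appends '$' as a sentinel, assumed to sort before all other characters.
    """
    s += "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rotation[-1] for rotation in rotations)


def ibwt(r: str) -> str:
    """Invert the transform by repeatedly prepending the BWT column and sorting."""
    table = [""] * len(r)
    for _ in range(len(r)):
        table = sorted(c + row for c, row in zip(r, table))
    # The original string is the rotation ending in the sentinel.
    return next(row for row in table if row.endswith("$"))[:-1]


print(bwt("banana"))        # annb$aa - equal characters cluster together
print(ibwt(bwt("banana")))  # banana  - the transform is reversible
```

Note how "banana" becomes "annb$aa": the three 'a's and two 'n's group into runs, and the transform loses no information.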

The talk moved from the business context, to the theory, to the Apache Ignite approach, much like what's shown in the following.

In this series, we will write a post covering each of these in turn.

This post doesn't go into detail, but we hope it at least prepares you for the rest of the posts to come in this series. Those posts will be detailed and will explain how these techniques and technologies work together.

