Discussion Forum

Use cases of Grakn vs Computational Graphs

Hey, can you share your thoughts on the differences in use cases between graph analysis with Grakn and computational graphs?

In my understanding, at a high level, the difference comes down to how much computation the graph's hardware acceleration supports. In Grakn's case, it's CPU-based stream processing. Computational graphs run on AI/ML chips, which of course have more horsepower.

However, I'm having trouble understanding beyond the high level. It seems computational graphs aren't used for public websites, but rather for individual data analysis and labeling of high-dimensional data.

FYI, I'm interested in: reasonably fast queries, nodes with multiple vectors/features, and mid-size data sets (approx. 10k–50k nodes), for a data visualization website. Think graph-based landscape analysis similar to Quid. The visualization itself would be handled by existing or custom-written WebGL data viz. Thanks!

You've pretty much covered the big differences - Grakn's processing is CPU-based, so performance will be quite different from any kind of accelerated processing.

Grakn can handle the kind of scale you’re talking about, but keep in mind that Grakn’s strengths are the data modeling and inference engine, rather than aggregations and complex computations over the whole graph. Most of the team’s work is currently going into the OLTP engine (which is aimed at answering queries that involve small subgraphs of the full knowledge graph) rather than the OLAP engine (which can do things like shortest path, etc.). It really depends what kind of analysis you’re trying to visualise!

Last point - we don't currently support vectors as a native datatype, so you would have to encode them as strings if you really needed arrays like the ones used in ML! Alternatively, you can just store multiple attributes if the cardinality of the set of data points per concept isn't huge!
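A minimal sketch of the string-encoding workaround on the client side, in Python. The vector values are made up, and the idea is simply to serialize/deserialize with JSON so the vector fits in a Grakn string attribute:

```python
import json

# Hypothetical feature vector for one concept (e.g. a company node).
vector = [0.12, -0.45, 0.88]

# Serialize to a string so it can be stored as a Grakn string attribute.
encoded = json.dumps(vector)

# Later, decode the string attribute back into a list of floats.
decoded = json.loads(encoded)
```

The obvious trade-off is that the encoded vector is opaque to Grakn queries: you can't match or aggregate on individual components, only decode them client-side.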


Thanks! This clarifies how Grakn's engine excels over computational graphs: through logical inference over small subgraphs.

Any chance you can advise on the last part of my question, relating to visualization? No worries if this is out of scope. How might I translate Grakn's graph data into coordinate space? My goal is to make more tightly clustered graph data look like an actual landscape (mountains) - something like a histogram or density plot with more styling. It's for an industry landscape of 10k companies, showing emerging market segments and technology areas.

The implementation might be similar to what you suggested: using strings to encode them. Or maybe some short WebGL UI code that maps 1. tags from the JSON query results returned by Grakn to 2. values (i.e. color-coded). Thanks!
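To make the tag-to-value mapping concrete, here is a rough Python sketch. All the data is invented: the rows stand in for JSON query results that have already been given 2D positions by some layout step, the palette is arbitrary, and the grid binning is just one simple way to turn point density into "terrain height" for a landscape render:

```python
import json
from collections import Counter

# Hypothetical JSON rows, as if returned from a Grakn query and then
# positioned in 2D by a layout algorithm (positions are assumptions).
rows = json.loads("""[
  {"x": 0.1,  "y": 0.2,  "tag": "fintech"},
  {"x": 0.15, "y": 0.22, "tag": "fintech"},
  {"x": 0.8,  "y": 0.9,  "tag": "biotech"}
]""")

# 1. Map each tag to a colour for the WebGL layer (palette is made up).
palette = {"fintech": "#1f77b4", "biotech": "#2ca02c"}
colours = [palette[r["tag"]] for r in rows]

# 2. Bin positions into a coarse grid; per-cell counts become the
#    "mountain" heights of the density landscape.
GRID = 4
heights = Counter(
    (int(r["x"] * GRID), int(r["y"] * GRID)) for r in rows
)
# The two nearby fintech nodes fall into the same cell, forming a peak.
```

In practice you would feed `heights` into your WebGL code as a heightmap texture and `colours` as per-vertex attributes; the same binning idea scales fine to 10k nodes.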

See pic to get a sense of what I have in mind.


If you want to visualise the number of concepts connected to a node, you could try something like `match $x id VXXX; $r ($x, $y) isa relation; get $r; count;`, though this might be a bit slow if you're not sampling the graph somehow!