
Hey Graph’ers!

Are you ready for a special AMA with members of the core dev teams working on The Graph’s new roadmap - the New Era of The Graph? This isn’t just a new chapter for The Graph—it marks a transformative evolution in the world of web3, aiming to empower developers, boost the ecosystem, and redefine what’s possible with decentralized data.

📅 Join this special AMA from Tuesday, November 28, to Thursday, November 30, 2023.

Because core dev team members span multiple time zones, responses to your questions will be staggered, offering a continuous and evolving dialogue over three days. Feel free to ask questions at your convenience and return often to see new answers and participate in the ongoing discussions. Community members and moderators from The Graph’s Reddit channel will be on hand to guide the AMA and ensure your questions are addressed.

Meet the AMA participants, all from the core dev teams at The Graph:

  • Adam Fuller - Product Manager, Edge & Node
  • Alex Bourget - Co-founder & CTO, StreamingFast
  • Chris Wessels - Founder, GraphOps
  • Daniel Keyes - Founder & CEO, Pinax
  • Eva Beylin - Director, The Graph Foundation
  • Sam Green - Co-founder & Head of Research, Semiotic Labs
  • Uri Goldshtein - Founder, The Guild
  • Vincent Wen - Engineering Manager, Messari
  • Yaniv Tal - Founder & CEO, Geo

✨ The New Era promises a suite of new data services and features that are set to drive the next generation of decentralized applications. With new tooling, features, updates, and upgrades, The Graph is empowering developers and ecosystem contributors. You can read the official announcement here and on Twitter.

The roadmap is structured around five core objectives:

  1. World of Data Services: Expanding beyond subgraphs to deliver a rich market of data services on the network (e.g., new query languages, LLMs, etc.)
  2. Developer Empowerment: Supporting developers through enhanced DevEx and tooling (e.g., the Sunrise of Decentralized Data, the upgrade Indexer, etc.)
  3. Protocol Evolution and Resiliency: Delivering a more resilient, flexible, and efficient protocol
  4. Optimized Indexer Performance: Boosting Indexer performance with improved tooling and operational capabilities
  5. Interconnected Graph of Data: Creating tools for composable data and an organized knowledge graph

At the center of The Graph protocol is the power of community - so let’s hear your thoughts and feedback, and of course, any questions you may have about this New Era. Whether you’re curious about specific features, the roadmap’s objectives, or how you can get involved, the core devs are here to chat.

🌐 So, let’s dive in - ask the core devs anything about The New Era of The Graph!

Please note that this AMA will adhere to this channel’s Moderation & Administration Policy:
https://www.reddit.com/r/thegraph/comments/l0t81p/welcome_to_the_official_subreddit_for_the_graph/

  • Shitshotdead@alien.topB

    Hi, wondering what efforts The Graph is making to ensure that those who hold a stake and participate in the protocol do not see their stake eroded drastically through inflation and a decreasing token price?

    Are there interesting tokenomics/protocol changes that will make delegating more attractive?

  • PaulieB79@alien.topB

    By this time next year, what key milestones and aspirations do you hope The Graph Protocol will have achieved? Additionally, looking further ahead, what are some stretch goals or ambitious projects you foresee for The Graph in the upcoming years, especially in addressing emerging needs in the decentralized data space?

    • xsamgreen@alien.topB

      Hi u/PaulieB79! This is Sam Green from Semiotic Labs. Thanks for the question!

      Here are some milestones I would like to see Semiotic achieve in the coming year:

      • Deploy Scalar TAP (The Graph’s new micropayment system). Once the hosted service subgraphs have been upgraded to The Graph Network, TAP will handle more micropayments than most other web2 and web3 systems.
      • Deploy SQL for analytics queries in The Graph Network as a new data service. (See this thread for more details.)
      • Deploy open-source large language models to The Graph Network as a new data service.
      • Deploy verifiable Firehose.
      • As a stretch goal this year, I would like Semiotic to deploy verifiable SQL queries. This would allow The Graph’s data to be used for high-consequence applications, like tax accounting.
    • undefinedza@alien.topB

      Hey Paulie. Chris from GraphOps here. Thanks for your question!

      At GraphOps we’re particularly excited about the protocol evolving to support a world of Data Services. This shift, from a single enshrined tech stack that can’t possibly meet every use case, to a diverse marketplace of data services, underscores the ecosystem’s commitment to making The Graph the foundational data layer of Web3.

      There are so many exciting Data Services on the horizon, but I have to mention GraphOps’ focus on the File Data Service (FDS). FDS is a marketplace to share and monetise file data. Naturally, this has very broad applicability, but our initial focus is on supporting the real needs of Indexers within the Graph ecosystem today. Indexers are large-scale infrastructure operators, dealing with many terabytes of data. Much of this data is in the form of files: database backups, archive node snapshots, and Firehose flat files. For Indexers that have already generated this data, FDS allows them to generate additional revenue by selling it to other Indexers. For Indexers that don’t yet have this data, it’s highly likely that someone will be willing to sell it to them at a cost lower than the cost of recreating it from scratch. This should increase the efficiency of The Graph, reducing time-to-data for Indexers and bolstering revenues.

      A year from now, we’d like to see FDS being actively used by Indexers to monetise and share their file data. Beyond the marketplace itself, we’d also like to see tooling developed on top of FDS that automates adjacent actions, like restoring a subgraph database backup from a file, and provides a seamless UX for Indexers to bootstrap a Subgraph or Firehose using data from FDS.

      We’re also very excited about our ongoing work on Launchpad, our Kubernetes Toolkit for Indexers. We continue to believe that Kubernetes-centric infrastructure tooling is a critical piece of the puzzle to scale The Graph, and we’re actively working on adding Firehose and Substreams support to Launchpad. Next year, we’re excited to support both of these, alongside SQL, Files, and other emerging Data Services.

      Outside of new Data Services, we’re excited to continue to see Graphcast impact the network positively in the areas of data determinism and network intelligence.

      This barely scratches the surface of the developments that are being worked on across the Core Developer ecosystem, and we’re thrilled to be working alongside so many other smart and determined teams. Watch this space!

    • Wensi_isneW@alien.topB

      Hey Paulie, this is Vincent from Messari. Thank you for your question!

      Our focus will continue to be on completing our subgraph coverage, both in terms of breadth (more chains supported, more diversified protocol types, more protocols) and in terms of depth (more data and deeper data for each protocol). In particular, we are expanding our coverage to some of the more “bespoke” protocols that don’t fit into our standardized schema particularly well. For example, the FriendTech subgraph is one we built recently that exposes a lot of interesting data, despite it not fitting into any of the standardized protocol types we support.

      Over the course of next year and into the years ahead, we are also super excited about adding SQL capabilities to The Graph ecosystem and enabling analytics use cases. We are seeing a lot of interest in this, and it is also something we’ve always wanted at Messari. We will be building on top of the great work currently underway by StreamingFast and Semiotic, and potentially building standardized SQL-based “subgraphs” in a more composable way using dbt and ClickHouse, as sketched below.
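
      To give a rough flavour of what a composable, SQL-based “subgraph” could look like, here is a minimal, illustrative sketch of a dbt model over a hypothetical ClickHouse table fed by a Substreams sink (the table and column names are made up, not an actual Messari schema):

      ```sql
      -- models/daily_protocol_volume.sql (hypothetical dbt model)
      -- Rolls raw swap events, loaded by a Substreams SQL sink, up into a
      -- protocol-agnostic daily volume table.
      SELECT
          toDate(block_timestamp) AS day,      -- ClickHouse date truncation
          protocol,
          count()                 AS swap_count,
          sum(amount_usd)         AS volume_usd
      FROM {{ ref('raw_swap_events') }}        -- upstream dbt model (illustrative name)
      GROUP BY day, protocol
      ```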

    • pinax-network@alien.topB

      Hey u/PaulieB79, Daniel Keyes here, CEO of Pinax. Thanks for the question!

      Quite simply, by this time next year, I expect The Graph to be serving many more blockchains and many more data sources.

      As we enter the new era, The Graph has the opportunity to achieve network effects and service levels unrivaled by any centralized data service provider. Developers and data consumers will be served by an ever-expanding set of data services and data sources, built and hosted by a thriving ecosystem of core developers, service providers, and community members collaborating to bring this mission to life.

      To learn more about what Pinax is doing to achieve this vision, visit our blog and read up on the future of the world of data services and how to navigate the chain integration process.

  • Drewsapple@alien.topB

    Thanks for hosting this, and providing a glimpse of the future at datapalooza.

    The separation of extraction, transformation, loading, and querying of data seems to be key to accelerating the availability and flexibility of the data provided by The Graph. Sam’s announcement of bringing ClickHouse SQL to The Graph really excites me, as I’m currently wasting a lot of time writing/maintaining code to make aggregations over data as part of the transform step, instead of aggregating at query-time.
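
    To make that concrete, here is a minimal sketch of the kind of query-time aggregation I’d rather push down to the database instead of precomputing in the transform step (the `transfers` table and its columns are hypothetical, standing in for whatever a Substreams SQL sink might load):

    ```sql
    -- Hypothetical ClickHouse table loaded by a Substreams SQL sink.
    -- The daily rollup is computed at query time from raw transfer events,
    -- rather than being maintained as part of the transform step.
    SELECT
        toDate(block_timestamp) AS day,          -- ClickHouse date truncation
        token_address,
        count()                 AS transfer_count,
        sum(amount)             AS total_amount
    FROM transfers
    WHERE block_timestamp >= now() - INTERVAL 30 DAY
    GROUP BY day, token_address
    ORDER BY day, total_amount DESC;
    ```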

    What can we expect to see for the rollout of ClickHouse SQL on The Graph?

    Since this is dependent on Substreams, which in turn depends on Firehose, what steps are needed to get Substreams working on OP Stack chains?

    Will there be a way to get an “event substream” without call handlers shipped earlier than the full Firehose implementation for OP Stack chains, as this can be done with just an RPC instead of instrumenting op-geth or op-reth?

    Thanks.

    • pinax-network@alien.topB

      Hey, I’m Daniel Keyes, CEO of Pinax, and we’re very pleased to be here participating in this AMA.

      Thanks for asking these great questions. For SQL data services, Pinax is currently investigating how to deploy these services in a performant, modular, and reliable way. We’ll work closely with StreamingFast and Semiotic to improve the workflow as operators of these services.

      For Firehose, Pinax is working on adding RPC nodes for many EVM chains (if you want to see which ones, check the hosted service list of supported blockchains here: https://thegraph.com/docs/en/developing/supported-networks/). The StreamingFast team is working on a Firehose “light” stream that will not need to have deep instrumentation.

      There will be some discussion on this topic in the Monthly Core Dev call this coming Thursday if you want to learn more. This page has info on how to access recordings of previous Core Dev calls and how to join in the future.

    • abourget@alien.topB

      Hey apple :)

      > I’m currently wasting a lot of time writing/maintaining code to make aggregations over data as part of the transform step, instead of aggregating at query-time.

      I’d love to learn more about your use case, and where you’re building aggregations in the transform step. Is that within Substreams? If so, we’re working hard to make this simpler and simpler. For instance, we’ll be:

      1. working to make a WASI-compatible target, so you can use a bunch of languages, not only Rust, leveraging libraries from here and there, and skills you already have.
      2. building more and more code generation tools, to allow you to get off the ground much more quickly, like having all those dynamic data source patterns built automatically for you. We’ve started that with ABI-to-database tables (check the latest substreams CLI changelog, in the init command)
      3. some are building DSLs and higher-order libraries in Rust to allow you to do more with less code

      That being said, we very much understand there’s a whole lot of things that are best done at query time. That’s why we’re putting lots of effort into the SQL sink (https://github.com/streamingfast/substreams-sink-sql). It already has a high-throughput injector, reorg navigation for Postgres (which we just released), support for ClickHouse, and a bunch of other features.

      > What can we expect to see for the rollout of ClickHouse SQL on The Graph?

      This SQL sink is also what we’re turning into a deployable unit, shippable to The Graph network eventually. You have our first take at it here: https://substreams.streamingfast.io/tutorials/substreams-sql … but I think it’ll evolve quite a bit. The goal is that indexers can run those deployment endpoints, and even some gateways can accept deployment requests and decide where to optimally run workloads.

      Our goal is to make it as easy as possible for you to think of a data service, pluck some from the community, and have them running on your behalf somewhere on The Graph network.

      > Since this is dependent on Substreams, which in turn depends on Firehose, what steps are needed to get Substreams working on OP Stack chains?

      We’ve just recently closed this issue: https://github.com/streamingfast/substreams/issues/278 and we’ve rolled out an RPC poller for Firehose Ethereum that requires only an RPC node. The data is lighter, but we can get to many more chains much faster.

      Using this method, we’ve backfilled the Arbitrum network (prior to Nitro, the “Classic” era). With this method, we’ll be sync’ing one chain after the other. We’re currently sync’ing Bitcoin Core (!) using this new method. OP is next on our list, but with a few instructions one could start using it right away. We’ve crafted a more precise definition of a Firehose extractor (you can read about it here: https://github.com/streamingfast/firehose-core/issues/17) and have implemented the RPC poller methods using this interface. Our goal is to speed up chain coverage by simplifying extraction and not always requiring core chain instrumentation. Yet, if people want better performance (than the bit of latency induced by some RPC nodes), going deeper can be done after the fact.

      I think this addresses your last question too ^^.

      Thanks for reaching out!

      - Alexandre Bourget, CTO at StreamingFast.io

    • xsamgreen@alien.topB

      Hi u/Drewsapple! This is Sam from Semiotic Labs. Regarding your rollout question, here’s the current status:
      * We currently have Substreams to ClickHouse working well
      * We have recently prototyped the SQL API
      * We have a sketch for how to handle DBT experiments by the developer
      * The plan is to get SQL queries on the network by Q1 2024
      * We are very interested in learning more about our developers’ specific use cases for SQL. Please dm me if you would be interested in chatting!

      Pinax will answer your OP stack question :)

  • jagtapyash2512@alien.topB

    As a Graph Advocate, what resources could I use to learn about all the new things in the roadmap and put them into practice? That way, I can also help other Advocates and communities with that.

    • pinax-network@alien.topB

      Hi, my name is Chris Ewing and I am a long-time Graph community member, Graph Advocate, and the Pinax Community Liaison. Thanks for stopping in to chat!

      Indexer Office Hours is one of my personal favorite venues for tracking protocol progress. GraphOps always presents the latest developments, and lately the core devs have been keeping the Indexers up-to-date with coming changes, as well as seeking their input on various items. It’s a great place to witness the teams interacting with the community.

      Regarding ongoing education around the New Era: this is a huge step forward for the protocol, and you can expect some detailed learning opportunities once we sort out the specific details. I’ve also heard rumors of a New Era intro presentation for Advocates coming in the near future. Again, we can expect targeted learning resources as the details coalesce.

      The Graph Forum and Discord are also good resources for following the conversation. We get to watch this being built in real-time!

    • hornelson@alien.topB

      I would suggest focusing on the area you like the most from our roadmap:
      https://thegraph.com/roadmap

      • If you are into development, then don’t miss our weekly Graph Builders Office Hours every Thursday.
      • If you are more into DevOps and you want to better understand how the network works, then every Tuesday you can listen to Indexer Office Hours.
      • Also, every month we host a Graph Core Dev Call where you can learn from different working groups and brainstorm around active R&D tracks happening in The Graph ecosystem. The next call is this Thursday, two days from now; link here.

      🗓️ You can subscribe to the Graph Foundation’s Ecosystem Calendar, so you don’t miss any of the upcoming calls!

  • Proxima-95803@alien.topB

    hi team,

    (1) This may be a rookie question, but I would like to ask how The Graph will serve the future cross-chain ecosystem and how it will ensure cross-chain data consistency.

    (2) What are the detailed plans for the open-source large language models? For example, will you collaborate with OpenAI, or develop a component for AI to query on-chain data?

    • abourget@alien.topB

      To (1), I’d say we’re working right now on allowing multiple streams of data to flow into a single database, with the `Substreams:SQL` initiative and the deployable units designs. A goal there would be to be able to join data from multiple chains. Take a look at the `dbt` layer too in the tutorial: https://substreams.streamingfast.io/tutorials/substreams-sql
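
      As a rough, hypothetical sketch of the kind of cross-chain join we have in mind (all table and column names are invented for illustration; it assumes two chains’ Substreams have been sunk into the same SQL database):

      ```sql
      -- Hypothetical: each table is loaded into one database by a separate
      -- Substreams:SQL deployment, one per chain.
      WITH eth AS (
          SELECT wallet, sum(amount) AS eth_volume
          FROM transfers_ethereum
          GROUP BY wallet
      ),
      arb AS (
          SELECT wallet, sum(amount) AS arb_volume
          FROM transfers_arbitrum
          GROUP BY wallet
      )
      SELECT e.wallet, e.eth_volume, a.arb_volume
      FROM eth AS e
      INNER JOIN arb AS a ON e.wallet = a.wallet   -- join per-chain aggregates by wallet
      ORDER BY e.eth_volume + a.arb_volume DESC
      LIMIT 100;
      ```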

      We’re not there yet, but once we have a solid product offering, we’ll be researching the different verification layers that can apply to these technologies. There are a few ideas already up there (SQL verification, which is in Semiotic’s wheelhouse; Firehose/Substreams runtime verification; economic security through cross-validation of multiple providers; etc.).

  • soyab0007@alien.topB

    Who are the competitors of The Graph? And what is their current status on technology versus The Graph?

    • abourget@alien.topB

      You tell us! We’re busy building, looking in our users’ eyes, and not looking back.

  • PaulieB79@alien.topB

    Sorry for a follow-up question, but I was wondering if you could elaborate on the future potential or possibilities of cross-chain queries. Thanks in advance.

    • abourget@alien.topB

      I responded above to someone asking about cross-chain queries. Take a look ^^

  • Stockcratez@alien.topB

    What are the top five dapps that currently utilize The Graph, besides Messari and Uniswap? Which ones currently utilize the decentralized network? Just want to see if adoption is actually taking place.