Getting Started with ZkTerminal

How do I get started with ZkTerminal?

ZkTerminal is your gateway to advanced AI automation using ZkSurfer AI, enabling browser-based node setup, email, and Telegram automation. Install the ZkSurfer AI extension, connect your Zynapse API key, and start automating tasks on any webpage.

Welcome to ZkTerminal, an automation solution powered by advanced AI techniques. With ZkSurfer AI, you can effortlessly automate node setup and send email and Telegram messages directly from your browser.

How do I install ZkSurfer AI?

ZkSurfer AI is a Chrome extension built from source via its GitHub repository. You'll need Node.js 16 or later; clone the repo, install dependencies with Yarn, build the package, and load the unpacked extension in Chrome.

Installing the Extension

ZkSurfer AI is currently available exclusively through our GitHub repository. Follow these steps to build and install the extension locally on your machine:

  • Ensure you have Node.js installed, preferably version 16 or later.
  • Clone the ZkSurfer AI repository from GitHub.
  • Navigate to the cloned repository directory.
  • Install the dependencies using Yarn:
    yarn install
  • Build the package:
    yarn build
  • Load the extension in Chrome:
    • Navigate to chrome://extensions/ in your Chrome browser.
    • Enable Developer mode.
    • Click "Load unpacked" and select the build folder generated by yarn build.

Running in Your Browser

Once the extension is installed, you can access it in two forms:

  • Popup: Press Cmd+Shift+Y (Mac) or Ctrl+Shift+Y (Windows/Linux), or click the extension icon in your browser toolbar.
  • Devtools Panel: Open the browser's developer tools and navigate to the ZkSurfer AI panel.

Next, you'll need to obtain an API key from Zynapse and paste it into the provided box within the extension. This key will be securely stored in your browser and will not be uploaded to any third-party servers.

Finally, navigate to the webpage you want ZkSurfer AI to automate actions on (e.g., the OpenAI playground) and start experimenting!

Tech Stack

  • Node.js
  • Chrome Extension API
  • Custom Transformer Model
  • Zynapse API

How does the ZkSurfer AI action cycle work?

ZkSurfer AI uses a custom transformer model and the Zynapse API to control your browser and execute instructions. The action cycle captures user instructions, processes them through the transformer model via Zynapse API, and executes the resulting actions in the browser.

  • Step 1 — Capture: User provides instructions (predefined or ad-hoc) through the extension interface.
  • Step 2 — Process: Instructions are sent to the custom transformer model through the Zynapse API for interpretation.
  • Step 3 — Execute: The model returns browser actions, which ZkSurfer AI executes automatically on the active webpage.
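The three steps above can be sketched in Python. Note that `send_to_zynapse` and the action schema here are hypothetical stand-ins for illustration only, not the actual Zynapse API:

```python
# Illustrative sketch of the capture -> process -> execute cycle.
# send_to_zynapse and the action dictionaries are hypothetical
# stand-ins, not the real Zynapse API or action format.

def send_to_zynapse(instruction: str) -> list[dict]:
    """Stub for the transformer-model call; returns browser actions."""
    return [
        {"action": "click", "selector": "#submit"},
        {"action": "type", "selector": "#prompt", "text": instruction},
    ]

def execute(action: dict) -> str:
    """Stub for applying one action to the active page."""
    return f"{action['action']} on {action['selector']}"

def run_cycle(instruction: str) -> list[str]:
    actions = send_to_zynapse(instruction)   # Step 2: process
    return [execute(a) for a in actions]     # Step 3: execute

# Step 1: capture a user instruction and run the cycle.
print(run_cycle("fill in the prompt box"))
```

In the real extension, the execute step drives the page through the Chrome Extension API rather than returning strings.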

For more details on how to use ZkSurfer AI and its advanced features, refer to our GitHub repository and documentation.

What use cases does ZkSurfer AI support?

ZkSurfer AI supports node setup automation, marketing automation, Leo-code generation for the Aleo network, and privacy-preserving image/video generation. Each use case leverages the Zynapse API and custom transformer model for secure, automated workflows.

Node Setup Automation

  • Automated setup process for nodes, catering to both technical and non-technical users.
  • Streamlined resource allocation for optimal node performance.
  • Compatibility with various node configurations and networks.

Marketing Automation

  • Telegram scraping for data collection.
  • Automated email outreach with personalized messaging.
  • Bulk distribution capabilities for efficient marketing campaigns.
  • Integration with popular messaging platforms like Telegram for direct messaging automation.

Leo-Code Generation

  • Code generation functionality for the Aleo network.
  • Generation of secure and efficient code based on user input.
  • Integration with Aleo development tools for seamless workflow.

Privacy-Preserving Image and Video Generation

  • Integration with Zynapse API for privacy-preserving image and video generation.
  • Secure handling of user data and content.
  • Support for various image and video formats and resolutions.

How do I set up decentralized GPU clustering?

Decentralized GPU clustering lets you contribute GPU resources to a network for running heavy ML models in a privacy-preserving manner. You need Docker, an NVIDIA GPU with CUDA support, and Python 3.x to get started.

Prerequisites

  • Docker
  • NVIDIA GPU with CUDA support
  • Python 3.x

1. Installation

Clone the decentralized GPU clustering repository, then install the required Python packages:

pip install -r requirements.txt

2. Configuration

Navigate to the config directory and edit the config.yaml file to configure your settings. You can specify your Ethereum wallet address for receiving rewards.
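As an illustration, a minimal config.yaml might look like the following. The key names here are assumptions, so check the repository's sample config for the actual schema:

```yaml
# Hypothetical example; key names may differ in the actual config.yaml.
wallet:
  ethereum_address: "0xYourWalletAddress"   # destination for rewards
dashboard:
  port: 8080                                # matches http://localhost:8080
```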

3. Running the Dashboard

To access the dashboard and monitor GPU utilization, run the following command:

python dashboard.py

This will start the dashboard server. You can access the dashboard by opening your web browser and navigating to http://localhost:8080.

4. Contributing GPU Resources

To contribute your GPU resources to the network, run the following command:

python contribute.py

This will start your GPU node and connect it to the decentralized clustering network. Your GPU will now be available for running ML models.

How does ZkAGI ensure privacy-preserved computing?

ZkAGI uses zero-knowledge proofs (zkproofs) to ensure all computations on the decentralized GPU network are privacy-preserving. Your data and computations remain private and secure while your GPU contributes to the network and runs ML models.

  • Computations are verified using zero-knowledge proofs (zkproofs) without revealing underlying data.
  • GPU contributors can run ML models on the network while keeping their data private and secure.
  • The system ensures correctness of computation without exposing sensitive information.
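Real zero-knowledge proof systems (such as zk-SNARKs) are far more sophisticated than can be shown here, but a hash-based commit/verify toy conveys the flavor of proving a result without revealing it up front. This is an illustrative analogy only, not ZkAGI's proof system:

```python
import hashlib
import os

# Toy commit/verify scheme: a contributor commits to a computation
# result without revealing it, then later proves the commitment
# matched. Illustrates verifiable computation in spirit only; real
# zero-knowledge proofs are far more sophisticated.

def commit(result: bytes) -> tuple[bytes, bytes]:
    nonce = os.urandom(16)                        # blinding factor
    digest = hashlib.sha256(nonce + result).digest()
    return digest, nonce                          # publish digest, keep nonce secret

def verify(digest: bytes, nonce: bytes, claimed: bytes) -> bool:
    return hashlib.sha256(nonce + claimed).digest() == digest

digest, nonce = commit(b"model output")
print(verify(digest, nonce, b"model output"))     # True: result matches commitment
print(verify(digest, nonce, b"tampered"))         # False: tampering is detected
```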

What is fractional computing in ZkAGI?

Fractional computing allows you to contribute partial GPU resources to the ZkAGI decentralized network instead of dedicating an entire GPU. This enables efficient utilization of GPU resources and maximizes the network's overall computational power.

  • Contribute fractional GPU resources instead of requiring full GPU dedication.
  • Enables efficient utilization of GPU resources across the network.
  • Maximizes the network's computational power through granular resource allocation.

Integration and Capabilities

For data scientists and ML practitioners:

  • Easily parallelize and distribute ML workloads across multiple nodes and GPUs.
  • Leverage the ML ecosystem with native and extensible integrations.

For ML platform builders and ML engineers:

  • Provides compute abstractions for creating a scalable and robust ML platform.
  • Provides a unified ML API that simplifies onboarding and integration with the broader ML ecosystem.
  • Reduces friction between development and production by enabling the same Python code to scale seamlessly from a laptop to a large cluster.
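As a generic illustration of the "same code from laptop to cluster" idea, the pattern below parallelizes a per-sample workload with the Python standard library; a cluster scheduler would spread the same map pattern across nodes rather than local workers. This uses no ZkAGI-specific API:

```python
from concurrent.futures import ThreadPoolExecutor

# Generic illustration of distributing a per-sample workload across
# workers. A thread pool is used here for simplicity; CPU-bound ML
# work would use processes or, on the network, remote nodes.

def preprocess(sample: int) -> int:
    return sample * sample        # stand-in for a per-sample computation

def run_parallel(samples: list[int], workers: int = 4) -> list[int]:
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(preprocess, samples))

print(run_parallel(list(range(8))))
```

The call site stays identical whether `run_parallel` fans out to local threads or to a distributed backend, which is the friction reduction described above.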

For distributed systems engineers — automatically handles key processes:

  • Orchestration – Managing the various components of a distributed system.
  • Scheduling – Coordinating when and where tasks are executed.
  • Fault tolerance – Ensuring tasks complete despite inevitable failures.
  • Auto-scaling – Adjusting allocated resources in response to dynamic demand.

Scalable infrastructure integrations:

  • Scalable libraries for common machine learning tasks such as data preprocessing, distributed training, hyperparameter tuning, reinforcement learning, and model serving.
  • Pythonic distributed computing primitives for parallelizing and scaling Python applications.
  • Integrations and utilities for deploying a cluster with existing tools and infrastructure such as Kubernetes, AWS, GCP, and Azure.