# Platform Architecture

**Decentralized Network**

* **Node Network**: Power AI connects thousands of idle GPUs from contributors around the world, forming a vast decentralized network. Each GPU acts as a node, contributing computing power to the network.
* **Smart Scheduling**: Our platform employs advanced scheduling algorithms to distribute AI tasks across the network, balancing load across nodes to maximize throughput and minimize latency.
* **Scalable Infrastructure**: Power AI’s architecture is designed to scale seamlessly as more nodes join the network. This scalability allows us to handle increasing demand for AI computing power efficiently.
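The whitepaper does not specify the scheduling algorithm, but the load-balancing behavior described above can be sketched with a simple greedy least-load policy: each incoming task goes to the node with the lowest current load. All names and the cost model here are illustrative assumptions, not Power AI's actual scheduler.

```python
import heapq

def schedule(tasks, nodes):
    """Greedy least-load scheduling sketch (illustrative, not the real scheduler).

    tasks: list of (task_id, estimated_cost) pairs.
    nodes: list of node ids.
    Returns {node_id: [task_id, ...]}.
    """
    # Min-heap of (current_load, node_id); the least-loaded node pops first.
    heap = [(0.0, n) for n in nodes]
    heapq.heapify(heap)
    assignment = {n: [] for n in nodes}
    for task_id, cost in tasks:
        load, node = heapq.heappop(heap)
        assignment[node].append(task_id)
        # Re-insert the node with its updated load.
        heapq.heappush(heap, (load + cost, node))
    return assignment
```

For example, `schedule([("a", 2), ("b", 1), ("c", 1)], ["n1", "n2"])` places task `a` on `n1` and tasks `b` and `c` on `n2`, keeping both nodes at equal load. A production scheduler would also account for GPU capability, locality, and node reliability.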

**Core Components**

* **Task Manager**: Manages the submission, distribution, and execution of AI tasks across the network.
* **Resource Manager**: Monitors and manages the availability and performance of GPU resources, ensuring efficient use of the network.
* **Reward System**: Utilizes smart contracts to calculate and distribute rewards to contributors based on their participation and computing power provided.
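The reward calculation itself is not specified in this page; one natural reading of "based on their participation and computing power provided" is a pro-rata split of each epoch's reward pool by contributed GPU-hours. The sketch below is a minimal illustration under that assumption; the function and field names are hypothetical, and the on-chain smart-contract logic may differ.

```python
def split_rewards(pool, contributions):
    """Split a reward pool pro rata by contributed GPU-hours (illustrative sketch).

    pool: total reward for the epoch.
    contributions: {node_id: gpu_hours contributed this epoch}.
    Returns {node_id: reward}.
    """
    total = sum(contributions.values())
    if total == 0:
        # No contributions this epoch: nothing to distribute.
        return {n: 0.0 for n in contributions}
    return {n: pool * hours / total for n, hours in contributions.items()}
```

For instance, `split_rewards(100, {"a": 3, "b": 1})` yields 75 for `a` and 25 for `b`.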


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs-whitepaper.gitbook.io/power-ai/technology/platform-architecture.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when:

* the answer is not explicitly present in the current page,
* you need clarification or additional context, or
* you want to retrieve related documentation sections.
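The GET request above can be issued with the Python standard library alone; the only detail worth noting is that the question must be URL-encoded before it is placed in the `ask` parameter. The helper names below are illustrative.

```python
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://docs-whitepaper.gitbook.io/power-ai/technology/platform-architecture.md"

def build_ask_url(question: str) -> str:
    # URL-encode the question so spaces and punctuation are transmitted safely.
    return f"{BASE}?{urlencode({'ask': question})}"

def ask_docs(question: str) -> str:
    """Perform the HTTP GET and return the response body as text."""
    with urlopen(build_ask_url(question)) as resp:
        return resp.read().decode("utf-8")
```

For example, `build_ask_url("What is the Task Manager?")` produces the page URL with `?ask=What+is+the+Task+Manager%3F` appended, which can then be fetched with `ask_docs`.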
