Resource Usage
Why is there a subscription quota? How much does each agent use?
In a computationally demanding environment, it's crucial to understand how different agents in a system consume CPU and memory resources. In this section, we will estimate the CPU and memory usage for each agent in our system: Sentinel, Oracle, and Scribe.
Each agent has different responsibilities and computational needs based on the tasks it performs.
These estimates reflect typical usage patterns for the kinds of operations each agent runs on a coin. Below, we break down each agent's estimated resource consumption.
1. Sentinel - Risk Analysis Agent
Sentinel is responsible for risk analysis, which requires intensive calculations and substantial data handling. Specifically, it deals with O(n²) complexity (e.g., pairwise iterations over large datasets) and makes numerous API calls. Given these factors, Sentinel demands a significant amount of CPU and memory.
CPU Usage
Sentinel's operations require extensive looping and processing, particularly when analyzing coins that carry a large amount of transactional data, which results in high CPU usage (a sketch of this kind of scan follows the list below). Therefore, an estimate of the CPU usage is:
2-4 vCPUs for small to medium coins.
8-16 vCPUs for large coins.
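To make the O(n²) cost concrete, here is a minimal Python sketch of a pairwise transaction scan. The `Transaction` record, the similarity rule, and the threshold are hypothetical stand-ins for Sentinel's actual anomaly checks, not its real implementation:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    wallet: str
    amount: float  # coin units; positive = buy, negative = sell

def find_anomalies(txs: list[Transaction], threshold: float = 0.9) -> list[tuple[int, int]]:
    """Pairwise scan over all transactions: O(n^2) comparisons.

    Flags pairs of near-identical amounts from different wallets, a
    hypothetical stand-in for Sentinel's real anomaly rules.
    """
    flagged = []
    for i in range(len(txs)):
        for j in range(i + 1, len(txs)):
            a, b = txs[i].amount, txs[j].amount
            if txs[i].wallet != txs[j].wallet and min(abs(a), abs(b)) > 0:
                # Suspiciously similar amounts across different wallets.
                if min(abs(a), abs(b)) / max(abs(a), abs(b)) > threshold:
                    flagged.append((i, j))
    return flagged
```

A coin with n transactions costs n(n − 1)/2 comparisons, so 10,000 transactions already mean roughly 50 million iterations; that is why the vCPU estimate grows with coin size.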
Memory Usage
We also keep large parts of the transactional data in an in-memory buffer, which uses additional memory. Sentinel needs this buffer to walk through the transactions, find abnormalities, and calculate wallet and coin holdings. The more coins are scanned concurrently, the more data the bot must keep in memory, and the more memory is used. Therefore, an estimate of the memory usage is:
8 GB to 32 GB, depending on the dataset size and concurrent operations.
Fear not, this is exactly why we use caches! In particular, a cache is used to check whether the data for a ticker or coin address has already been retrieved, so we don't have to perform additional API calls to fetch it again. To keep memory consumption down, we even store large metadata on disk.
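As an illustration, here is a minimal Python sketch of this caching pattern. The function names, the on-disk layout, and the API call are assumptions for the example, not the agent's actual code:

```python
import json
from pathlib import Path

CACHE_DIR = Path("cache")            # large metadata gets spilled to disk here
CACHE_DIR.mkdir(exist_ok=True)
_memory_cache: dict[str, dict] = {}  # small, hot entries stay in RAM

def call_remote_api(key: str) -> dict:
    """Placeholder for the real API request (hypothetical)."""
    return {"key": key, "metadata": "..."}

def fetch_coin_data(key: str) -> dict:
    """Return data for a ticker or coin address without repeating API calls.

    Lookup order: in-memory cache -> on-disk cache -> remote API.
    """
    if key in _memory_cache:                      # fastest path: already in RAM
        return _memory_cache[key]

    disk_path = CACHE_DIR / f"{key}.json"
    if disk_path.exists():                        # previously spilled to disk
        data = json.loads(disk_path.read_text())
    else:
        data = call_remote_api(key)               # only now do we hit the API
        disk_path.write_text(json.dumps(data))    # persist big metadata on disk

    _memory_cache[key] = data
    return data
```

Only the small in-memory map stays hot; anything large survives on disk between runs, so a repeated scan of the same ticker costs a file read instead of an API call.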
2. Oracle - Prediction Analysis Agent
Oracle requires memory to store its machine learning models and manage prediction data. While the AI models we currently use are generally lightweight in memory and CPU usage, we perform additional tasks to improve our predictions. These tasks pull in a large amount of external data so that more external factors can be folded into the predictions we make.
CPU Usage
Oracle performs computations for predictive analysis, which typically consume moderate CPU resources. The estimated CPU usage is:
2-4 vCPUs for light loads (small datasets)
4-12 vCPUs for heavier workloads (larger datasets and more complex models)
To make concurrent requests faster, we use a combination of threads and processes. Threads are used for I/O-bound tasks, like making API calls, allowing multiple requests to happen at once. For CPU-bound tasks, like data processing, we use separate processes to fully utilize multiple cores. This approach helps speed up execution by optimizing resource use and reducing bottlenecks.
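As a rough illustration of that split, here is a minimal Python sketch. The helper names and workloads (`fetch_quote`, `score_datapoints`) are invented for the example:

```python
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
import urllib.request

def fetch_quote(url: str) -> bytes:
    """I/O-bound task: the thread mostly waits on the network."""
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def score_datapoints(chunk: list[float]) -> float:
    """CPU-bound task: pure computation that benefits from its own process."""
    return sum(x * x for x in chunk)

def run(urls: list[str], chunks: list[list[float]]):
    # Threads overlap the API calls: while one request waits on the
    # network, others make progress.
    with ThreadPoolExecutor(max_workers=8) as tpool:
        pages = list(tpool.map(fetch_quote, urls))

    # Separate processes sidestep the GIL, so the CPU-heavy scoring
    # can use every core.
    with ProcessPoolExecutor() as ppool:
        scores = list(ppool.map(score_datapoints, chunks))

    return pages, scores

if __name__ == "__main__":  # guard required where worker processes are spawned
    pages, scores = run(["https://example.com"], [[1.0, 2.0, 3.0]])
```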
Memory Usage
Because generating our predictions and charts involves such large tasks, we require a lot of data. Low-transaction coins can already reach 400 datapoints, and each datapoint can pack up to 1 MB of data! Higher-transaction coins can reach 10,000 datapoints, which puts the memory consumption at around:
4 GB to 16 GB, depending on the size of the coin dataset, the complexity of the model, and the number of concurrent tasks.
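As a back-of-envelope check against the figures above (taking every datapoint at its 1 MB ceiling, which is an assumption for the example):

```python
MB_PER_GB = 1024

# Worst case from the figures above: every datapoint at its 1 MB ceiling.
low_activity_gb = 400 / MB_PER_GB       # ~0.4 GB of raw datapoints
high_activity_gb = 10_000 / MB_PER_GB   # ~9.8 GB of raw datapoints

print(f"low-activity coin:  ~{low_activity_gb:.1f} GB")
print(f"high-activity coin: ~{high_activity_gb:.1f} GB")
```

Raw datapoints alone reach roughly 10 GB for a high-activity coin; models, chart generation, and intermediate copies are what push the working set through the 4 GB to 16 GB range.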
3. Scribe - Coin Metadata Agent
Scribe handles regular coin metadata. This is a less computationally demanding task, primarily involving basic data collection, API calls, and minimal processing. As a result, Scribe is the lightest in terms of both CPU and memory consumption.
CPU Usage
Scribe's operations involve fetching and processing simple metadata, so CPU consumption is generally light:
0.5-1 vCPU for smaller workloads or lighter tasks
2 vCPUs for handling heavier metadata requests or larger datasets
Memory Usage
Since Scribe mainly handles metadata, the memory requirements are relatively low. The memory is needed primarily for storing coin-related data (a small sketch of a typical metadata fetch follows the estimate below):
1 GB to 4 GB, depending on the volume of coin data and the number of concurrent requests.
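For contrast with the heavier agents, here is a minimal Python sketch of the kind of fetch-and-trim work described above. The endpoint and field names are hypothetical:

```python
import json
import urllib.request

def fetch_metadata(coin_address: str) -> dict:
    """One API call plus minimal processing: fetch and trim coin metadata."""
    url = f"https://api.example.com/coins/{coin_address}"  # hypothetical endpoint
    with urllib.request.urlopen(url) as resp:
        raw = json.load(resp)
    # Keep only the fields we care about; this trim is the bulk of
    # Scribe-style "processing", hence the low CPU and memory footprint.
    return {
        "ticker": raw.get("ticker"),
        "name": raw.get("name"),
        "supply": raw.get("supply"),
    }
```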