mirror of https://github.com/asterinas/asterinas.git
synced 2025-06-26 10:53:25 +00:00

Reorganize benchmark structure

This commit is contained in:
committed by Tate, Hongliang Tian
parent 969ac97144
commit 5d1c16896a

@@ -1,172 +1,261 @@
# Asterinas Benchmark Collection

The Asterinas Benchmark Collection evaluates the performance of Asterinas in comparison to Linux across a range of benchmarking tools (e.g., LMbench, Sysbench, iPerf) and real-world applications (e.g., Nginx, Redis, SQLite). These benchmarks are conducted under various configurations, such as within a single virtual machine (VM) or between a VM and its host.

The benchmarks run automatically on a nightly basis through continuous integration (CI) pipelines. The results, presented in clear and visually appealing figures and tables, are available [here](https://asterinas.github.io/benchmark/).
## File Organization

### Benchmark Suites

The benchmark collection is organized into benchmark suites, each dedicated to a specific benchmarking tool or application. Each suite compares the performance of different operating systems using a particular methodology. Currently, there are seven benchmark suites, each located in its own directory:

- [lmbench](https://github.com/asterinas/asterinas/tree/main/test/benchmark/lmbench)
- [sysbench](https://github.com/asterinas/asterinas/tree/main/test/benchmark/sysbench)
- [fio](https://github.com/asterinas/asterinas/tree/main/test/benchmark/fio)
- [iperf](https://github.com/asterinas/asterinas/tree/main/test/benchmark/iperf)
- [sqlite](https://github.com/asterinas/asterinas/tree/main/test/benchmark/sqlite)
- [redis](https://github.com/asterinas/asterinas/tree/main/test/benchmark/redis)
- [nginx](https://github.com/asterinas/asterinas/tree/main/test/benchmark/nginx)

Each suite has a corresponding web page (e.g., [LMbench results](https://asterinas.github.io/benchmark/lmbench/)) that publishes the latest performance data. At the top of each page, a summary table showcases the most recent results, configured via the `summary.json` file in the suite's directory.

### Benchmark Jobs

Each benchmark suite is divided into benchmark jobs, which perform specific benchmarking tasks. Benchmark jobs are organized into subdirectories under their parent suite directory:
```plaintext
<bench_suite>/
├── <bench_job_a>/
└── <bench_job_b>/
```
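Given this layout, the set of runnable jobs can be enumerated mechanically. The following sketch is illustrative only (it fabricates a tiny `bench_demo/` tree rather than touching the real `test/benchmark/` directory):

```shell
# Build a tiny illustrative tree matching the layout described above.
mkdir -p bench_demo/lmbench/process_getppid_lat bench_demo/fio/ext2_seq_read_bw
touch bench_demo/lmbench/process_getppid_lat/run.sh \
      bench_demo/fio/ext2_seq_read_bw/run.sh

# Every job is identified by the run.sh at <bench_suite>/<bench_job>/run.sh.
find bench_demo -name run.sh | sed 's|^bench_demo/||; s|/run.sh$||' | sort
```

Each line of the output is a `<bench_suite>/<bench_job>` pair that can be passed to the runner script described below.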
Benchmark jobs can be executed using the `bench_linux_and_aster.sh` script located in the `test/benchmark/` directory:

```bash
./bench_linux_and_aster.sh <bench_suite>/<bench_job>
```
For example, to measure the latency of the `getppid` system call on both Linux and Asterinas, run:

```bash
./bench_linux_and_aster.sh lmbench/process_getppid_lat
```
The script starts a VM running either Linux or Asterinas as the guest OS and invokes the `run.sh` script located in the benchmark job's directory to execute the benchmark:

```plaintext
<bench_suite>/
└── <guest_only_job>/
    └── run.sh
```
For benchmarks requiring collaboration between the guest VM and the host OS (e.g., server-client scenarios), the job should include a `host.sh` script alongside the `run.sh` script:

```plaintext
<bench_suite>/
└── <host_guest_job>/
    ├── host.sh
    └── run.sh
```
#### Single Result Jobs

For jobs that produce a single result, the directory is structured as follows:

```plaintext
<bench_suite>/
└── <single_result_job>/
    ├── bench_result.json
    └── run.sh
```

The `bench_result.json` file contains metadata about the result, including the title, measurement unit, and whether higher or lower values indicate better performance.

#### Multi-Result Jobs

For jobs producing multiple results, the directory includes a `bench_results/` folder:

```plaintext
<bench_suite>/
└── <multi_result_job>/
    ├── bench_results/
    │   ├── <job_a>.json
    │   └── <job_b>.json
    └── run.sh
```

Each JSON file in the `bench_results/` directory describes a specific result's metadata.

## Adding New Benchmark Jobs

To integrate new benchmarks into the Asterinas Benchmark Collection, follow the steps below. These instructions build on the directory structure outlined earlier, where benchmarks are organized under specific suites and jobs.

### Step 1: Add the Directory Structure

Each benchmark job should be added under the corresponding suite in the `test/benchmark` directory.

#### Directory Structure
```plaintext
<bench_suite>/
└── <job>/
    ├── host.sh             # Only for host-guest jobs
    ├── bench_result.json   # Or a bench_results/ directory for multi-result jobs
    └── run.sh
```
### Step 2: Create Necessary Files

In this step, we create the files that are essential for running and managing the benchmark. Below are detailed instructions for each required file.

#### Running Scripts

Typically, two scripts are required for each benchmark job: `run.sh` and, for host-guest jobs, `host.sh`. These scripts execute the benchmark within the guest VM and handle host-side operations, respectively.

Below are the contents of each script for a sample `iperf3` benchmark:
`run.sh`:

```bash
#!/bin/bash

echo "Running iperf3 server..."
/benchmark/bin/iperf3 -s -B 10.0.2.15 --one-off
```

- This script runs the benchmark inside the guest VM. Ensure the path to the benchmark binary is correct; `asterinas/test/Makefile` manages the benchmark binaries.
`host.sh`:

```bash
#!/bin/bash

echo "Running iperf3 client"
iperf3 -c $GUEST_SERVER_IP_ADDRESS -f m
```
#### Configuration Files

The configuration files provide metadata about the benchmark jobs and results, such as regression alerts, chart details, and result extraction patterns. These files are in JSON format: single-result jobs use a `bench_result.json` file, while multi-result jobs keep one JSON file per result under `bench_results/`. Some fields are required and others optional, depending on the benchmark's needs. For more information, see the [`bench_result.json` format](#the-bench_resultjson-format) section.

Below are the contents of these files for sample benchmarks:
```jsonc
// fio/ext2_iommu_seq_write_bw/bench_result.json
{
    "alert": {
        "threshold": "125%",
        "bigger_is_better": true
    },
    "result_extraction": {
        "search_pattern": "bw=",
        "result_index": 2
    },
    "chart": {
        "title": "[Ext2] The bandwidth of sequential writes (IOMMU enabled on Asterinas)",
        "description": "fio -filename=/ext2/fio-test -size=1G -bs=1M -direct=1",
        "unit": "MB/s",
        "legend": "Average file write bandwidth on {system}"
    },
    "runtime_config": {
        "aster_scheme": "iommu"
    }
}
```
```jsonc
// sqlite/ext2_benchmarks/bench_results/ext2_deletes_between.json
{
    "result_extraction": {
        "search_pattern": "10000 DELETEs, numeric BETWEEN, indexed....",
        "result_index": 8
    },
    "chart": {
        ...
    }
}

// sqlite/ext2_benchmarks/bench_results/ext2_updates_between.json
{
    "result_extraction": {
        "search_pattern": "10000 UPDATES, numeric BETWEEN, indexed....",
        "result_index": 8
    },
    "chart": {
        ...
    }
}
```
### Step 3: Update the Suite's `summary.json`

Asterinas is under continuous, active development. Consequently, while some benchmarks have been incorporated into the Benchmark Collection, their optimization is still ongoing, and we may not wish to display them on the overview charts. Therefore, the `summary.json` file defines which benchmarks are shown: only the benchmarks listed there appear on the overview charts. Note that standalone benchmark results remain available on the respective benchmark suite's page.

To include a new benchmark in the suite's summary table, update the `summary.json` file at the root of the suite. Taking `sqlite` as an example:

```jsonc
// sqlite/summary.json
{
    "benchmarks": [
        "sqlite/ext2_deletes_between",
        "sqlite/ext2_deletes_individual",
        "sqlite/ext2_refill_replace",
        "sqlite/ext2_selects_ipk"
    ]
}
```
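As a sanity check, one can verify that every entry in `summary.json` corresponds to an existing job directory. The sketch below is illustrative only (it builds a throwaway `summary_demo/` copy of the layout; a real check would run against `test/benchmark/<suite>/`):

```shell
# Throwaway suite layout mirroring the sqlite example above.
mkdir -p summary_demo/ext2_deletes_between
printf '{\n  "benchmarks": [\n    "sqlite/ext2_deletes_between"\n  ]\n}\n' \
    > summary_demo/summary.json

# Pull out "suite/job" entries and check that each job directory exists.
for bench in $(grep -o '"[^"]*/[^"]*"' summary_demo/summary.json | tr -d '"'); do
    job=${bench#*/}   # strip the "sqlite/" suite prefix
    if [ -d "summary_demo/$job" ]; then
        echo "ok: $bench"
    else
        echo "missing: $bench"
    fi
done
```

A `missing:` line means the summary references a job that was renamed or never added.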
### Step 4: Update the CI Configuration

Asterinas employs GitHub Actions for continuous integration (CI) to automatically execute the benchmark collection every day. To incorporate the new benchmark into the CI pipeline, add its `<bench_suite>/<bench_job>` entry to the `.github/benchmarks.yml` file:

1. **Edit the benchmarks configuration:**
   - Open `asterinas/.github/benchmarks.yml`.
   - Add your benchmark to the `benchmarks` list:

     ```yaml
     benchmarks:
       - redis/ping_inline_100k_conc20_rps
       - sqlite/ext2_benchmarks
       ...
     ```
### Step 5: Test, Validate, and Commit

Before committing the changes, it is essential to test the new benchmark job locally to ensure it runs correctly. This step helps identify any issues or errors that may arise during benchmark execution.

First, run the benchmark locally to confirm it works as expected. The following commands should ultimately generate `result_<bench_suite>-<bench_job>.json` under `asterinas/`:

```bash
cd asterinas/
bash test/benchmark/bench_linux_and_aster.sh <bench_suite>/<bench_job>
```

Second, validate the modifications by running the CI pipeline on your own repository. To do this, change the `runs-on` field from `self-hosted` to `ubuntu-latest` in `.github/benchmarks.yml`, then manually trigger the CI pipeline on your repository to confirm that the new benchmark executes correctly. After validation, revert the `runs-on` field back to `self-hosted`.

Finally, if the new benchmark job runs successfully, commit the changes and create a pull request to merge the new benchmark into the main branch.
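Before opening the pull request, a quick way to confirm that the generated result file is well-formed JSON can be sketched as follows. The file name below is invented for illustration, following the `result_<bench_suite>-<bench_job>.json` pattern; a real run of `bench_linux_and_aster.sh` produces the actual file under `asterinas/`:

```shell
# Simulate a generated result file to illustrate the check.
printf '[{"name": "demo", "unit": "ns", "value": 0, "extra": "linux_result"}]\n' \
    > result_lmbench-process_getppid_lat.json

# Fail early if the result file is missing or not valid JSON.
python3 -m json.tool result_lmbench-process_getppid_lat.json >/dev/null \
    && echo "result file looks valid"
```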
## The `bench_result.json` Format

The `bench_result.json` file configures how benchmark results are processed and displayed. Below is an example of the file to give you a big-picture understanding:

```jsonc
{
    // Configurations for performance alerts.
    "alert": {
        "threshold": "130%", // Acceptable performance deviation (e.g., 130% = 30% higher).
        "bigger_is_better": true // true: higher values are better; false: lower values are better.
    },
    // Settings for extracting benchmark results from raw outputs.
    "result_extraction": {
        "search_pattern": "sender", // Regex or string to locate result lines.
        "result_index": 7 // Index of the field to extract from the matched line (awk-style).
    },
    // Configurations for how the results are displayed in charts.
    "chart": {
        "title": "[Network] iperf3 sender performance using TCP", // Title of the chart.
        "description": "iperf3 -s -B 10.0.2.15", // Context or command associated with the benchmark.
        "unit": "Mbits/sec", // Measurement unit for the results.
        "legend": "Average TCP Bandwidth over virtio-net between Host Linux and Guest {system}" // Chart legend; the {system} placeholder is substituted dynamically.
    },
    // Optional runtime configurations for the benchmark.
    "runtime_config": {
        "aster_scheme": "iommu" // Corresponds to Makefile parameters (e.g., IOMMU support).
    }
}
```
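To make the `result_extraction` and `alert` fields concrete, here is a hedged sketch of how a result might be pulled from raw output with `awk`. The sample output line and the previous value are invented for illustration; the real pipeline's implementation may differ:

```shell
# Sample raw output line from an iperf3 run (invented for illustration).
line='[  5]   0.00-10.00  sec  1.10 GBytes   941 Mbits/sec    0   sender'

# result_extraction: keep lines matching search_pattern ("sender") and
# print the whitespace-separated field at result_index (7), awk-style.
value=$(echo "$line" | awk '/sender/ { print $7 }')
echo "extracted: $value"   # 941

# alert: with threshold "130%" and bigger_is_better=true, flag a regression
# when the previous value exceeds 130% of the current one.
prev=1300
awk -v prev="$prev" -v cur="$value" \
    'BEGIN { if (prev / cur > 1.30) print "alert: regression" }'
```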
By adhering to this format, we ensure clarity and consistency in benchmarking workflows and reporting systems.
test/benchmark/fio/ext2_iommu_seq_read_bw/bench_result.json (new file, 19 lines)
@@ -0,0 +1,19 @@
{
    "alert": {
        "threshold": "125%",
        "bigger_is_better": true
    },
    "result_extraction": {
        "search_pattern": "bw=",
        "result_index": 2
    },
    "chart": {
        "title": "[Ext2] The bandwidth of sequential reads (IOMMU enabled on Asterinas)",
        "description": "fio -filename=/ext2/fio-test -size=1G -bs=1M -direct=1",
        "unit": "MB/s",
        "legend": "Average file read bandwidth on {system}"
    },
    "runtime_config": {
        "aster_scheme": "iommu"
    }
}

@@ -1,9 +0,0 @@
{
    "alert_threshold": "125%",
    "alert_tool": "customBiggerIsBetter",
    "search_pattern": "bw=",
    "result_index": "2",
    "description": "fio -filename=/ext2/fio-test -size=1G -bs=1M -direct=1",
    "title": "[Ext2] The bandwidth of sequential reads (IOMMU enabled on Asterinas)",
    "aster_scheme": "iommu"
}

@@ -1,14 +0,0 @@
[
    {
        "name": "Average file read bandwidth on Linux",
        "unit": "MB/s",
        "value": 0,
        "extra": "linux_result"
    },
    {
        "name": "Average file read bandwidth on Asterinas",
        "unit": "MB/s",
        "value": 0,
        "extra": "aster_result"
    }
]
test/benchmark/fio/ext2_iommu_seq_write_bw/bench_result.json (new file, 19 lines)
@@ -0,0 +1,19 @@
{
    "alert": {
        "threshold": "125%",
        "bigger_is_better": true
    },
    "result_extraction": {
        "search_pattern": "bw=",
        "result_index": 2
    },
    "chart": {
        "title": "[Ext2] The bandwidth of sequential writes (IOMMU enabled on Asterinas)",
        "description": "fio -filename=/ext2/fio-test -size=1G -bs=1M -direct=1",
        "unit": "MB/s",
        "legend": "Average file write bandwidth on {system}"
    },
    "runtime_config": {
        "aster_scheme": "iommu"
    }
}

@@ -1,9 +0,0 @@
{
    "alert_threshold": "125%",
    "alert_tool": "customBiggerIsBetter",
    "search_pattern": "bw=",
    "result_index": "2",
    "description": "fio -filename=/ext2/fio-test -size=1G -bs=1M -direct=1",
    "title": "[Ext2] The bandwidth of sequential writes (IOMMU enabled on Asterinas)",
    "aster_scheme": "iommu"
}

@@ -1,14 +0,0 @@
[
    {
        "name": "Average file write bandwidth on Linux",
        "unit": "MB/s",
        "value": 0,
        "extra": "linux_result"
    },
    {
        "name": "Average file write bandwidth on Asterinas",
        "unit": "MB/s",
        "value": 0,
        "extra": "aster_result"
    }
]
test/benchmark/fio/ext2_seq_read_bw/bench_result.json (new file, 16 lines)
@@ -0,0 +1,16 @@
{
    "alert": {
        "threshold": "125%",
        "bigger_is_better": true
    },
    "result_extraction": {
        "search_pattern": "bw=",
        "result_index": 2
    },
    "chart": {
        "title": "[Ext2] The bandwidth of sequential reads",
        "description": "fio -filename=/ext2/fio-test -size=1G -bs=1M -direct=1",
        "unit": "MB/s",
        "legend": "Average file read bandwidth on {system}"
    }
}

@@ -1,8 +0,0 @@
{
    "alert_threshold": "125%",
    "alert_tool": "customBiggerIsBetter",
    "search_pattern": "bw=",
    "result_index": "2",
    "description": "fio -filename=/ext2/fio-test -size=1G -bs=1M -direct=1",
    "title": "[Ext2] The bandwidth of sequential reads"
}

@@ -1,14 +0,0 @@
[
    {
        "name": "Average file read bandwidth on Linux",
        "unit": "MB/s",
        "value": 0,
        "extra": "linux_result"
    },
    {
        "name": "Average file read bandwidth on Asterinas",
        "unit": "MB/s",
        "value": 0,
        "extra": "aster_result"
    }
]
test/benchmark/fio/ext2_seq_write_bw/bench_result.json (new file, 16 lines)
@@ -0,0 +1,16 @@
{
    "alert": {
        "threshold": "125%",
        "bigger_is_better": true
    },
    "result_extraction": {
        "search_pattern": "bw=",
        "result_index": 2
    },
    "chart": {
        "title": "[Ext2] The bandwidth of sequential writes",
        "description": "fio -filename=/ext2/fio-test -size=1G -bs=1M -direct=1",
        "unit": "MB/s",
        "legend": "Average file write bandwidth on {system}"
    }
}

@@ -1,8 +0,0 @@
{
    "alert_threshold": "125%",
    "alert_tool": "customBiggerIsBetter",
    "search_pattern": "bw=",
    "result_index": "2",
    "description": "fio -filename=/ext2/fio-test -size=1G -bs=1M -direct=1",
    "title": "[Ext2] The bandwidth of sequential writes"
}

@@ -1,14 +0,0 @@
[
    {
        "name": "Average file write bandwidth on Linux",
        "unit": "MB/s",
        "value": 0,
        "extra": "linux_result"
    },
    {
        "name": "Average file write bandwidth on Asterinas",
        "unit": "MB/s",
        "value": 0,
        "extra": "aster_result"
    }
]
test/benchmark/iperf3/tcp_virtio_bw/bench_result.json (new file, 16 lines)
@@ -0,0 +1,16 @@
{
    "alert": {
        "threshold": "130%",
        "bigger_is_better": true
    },
    "result_extraction": {
        "search_pattern": "sender",
        "result_index": 7
    },
    "chart": {
        "title": "[Network] iperf3 sender performance using TCP",
        "description": "iperf3 -s -B 10.0.2.15",
        "unit": "Mbits/sec",
        "legend": "Average TCP Bandwidth over virtio-net between Host Linux and Guest {system}"
    }
}

@@ -1,9 +0,0 @@
{
    "alert_threshold": "130%",
    "alert_tool": "customBiggerIsBetter",
    "search_pattern": "sender",
    "result_index": "7",
    "description": "iperf3 -s -B 10.0.2.15",
    "title": "[Network] iperf3 sender performance using TCP",
    "benchmark_type": "host_guest"
}

@@ -1,14 +0,0 @@
[
    {
        "name": "Average TCP Bandwidth over virtio-net between Host Linux and Guest Linux",
        "unit": "Mbits/sec",
        "value": 0,
        "extra": "linux_result"
    },
    {
        "name": "Average TCP Bandwidth over virtio-net between Host Linux and Guest Asterinas",
        "unit": "Mbits/sec",
        "value": 0,
        "extra": "aster_result"
    }
]
test/benchmark/lmbench/ext2_copy_files_bw/bench_result.json (new file, 16 lines)
@@ -0,0 +1,16 @@
{
    "alert": {
        "threshold": "125%",
        "bigger_is_better": true
    },
    "result_extraction": {
        "search_pattern": "lmdd result:",
        "result_index": 8
    },
    "chart": {
        "title": "[Ext2] The bandwidth of copying data between files",
        "description": "lmdd",
        "unit": "MB/s",
        "legend": "Average file copy bandwidth on {system}"
    }
}

@@ -1,8 +0,0 @@
{
    "alert_threshold": "125%",
    "alert_tool": "customBiggerIsBetter",
    "search_pattern": "lmdd result:",
    "result_index": "8",
    "description": "lmdd",
    "title": "[Ext2] The bandwidth of copying data between files"
}

@@ -1,14 +0,0 @@
[
    {
        "name": "Average file copy bandwidth on Linux",
        "unit": "MB/s",
        "value": 0,
        "extra": "linux_result"
    },
    {
        "name": "Average file copy bandwidth on Asterinas",
        "unit": "MB/s",
        "value": 0,
        "extra": "aster_result"
    }
]
@@ -0,0 +1,16 @@
{
    "alert": {
        "threshold": "125%",
        "bigger_is_better": true
    },
    "result_extraction": {
        "search_pattern": "^0k",
        "result_index": 2
    },
    "chart": {
        "title": "[Ext2] The throughput of creating/deleting small files (0KB)",
        "description": "lat_fs -s 0k /ext2",
        "unit": "number",
        "legend": "Number of created/deleted files on {system}"
    }
}

@@ -1,8 +0,0 @@
{
    "alert_threshold": "125%",
    "alert_tool": "customBiggerIsBetter",
    "search_pattern": "^0k",
    "result_index": "2",
    "description": "lat_fs -s 0k /ext2",
    "title": "[Ext2] The throughput of creating/deleting small files (0KB)"
}

@@ -1,14 +0,0 @@
[
    {
        "name": "Number of created/deleted files on Linux",
        "unit": "number",
        "value": 0,
        "extra": "linux_result"
    },
    {
        "name": "Number of created/deleted files on Asterinas",
        "unit": "number",
        "value": 0,
        "extra": "aster_result"
    }
]
@@ -0,0 +1,16 @@
{
    "alert": {
        "threshold": "125%",
        "bigger_is_better": true
    },
    "result_extraction": {
        "search_pattern": "10k",
        "result_index": 2
    },
    "chart": {
        "title": "[Ext2] The throughput of creating/deleting small files (10KB)",
        "description": "lat_fs -s 10K /ext2",
        "unit": "number",
        "legend": "Number of created/deleted files on {system}"
    }
}

@@ -1,8 +0,0 @@
{
    "alert_threshold": "125%",
    "alert_tool": "customBiggerIsBetter",
    "search_pattern": "10k",
    "result_index": "2",
    "description": "lat_fs -s 10K /ext2",
    "title": "[Ext2] The throughput of creating/deleting small files (10KB)"
}

@@ -1,14 +0,0 @@
[
    {
        "name": "Number of created/deleted files on Linux",
        "unit": "number",
        "value": 0,
        "extra": "linux_result"
    },
    {
        "name": "Number of created/deleted files on Asterinas",
        "unit": "number",
        "value": 0,
        "extra": "aster_result"
    }
]
test/benchmark/lmbench/fifo_lat/bench_result.json (new file, 16 lines)
@@ -0,0 +1,16 @@
{
    "alert": {
        "threshold": "125%",
        "bigger_is_better": false
    },
    "result_extraction": {
        "search_pattern": "Fifo latency",
        "result_index": 3
    },
    "chart": {
        "title": "[FIFO] The cost of write+read (1B)",
        "description": "lat_fifo",
        "unit": "\u00b5s",
        "legend": "Average fifo latency on {system}"
    }
}

@@ -1,8 +0,0 @@
{
    "alert_threshold": "125%",
    "alert_tool": "customSmallerIsBetter",
    "search_pattern": "Fifo latency",
    "result_index": "3",
    "description": "lat_fifo",
    "title": "[FIFO] The cost of write+read (1B)"
}

@@ -1,14 +0,0 @@
[
    {
        "name": "Average fifo latency on Linux",
        "unit": "µs",
        "value": 0,
        "extra": "linux_result"
    },
    {
        "name": "Average fifo latency on Asterinas",
        "unit": "µs",
        "value": 0,
        "extra": "aster_result"
    }
]
16 test/benchmark/lmbench/mem_copy_bw/bench_result.json Normal file
@@ -0,0 +1,16 @@
{
  "alert": {
    "threshold": "125%",
    "bigger_is_better": true
  },
  "result_extraction": {
    "search_pattern": "536.87",
    "result_index": 2
  },
  "chart": {
    "title": "[Memory] The bandwidth of copying integers",
    "description": "bw_mem fcp",
    "unit": "MB/s",
    "legend": "Average memory copy bandwidth on {system}"
  }
}
@@ -1,8 +0,0 @@
{
  "alert_threshold": "125%",
  "alert_tool": "customBiggerIsBetter",
  "search_pattern": "536.87",
  "result_index": "2",
  "description": "bw_mem fcp",
  "title": "[Memory] The bandwidth of copying integers"
}
@@ -1,14 +0,0 @@
[
  {
    "name": "Average memory copy bandwidth on Linux",
    "unit": "MB/s",
    "value": 0,
    "extra": "linux_result"
  },
  {
    "name": "Average memory copy bandwidth on Asterinas",
    "unit": "MB/s",
    "value": 0,
    "extra": "aster_result"
  }
]
16 test/benchmark/lmbench/mem_mmap_bw/bench_result.json Normal file
@@ -0,0 +1,16 @@
{
  "alert": {
    "threshold": "125%",
    "bigger_is_better": true
  },
  "result_extraction": {
    "search_pattern": "268.44",
    "result_index": 2
  },
  "chart": {
    "title": "[Memory] The bandwidth of mmap",
    "description": "bw_mmap",
    "unit": "MB/s",
    "legend": "Average mmap bandwidth on {system}"
  }
}
@@ -1,8 +0,0 @@
{
  "alert_threshold": "125%",
  "alert_tool": "customBiggerIsBetter",
  "search_pattern": "268.44",
  "result_index": "2",
  "description": "bw_mmap",
  "title": "[Memory] The bandwidth of mmap"
}
@@ -1,14 +0,0 @@
[
  {
    "name": "Average mmap bandwidth on Linux",
    "unit": "MB/s",
    "value": 0,
    "extra": "linux_result"
  },
  {
    "name": "Average mmap bandwidth on Asterinas",
    "unit": "MB/s",
    "value": 0,
    "extra": "aster_result"
  }
]
16 test/benchmark/lmbench/mem_mmap_lat/bench_result.json Normal file
@@ -0,0 +1,16 @@
{
  "alert": {
    "threshold": "125%",
    "bigger_is_better": false
  },
  "result_extraction": {
    "search_pattern": "4.194304",
    "result_index": 2
  },
  "chart": {
    "title": "[Memory] The cost of mmap+unmap",
    "description": "lat_mmap",
    "unit": "\u00b5s",
    "legend": "Average mmap latency on {system}"
  }
}
@@ -1,8 +0,0 @@
{
  "alert_threshold": "125%",
  "alert_tool": "customSmallerIsBetter",
  "search_pattern": "4.194304",
  "result_index": "2",
  "description": "lat_mmap",
  "title": "[Memory] The cost of mmap+unmap"
}
@@ -1,14 +0,0 @@
[
  {
    "name": "Average mmap latency on Linux",
    "unit": "µs",
    "value": 0,
    "extra": "linux_result"
  },
  {
    "name": "Average mmap latency on Asterinas",
    "unit": "µs",
    "value": 0,
    "extra": "aster_result"
  }
]
16 test/benchmark/lmbench/mem_pagefault_lat/bench_result.json Normal file
@@ -0,0 +1,16 @@
{
  "alert": {
    "threshold": "125%",
    "bigger_is_better": false
  },
  "result_extraction": {
    "search_pattern": "Pagefaults on ",
    "result_index": 4
  },
  "chart": {
    "title": "[Memory] The cost of page fault handling",
    "description": "lat_pagefault",
    "unit": "\u00b5s",
    "legend": "Average page fault latency on {system}"
  }
}
@@ -1,8 +0,0 @@
{
  "alert_threshold": "125%",
  "alert_tool": "customSmallerIsBetter",
  "search_pattern": "Pagefaults on ",
  "result_index": "4",
  "description": "lat_pagefault",
  "title": "[Memory] The cost of page fault handling"
}
@@ -1,14 +0,0 @@
[
  {
    "name": "Average page fault latency on Linux",
    "unit": "µs",
    "value": 0,
    "extra": "linux_result"
  },
  {
    "name": "Average page fault latency on Asterinas",
    "unit": "µs",
    "value": 0,
    "extra": "aster_result"
  }
]
16 test/benchmark/lmbench/mem_read_bw/bench_result.json Normal file
@@ -0,0 +1,16 @@
{
  "alert": {
    "threshold": "125%",
    "bigger_is_better": true
  },
  "result_extraction": {
    "search_pattern": "536.87",
    "result_index": 2
  },
  "chart": {
    "title": "[Memory] The bandwidth of reading integers",
    "description": "bw_mem frd",
    "unit": "MB/s",
    "legend": "Average memory read bandwidth on {system}"
  }
}
@@ -1,8 +0,0 @@
{
  "alert_threshold": "125%",
  "alert_tool": "customBiggerIsBetter",
  "search_pattern": "536.87",
  "result_index": "2",
  "description": "bw_mem frd",
  "title": "[Memory] The bandwidth of reading integers"
}
@@ -1,14 +0,0 @@
[
  {
    "name": "Average memory read bandwidth on Linux",
    "unit": "MB/s",
    "value": 0,
    "extra": "linux_result"
  },
  {
    "name": "Average memory read bandwidth on Asterinas",
    "unit": "MB/s",
    "value": 0,
    "extra": "aster_result"
  }
]
16 test/benchmark/lmbench/mem_write_bw/bench_result.json Normal file
@@ -0,0 +1,16 @@
{
  "alert": {
    "threshold": "125%",
    "bigger_is_better": true
  },
  "result_extraction": {
    "search_pattern": "536.87",
    "result_index": 2
  },
  "chart": {
    "title": "[Memory] The bandwidth of writing integers",
    "description": "bw_mem fwr",
    "unit": "MB/s",
    "legend": "Average memory write bandwidth on {system}"
  }
}
@@ -1,8 +0,0 @@
{
  "alert_threshold": "125%",
  "alert_tool": "customBiggerIsBetter",
  "search_pattern": "536.87",
  "result_index": "2",
  "description": "bw_mem fwr",
  "title": "[Memory] The bandwidth of writing integers"
}
@@ -1,14 +0,0 @@
[
  {
    "name": "Average memory write bandwidth on Linux",
    "unit": "MB/s",
    "value": 0,
    "extra": "linux_result"
  },
  {
    "name": "Average memory write bandwidth on Asterinas",
    "unit": "MB/s",
    "value": 0,
    "extra": "aster_result"
  }
]
16 test/benchmark/lmbench/pipe_bw/bench_result.json Normal file
@@ -0,0 +1,16 @@
{
  "alert": {
    "threshold": "125%",
    "bigger_is_better": true
  },
  "result_extraction": {
    "search_pattern": "Pipe bandwidth",
    "result_index": 3
  },
  "chart": {
    "title": "[Pipes] The bandwidth",
    "description": "bw_pipe",
    "unit": "MB/s",
    "legend": "Average pipe bandwidth on {system}"
  }
}
@@ -1,8 +0,0 @@
{
  "alert_threshold": "125%",
  "alert_tool": "customBiggerIsBetter",
  "search_pattern": "Pipe bandwidth",
  "result_index": "3",
  "description": "bw_pipe",
  "title": "[Pipes] The bandwidth"
}
@@ -1,14 +0,0 @@
[
  {
    "name": "Average pipe bandwidth on Linux",
    "unit": "MB/s",
    "value": 0,
    "extra": "linux_result"
  },
  {
    "name": "Average pipe bandwidth on Asterinas",
    "unit": "MB/s",
    "value": 0,
    "extra": "aster_result"
  }
]
16 test/benchmark/lmbench/pipe_lat/bench_result.json Normal file
@@ -0,0 +1,16 @@
{
  "alert": {
    "threshold": "125%",
    "bigger_is_better": false
  },
  "result_extraction": {
    "search_pattern": "Pipe latency",
    "result_index": 3
  },
  "chart": {
    "title": "[Pipes] The cost of write+read (1B)",
    "description": "lat_pipe",
    "unit": "\u00b5s",
    "legend": "Average pipe latency on {system}"
  }
}
@@ -1,8 +0,0 @@
{
  "alert_threshold": "125%",
  "alert_tool": "customSmallerIsBetter",
  "search_pattern": "Pipe latency",
  "result_index": "3",
  "description": "lat_pipe",
  "title": "[Pipes] The cost of write+read (1B)"
}
@@ -1,14 +0,0 @@
[
  {
    "name": "Average pipe latency on Linux",
    "unit": "µs",
    "value": 0,
    "extra": "linux_result"
  },
  {
    "name": "Average pipe latency on Asterinas",
    "unit": "µs",
    "value": 0,
    "extra": "aster_result"
  }
]
16 test/benchmark/lmbench/process_ctx_lat/bench_result.json Normal file
@@ -0,0 +1,16 @@
{
  "alert": {
    "threshold": "125%",
    "bigger_is_better": false
  },
  "result_extraction": {
    "search_pattern": "18 ",
    "result_index": 2
  },
  "chart": {
    "title": "[Process] The cost of context switching",
    "description": "lat_ctx 2",
    "unit": "\u00b5s",
    "legend": "Average context switch latency on {system}"
  }
}
@@ -1,8 +0,0 @@
{
  "alert_threshold": "125%",
  "alert_tool": "customSmallerIsBetter",
  "search_pattern": "18 ",
  "result_index": "2",
  "description": "lat_ctx 2",
  "title": "[Process] The cost of context switching"
}
@@ -1,14 +0,0 @@
[
  {
    "name": "Average context switch latency on Linux",
    "unit": "µs",
    "value": 0,
    "extra": "linux_result"
  },
  {
    "name": "Average context switch latency on Asterinas",
    "unit": "µs",
    "value": 0,
    "extra": "aster_result"
  }
]
16 test/benchmark/lmbench/process_exec_lat/bench_result.json Normal file
@@ -0,0 +1,16 @@
{
  "alert": {
    "threshold": "125%",
    "bigger_is_better": false
  },
  "result_extraction": {
    "search_pattern": "Process fork\\+execve",
    "result_index": 3
  },
  "chart": {
    "title": "[Process] The cost of fork+exec+exit",
    "description": "lat_proc exec",
    "unit": "\u00b5s",
    "legend": "Average exec latency on {system}"
  }
}
@@ -1,8 +0,0 @@
{
  "alert_threshold": "125%",
  "alert_tool": "customSmallerIsBetter",
  "search_pattern": "Process fork\\+execve",
  "result_index": "3",
  "description": "lat_proc exec",
  "title": "[Process] The cost of fork+exec+exit"
}
@@ -1,14 +0,0 @@
[
  {
    "name": "Average exec latency on Linux",
    "unit": "µs",
    "value": 0,
    "extra": "linux_result"
  },
  {
    "name": "Average exec latency on Asterinas",
    "unit": "µs",
    "value": 0,
    "extra": "aster_result"
  }
]
16 test/benchmark/lmbench/process_fork_lat/bench_result.json Normal file
@@ -0,0 +1,16 @@
{
  "alert": {
    "threshold": "125%",
    "bigger_is_better": false
  },
  "result_extraction": {
    "search_pattern": "Process fork",
    "result_index": 3
  },
  "chart": {
    "title": "[Process] The cost of fork+exit",
    "description": "lat_proc fork",
    "unit": "\u00b5s",
    "legend": "Average Fork latency on {system}"
  }
}
@@ -1,8 +0,0 @@
{
  "alert_threshold": "125%",
  "alert_tool": "customSmallerIsBetter",
  "search_pattern": "Process fork",
  "result_index": "3",
  "description": "lat_proc fork",
  "title": "[Process] The cost of fork+exit"
}
@@ -1,14 +0,0 @@
[
  {
    "name": "Average Fork latency on Linux",
    "unit": "µs",
    "value": 0,
    "extra": "linux_result"
  },
  {
    "name": "Average Fork latency on Asterinas",
    "unit": "µs",
    "value": 0,
    "extra": "aster_result"
  }
]
16 test/benchmark/lmbench/process_getppid_lat/bench_result.json Normal file
@@ -0,0 +1,16 @@
{
  "alert": {
    "threshold": "125%",
    "bigger_is_better": false
  },
  "result_extraction": {
    "search_pattern": "Simple syscall:",
    "result_index": 3
  },
  "chart": {
    "title": "[Process] The cost of getppid",
    "description": "lat_syscall null",
    "unit": "\u00b5s",
    "legend": "Average syscall latency on {system}"
  }
}
@@ -1,8 +0,0 @@
{
  "alert_threshold": "125%",
  "alert_tool": "customSmallerIsBetter",
  "search_pattern": "Simple syscall:",
  "result_index": "3",
  "description": "lat_syscall null",
  "title": "[Process] The cost of getppid"
}
@@ -1,14 +0,0 @@
[
  {
    "name": "Average syscall latency on Linux",
    "unit": "µs",
    "value": 0,
    "extra": "linux_result"
  },
  {
    "name": "Average syscall latency on Asterinas",
    "unit": "µs",
    "value": 0,
    "extra": "aster_result"
  }
]
16 test/benchmark/lmbench/process_shell_lat/bench_result.json Normal file
@@ -0,0 +1,16 @@
{
  "alert": {
    "threshold": "125%",
    "bigger_is_better": false
  },
  "result_extraction": {
    "search_pattern": "Process fork\\+\\/bin\\/sh",
    "result_index": 4
  },
  "chart": {
    "title": "[Process] The cost of fork+exec+shell+exit",
    "description": "lat_proc shell",
    "unit": "\u00b5s",
    "legend": "Average shell latency on {system}"
  }
}
@@ -1,8 +0,0 @@
{
  "alert_threshold": "125%",
  "alert_tool": "customSmallerIsBetter",
  "search_pattern": "Process fork\\+\\/bin\\/sh",
  "result_index": "4",
  "description": "lat_proc shell",
  "title": "[Process] The cost of fork+exec+shell+exit"
}
@@ -1,14 +0,0 @@
[
  {
    "name": "Average shell latency on Linux",
    "unit": "µs",
    "value": 0,
    "extra": "linux_result"
  },
  {
    "name": "Average shell latency on Asterinas",
    "unit": "µs",
    "value": 0,
    "extra": "aster_result"
  }
]
16 test/benchmark/lmbench/ramfs_copy_files_bw/bench_result.json Normal file
@@ -0,0 +1,16 @@
{
  "alert": {
    "threshold": "125%",
    "bigger_is_better": true
  },
  "result_extraction": {
    "search_pattern": "lmdd result:",
    "result_index": 8
  },
  "chart": {
    "title": "[Ramfs] The bandwidth of copying data between files",
    "description": "lmdd",
    "unit": "MB/s",
    "legend": "Average file copy bandwidth on {system}"
  }
}
@@ -1,8 +0,0 @@
{
  "alert_threshold": "125%",
  "alert_tool": "customBiggerIsBetter",
  "search_pattern": "lmdd result:",
  "result_index": "8",
  "description": "lmdd",
  "title": "[Ramfs] The bandwidth of copying data between files"
}
@@ -1,14 +0,0 @@
[
  {
    "name": "Average file copy bandwidth on Linux",
    "unit": "MB/s",
    "value": 0,
    "extra": "linux_result"
  },
  {
    "name": "Average file copy bandwidth on Asterinas",
    "unit": "MB/s",
    "value": 0,
    "extra": "aster_result"
  }
]
@@ -0,0 +1,16 @@
{
  "alert": {
    "threshold": "125%",
    "bigger_is_better": true
  },
  "result_extraction": {
    "search_pattern": "^0k",
    "result_index": 2
  },
  "chart": {
    "title": "[Ramfs] The throughput of creating/deleting small files (0KB)",
    "description": "lat_fs -s 0k",
    "unit": "number",
    "legend": "Number of created/deleted files on {system}"
  }
}
@@ -1,8 +0,0 @@
{
  "alert_threshold": "125%",
  "alert_tool": "customBiggerIsBetter",
  "search_pattern": "^0k",
  "result_index": "2",
  "description": "lat_fs -s 0k",
  "title": "[Ramfs] The throughput of creating/deleting small files (0KB)"
}
@@ -1,14 +0,0 @@
[
  {
    "name": "Number of created/deleted files on Linux",
    "unit": "number",
    "value": 0,
    "extra": "linux_result"
  },
  {
    "name": "Number of created/deleted files on Asterinas",
    "unit": "number",
    "value": 0,
    "extra": "aster_result"
  }
]
@@ -0,0 +1,16 @@
{
  "alert": {
    "threshold": "125%",
    "bigger_is_better": true
  },
  "result_extraction": {
    "search_pattern": "10k",
    "result_index": 2
  },
  "chart": {
    "title": "[Ramfs] The throughput of creating/deleting small files (10KB)",
    "description": "lat_fs -s 10K",
    "unit": "number",
    "legend": "Number of created/deleted files on {system}"
  }
}
@@ -1,8 +0,0 @@
{
  "alert_threshold": "125%",
  "alert_tool": "customBiggerIsBetter",
  "search_pattern": "10k",
  "result_index": "2",
  "description": "lat_fs -s 10K",
  "title": "[Ramfs] The throughput of creating/deleting small files (10KB)"
}
@@ -1,14 +0,0 @@
[
  {
    "name": "Number of created/deleted files on Linux",
    "unit": "number",
    "value": 0,
    "extra": "linux_result"
  },
  {
    "name": "Number of created/deleted files on Asterinas",
    "unit": "number",
    "value": 0,
    "extra": "aster_result"
  }
]
16 test/benchmark/lmbench/semaphore_lat/bench_result.json Normal file
@@ -0,0 +1,16 @@
{
  "alert": {
    "threshold": "125%",
    "bigger_is_better": false
  },
  "result_extraction": {
    "search_pattern": "Semaphore latency:",
    "result_index": 3
  },
  "chart": {
    "title": "[Semaphores] The cost of semop",
    "description": "lat_sem",
    "unit": "\u00b5s",
    "legend": "Average semaphore latency on {system}"
  }
}
@@ -1,8 +0,0 @@
{
  "alert_threshold": "125%",
  "alert_tool": "customSmallerIsBetter",
  "search_pattern": "Semaphore latency:",
  "result_index": "3",
  "description": "lat_sem",
  "title": "[Semaphores] The cost of semop"
}
@@ -1,14 +0,0 @@
[
  {
    "name": "Average semaphore latency on Linux",
    "unit": "µs",
    "value": 0,
    "extra": "linux_result"
  },
  {
    "name": "Average semaphore latency on Asterinas",
    "unit": "µs",
    "value": 0,
    "extra": "aster_result"
  }
]
16 test/benchmark/lmbench/signal_catch_lat/bench_result.json Normal file
@@ -0,0 +1,16 @@
{
  "alert": {
    "threshold": "125%",
    "bigger_is_better": false
  },
  "result_extraction": {
    "search_pattern": "Signal handler overhead:",
    "result_index": 4
  },
  "chart": {
    "title": "[Signals] The cost of catching a signal",
    "description": "lat_sig catch",
    "unit": "\u00b5s",
    "legend": "Average Signal handler overhead on {system}"
  }
}
@@ -1,8 +0,0 @@
{
  "alert_threshold": "125%",
  "alert_tool": "customSmallerIsBetter",
  "search_pattern": "Signal handler overhead:",
  "result_index": "4",
  "description": "lat_sig catch",
  "title": "[Signals] The cost of catching a signal"
}
@@ -1,14 +0,0 @@
[
  {
    "name": "Average Signal handler overhead on Linux",
    "unit": "µs",
    "value": 0,
    "extra": "linux_result"
  },
  {
    "name": "Average Signal handler overhead on Asterinas",
    "unit": "µs",
    "value": 0,
    "extra": "aster_result"
  }
]
16 test/benchmark/lmbench/signal_install_lat/bench_result.json Normal file
@@ -0,0 +1,16 @@
{
  "alert": {
    "threshold": "125%",
    "bigger_is_better": false
  },
  "result_extraction": {
    "search_pattern": "Signal handler installation:",
    "result_index": 4
  },
  "chart": {
    "title": "[Signals] The cost of installing a signal handler",
    "description": "lat_sig install",
    "unit": "\u00b5s",
    "legend": "Average Signal handler install latency on {system}"
  }
}
@@ -1,8 +0,0 @@
{
  "alert_threshold": "125%",
  "alert_tool": "customSmallerIsBetter",
  "search_pattern": "Signal handler installation:",
  "result_index": "4",
  "description": "lat_sig install",
  "title": "[Signals] The cost of installing a signal handler"
}
@@ -1,14 +0,0 @@
[
  {
    "name": "Average Signal handler install latency on Linux",
    "unit": "µs",
    "value": 0,
    "extra": "linux_result"
  },
  {
    "name": "Average Signal handler install latency on Asterinas",
    "unit": "µs",
    "value": 0,
    "extra": "aster_result"
  }
]
16 test/benchmark/lmbench/signal_prot_lat/bench_result.json Normal file
@@ -0,0 +1,16 @@
{
  "alert": {
    "threshold": "125%",
    "bigger_is_better": false
  },
  "result_extraction": {
    "search_pattern": "Protection fault:",
    "result_index": 3
  },
  "chart": {
    "title": "[Signals] The cost of catching a segfault",
    "description": "lat_sig prot",
    "unit": "\u00b5s",
    "legend": "Average protection fault latency on {system}"
  }
}
@@ -1,8 +0,0 @@
{
  "alert_threshold": "125%",
  "alert_tool": "customSmallerIsBetter",
  "search_pattern": "Protection fault:",
  "result_index": "3",
  "description": "lat_sig prot",
  "title": "[Signals] The cost of catching a segfault"
}
@@ -1,14 +0,0 @@
[
  {
    "name": "Average protection fault latency on Linux",
    "unit": "µs",
    "value": 0,
    "extra": "linux_result"
  },
  {
    "name": "Average protection fault latency on Asterinas",
    "unit": "µs",
    "value": 0,
    "extra": "aster_result"
  }
]
16 test/benchmark/lmbench/tcp_loopback_bw_4k/bench_result.json Normal file
@@ -0,0 +1,16 @@
{
  "alert": {
    "threshold": "125%",
    "bigger_is_better": true
  },
  "result_extraction": {
    "search_pattern": "0.004096 ",
    "result_index": 2
  },
  "chart": {
    "title": "[TCP sockets] The bandwidth (localhost, 4KB message)",
    "description": "bw_tcp -l",
    "unit": "MB/s",
    "legend": "Average TCP bandwidth on {system}"
  }
}
@@ -1,8 +0,0 @@
{
  "alert_threshold": "125%",
  "alert_tool": "customBiggerIsBetter",
  "search_pattern": "0.004096 ",
  "result_index": "2",
  "description": "bw_tcp -l",
  "title": "[TCP sockets] The bandwidth (localhost, 4KB message)"
}
@@ -1,14 +0,0 @@
[
  {
    "name": "Average TCP bandwidth on Linux",
    "unit": "MB/s",
    "value": 0,
    "extra": "linux_result"
  },
  {
    "name": "Average TCP bandwidth on Asterinas",
    "unit": "MB/s",
    "value": 0,
    "extra": "aster_result"
  }
]
16 test/benchmark/lmbench/tcp_loopback_bw_64k/bench_result.json Normal file
@@ -0,0 +1,16 @@
{
  "alert": {
    "threshold": "125%",
    "bigger_is_better": true
  },
  "result_extraction": {
    "search_pattern": "0.065536 ",
    "result_index": 2
  },
  "chart": {
    "title": "[TCP sockets] The bandwidth (localhost, 64KB message)",
    "description": "bw_tcp -l",
    "unit": "MB/s",
    "legend": "Average TCP bandwidth on {system}"
  }
}
@@ -1,8 +0,0 @@
{
  "alert_threshold": "125%",
  "alert_tool": "customBiggerIsBetter",
  "search_pattern": "0.065536 ",
  "result_index": "2",
  "description": "bw_tcp -l",
  "title": "[TCP sockets] The bandwidth (localhost, 64KB message)"
}
@@ -1,14 +0,0 @@
[
  {
    "name": "Average TCP bandwidth on Linux",
    "unit": "MB/s",
    "value": 0,
    "extra": "linux_result"
  },
  {
    "name": "Average TCP bandwidth on Asterinas",
    "unit": "MB/s",
    "value": 0,
    "extra": "aster_result"
  }
]
@@ -0,0 +1,16 @@
{
  "alert": {
    "threshold": "125%",
    "bigger_is_better": false
  },
  "result_extraction": {
    "search_pattern": "TCP\\/IP connection cost to 127.0.0.1:",
    "result_index": 6
  },
  "chart": {
    "title": "[TCP sockets] The latency of connect",
    "description": "lat_connect",
    "unit": "\u00b5s",
    "legend": "Average TCP connection latency on {system}"
  }
}
@@ -1,8 +0,0 @@
{
  "alert_threshold": "125%",
  "alert_tool": "customSmallerIsBetter",
  "search_pattern": "TCP\\/IP connection cost to 127.0.0.1:",
  "result_index": "6",
  "description": "lat_connect",
  "title": "[TCP sockets] The latency of connect"
}
@@ -1,14 +0,0 @@
[
  {
    "name": "Average TCP connection latency on Linux",
    "unit": "µs",
    "value": 0,
    "extra": "linux_result"
  },
  {
    "name": "Average TCP connection latency on Asterinas",
    "unit": "µs",
    "value": 0,
    "extra": "aster_result"
  }
]
@@ -0,0 +1,16 @@
{
  "alert": {
    "threshold": "125%",
    "bigger_is_better": true
  },
  "result_extraction": {
    "search_pattern": "Avg xfer: ",
    "result_index": 8
  },
  "chart": {
    "title": "[HTTP] The bandwidth",
    "description": "bw_http",
    "unit": "MB/s",
    "legend": "Average simple HTTP transaction bandwidth on {system}"
  }
}
@@ -1,8 +0,0 @@
{
  "alert_threshold": "125%",
  "alert_tool": "customBiggerIsBetter",
  "search_pattern": "Avg xfer: ",
  "result_index": "8",
  "description": "bw_http",
  "title": "[HTTP] The bandwidth"
}
@@ -1,14 +0,0 @@
[
  {
    "name": "Average simple HTTP transaction bandwidth on Linux",
    "unit": "MB/s",
    "value": 0,
    "extra": "linux_result"
  },
  {
    "name": "Average simple HTTP transaction bandwidth on Asterinas",
    "unit": "MB/s",
    "value": 0,
    "extra": "aster_result"
  }
]
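Across all the configs above, the `alert` block pairs a `threshold` (here always `125%`) with a direction flag, replacing the old `customBiggerIsBetter`/`customSmallerIsBetter` tool names. One plausible reading of that contract, sketched under the assumption that an alert fires when the new result is worse than the baseline by more than the threshold ratio (`should_alert` is a hypothetical helper, not the actual alert tool):

```python
def should_alert(current: float, baseline: float,
                 threshold: str = "125%",
                 bigger_is_better: bool = False) -> bool:
    """Assumed regression check: latency (smaller-is-better) alerts when it
    grows past baseline * ratio; throughput (bigger-is-better) alerts when
    it drops below baseline / ratio."""
    ratio = float(threshold.rstrip("%")) / 100.0
    if bigger_is_better:
        return current < baseline / ratio  # throughput regressed
    return current > baseline * ratio      # latency regressed

# A 30% latency increase trips a 125% threshold; a 20% increase does not.
print(should_alert(1.30, 1.00), should_alert(1.20, 1.00))
```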
Some files were not shown because too many files have changed in this diff.