Refactor mmio_buddy and mmio in Rust (#178)

* Refactor mmio_buddy and mmio in Rust

* Add documentation for the mmio buddy allocator

---------

Co-authored-by: longjin <longjin@RinGoTek.cn>
houmkh 2023-03-04 18:36:55 +08:00 committed by GitHub
parent f1284c3571
commit c2481452f8
12 changed files with 794 additions and 474 deletions


@@ -45,3 +45,145 @@ DragonOS implements a management mechanism for the MMIO address space; this section describes it.
1. Unmap the MMIO region from the page table.
2. Release the VMA of the MMIO region.
3. Return the address space to the MMIO buddy system.
## The MMIO buddy algorithm
### Definition of buddy blocks
&emsp;&emsp;Two memory blocks are called buddy blocks if they satisfy all three of the following conditions:
1. The two blocks have the same size.
2. The two blocks are adjacent in memory.
3. The two blocks were obtained by splitting the same larger block.
### The buddy algorithm
&emsp;&emsp;The buddy algorithm organizes the allocation and reclamation of large contiguous memory blocks, so as to reduce the external fragmentation produced while the system runs. Every block in the buddy system has a size of $2^n$ bytes. In DragonOS, the buddy memory pool manages 1TB of contiguous address space in total; the largest block is $1G$, i.e. $2^{30}B$, and the smallest block is $4K$, i.e. $2^{12}B$.
&emsp;&emsp;The core idea of the buddy algorithm is that whenever an application requests memory, it is handed the smallest block that is no smaller than the requested size, and every block handed out has a size of $2^nB$. (e.g. Suppose an application requests $3B$ of memory. There is no integer $n$ with $2^n = 3$, and $3 \in (2^1,2^2)$, so the system takes a block of size $2^2B$ and allocates it to the application, and the request completes successfully.)
&emsp;&emsp;What happens when the buddy system holds no block of such a "suitable" size? The system first looks for a larger block; if one is found, the larger block is split into suitably sized blocks and one of them is given to the application. (e.g. Suppose $3B$ is requested, and the smallest block larger than $3B$ currently in the system is $16B$. The $16B$ block is split into two $8B$ blocks: one goes back into the pool, and the other is split again into two $4B$ blocks. Of those two $4B$ blocks, one goes into the pool and the other is allocated to the application, and the request completes successfully.)
&emsp;&emsp;If no larger block can be found either, the system tries to merge smaller blocks until a block of the required size is obtained. (e.g. Suppose $3B$ is requested and the pool contains only two $2B$ blocks; the system merges the two $2B$ blocks into a single $4B$ block and allocates it to the application, and the request completes successfully.)
&emsp;&emsp;Finally, when the system can neither find a large enough block nor successfully merge smaller ones, it informs the application that memory is insufficient and the allocation fails.
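&emsp;&emsp;To make the rounding rule above concrete, here is a small Rust sketch of how a requested size maps to the exponent of the block that is actually handed out. It is illustrative only: the literals 12 and 30 stand in for the kernel's PAGE_4K_SHIFT and PAGE_1G_SHIFT constants, and the real rounding in this commit is done inside `mmio_create`.
```rust
/// Round a requested size (in bytes) up to the 2^exp block that the buddy
/// pool hands out. Minimal sketch; 12 and 30 stand in for PAGE_4K_SHIFT
/// and PAGE_1G_SHIFT.
fn request_to_exp(size: u32) -> Option<u32> {
    if size == 0 || size > (1 << 30) {
        return None; // nothing to allocate, or larger than the biggest (1G) block
    }
    // Position of the highest set bit: the largest n with 2^n <= size.
    let mut exp = 31 - size.leading_zeros();
    if exp < 12 {
        exp = 12; // never hand out less than a 4K block
    } else if !size.is_power_of_two() {
        exp += 1; // not an exact power of two: round up to the next size class
    }
    Some(exp)
}

fn main() {
    assert_eq!(request_to_exp(3), Some(12)); // 3 B   -> 4K block
    assert_eq!(request_to_exp(4096), Some(12)); // 4K  -> 4K block
    assert_eq!(request_to_exp(4097), Some(13)); // 4K+1 -> 8K block
    assert_eq!(request_to_exp(1 << 30), Some(30)); // 1G -> 1G block
}
```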
### Data structures of the buddy algorithm
```
MmioBuddyMemPool
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ │
│ pool_start_addr │
│ │
├─────────────────────────────────────────────────────────────────────────────────────┤
│ │
│ pool_size │
│ │
├─────────────────────────────────────────────────────────────────────────────────────┤
│ │
│ │
│ free_regions │
│ │
│ ┌────────────┐ │
│ │ │ ┌───────┐ ┌────────┐ │
│ │ ┌────────┬─┼────►│ ├────►│ │ │
│ │ │ list │ │ │ vaddr │ │ vaddr │ │
│ │ │ │◄├─────┤ │◄────┤ │ │
│ MmioFreeRegionList├────────┤ │ └───────┘ └────────┘ │
│ │ │num_free│ │ │
│ │ └────────┘ │ MmioBuddyAddrRegion │
│ MMIO_BUDDY_MIN_EXP - 12 │ 0 │ │
│ ├────────────┤ │
│ │ 1 │ │
│ ├────────────┤ │
│ │ 2 │ │
│ ├────────────┤ │
│ │ 3 │ │
│ ├────────────┤ │
│ │ ... │ │
│ ├────────────┤ │
│ │ ... │ │
│ ├────────────┤ │
│ MMIO_BUDDY_MAX_EXP - 12 │ 18 │ │
│ └────────────┘ │
│ │
│ │
│ │
└─────────────────────────────────────────────────────────────────────────────────────┘
```
```rust
/// The largest block is 1G, i.e. an exponent of 30
const MMIO_BUDDY_MAX_EXP: u32 = PAGE_1G_SHIFT;
/// The smallest block is 4K, i.e. an exponent of 12
const MMIO_BUDDY_MIN_EXP: u32 = PAGE_4K_SHIFT;
/// The free-list array has 19 entries (indices 0 to 18)
const MMIO_BUDDY_REGION_COUNT: u32 = MMIO_BUDDY_MAX_EXP - MMIO_BUDDY_MIN_EXP + 1;
/// The buddy memory pool
pub struct MmioBuddyMemPool {
    /// Starting address of the memory pool
    pool_start_addr: u64,
    /// Size of the memory pool, initialized to 1TB
    pool_size: u64,
    /// Array of free-block lists
    /// MMIO_BUDDY_REGION_COUNT = MMIO_BUDDY_MAX_EXP - MMIO_BUDDY_MIN_EXP + 1
    free_regions: [SpinLock<MmioFreeRegionList>; MMIO_BUDDY_REGION_COUNT as usize],
}
/// List of free blocks of one size class
pub struct MmioFreeRegionList {
    /// Linked list of the structs that describe the free blocks
    list: LinkedList<Box<MmioBuddyAddrRegion>>,
    /// Number of free blocks currently in this list
    num_free: i64,
}
/// Address-region struct used internally by the MMIO buddy system
pub struct MmioBuddyAddrRegion {
    /// Starting address of the block
    vaddr: u64,
}
```
### Design
&emsp;&emsp;DragonOS uses the `MmioBuddyMemPool` struct as the data structure of the buddy (for brevity, the buddy algorithm is simply called "buddy" below) memory pool. It records the starting address of the pool (pool_start_addr) and the total size of the memory in the pool (pool_size), and it maintains free_regions, an array of `MMIO_BUDDY_REGION_COUNT` doubly linked lists; each list in `free_regions` holds a number of free memory blocks (MmioBuddyAddrRegion).
&emsp;&emsp;The index into `free_regions` depends on the size of the block. Since every block has a size of $2^{n}$ bytes, let $exp = n$; index and exp are then related by $index = exp - 12$. (e.g. A block of $2^{12}$ bytes has $exp = 12$; by the formula above, $index = 12 - 12 = 0$, so that block is stored in `free_regions[0].list`.) With this conversion, taking out or releasing a block of size $2^n$ only requires operating on `free_regions[n - 12]`. In DragonOS the largest block in the buddy pool is $1G = 2^{30}$ bytes and the smallest is $4K = 2^{12}$ bytes, so $index \in [0,18]$.
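&emsp;&emsp;This conversion is implemented in the commit as the one-line helper `__exp2index`; the `main` function around it below is only a demonstration of the mapping.
```rust
/// exp -> index: a block of 2^exp bytes lives in free_regions[exp - 12].
#[inline(always)]
fn __exp2index(exp: u32) -> usize {
    (exp - 12) as usize
}

fn main() {
    assert_eq!(__exp2index(12), 0); // 4K blocks -> free_regions[0]
    assert_eq!(__exp2index(21), 9); // 2M blocks -> free_regions[9]
    assert_eq!(__exp2index(30), 18); // 1G blocks -> free_regions[18]
}
```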
&emsp;&emsp;As a memory-allocation mechanism, buddy serves every process. To keep the list data in free_regions consistent across processes, each list in `free_regions` is a free-block list (MmioFreeRegionList) protected by a {ref}`spinlock <_spinlock_doc_spinlock>` (SpinLock). `MmioFreeRegionList` wraps the linked list that actually stores the free-block descriptors (list) together with the length of that list (num_free). With the spinlock, only one process at a time may modify a given list (for example popping an element to allocate memory, or inserting an element to release memory).
&emsp;&emsp;The elements of `MmioFreeRegionList` are `MmioBuddyAddrRegion` structs; `MmioBuddyAddrRegion` records the starting address of a memory block (vaddr).
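&emsp;&emsp;To illustrate the locking discipline, the sketch below takes a single 4K block out of its size class. It is an assumption-laden simplification of what `__buddy_pop_region` does in the commit, written as if it lived inside `mmio_buddy.rs` (the fields are private to that module); `__exp2index` is the helper shown above.
```rust
// Minimal sketch, as if written inside mmio_buddy.rs (the struct fields are
// private to that module).
fn pop_one_4k_block(pool: &MmioBuddyMemPool) -> Option<Box<MmioBuddyAddrRegion>> {
    // Lock only the list that holds 2^12-byte blocks; other size classes stay free.
    let mut list_guard = pool.free_regions[__exp2index(12)].lock();
    match list_guard.list.pop_back() {
        Some(region) => {
            list_guard.num_free -= 1; // keep the counter consistent with the list
            Some(region)
        }
        // Empty size class: the real code would go on to split a larger block
        // or merge smaller ones instead of giving up here.
        None => None,
    }
}
```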
### 伙伴算法内部api
**P.S 以下函数均为MmioBuddyMemPool的成员函数。系统中已经创建了一个MmioBuddyMemPool类型的全局引用`MMIO_POOL`,如要使用以下函数,请以`MMIO_POOL.xxx()`形式使用以此形式使用则不需要传入self。**
| **函数名** | **描述** |
|:----------------------------------------------------------------- |:--------------------------------------------------------- |
| __create_region(&self, vaddr) | 将虚拟地址传入,创建新的内存块地址结构体 |
| __give_back_block(&self, vaddr, exp) | 将地址为vaddr幂为exp的内存块归还给buddy |
| __buddy_split(&self,region,exp,list_guard) | 将给定大小为$2^{exp}$的内存块一分为二,并插入内存块大小为$2^{exp-1}$的链表中 |
| __query_addr_region(&self,exp,list_guard) | 从buddy中申请一块大小为$2^{exp}$的内存块 |
| mmio_buddy_query_addr_region(&self,exp) | 对query_addr_region进行封装**请使用这个函数而不是__query_addr_region** |
| __buddy_add_region_obj(&self,region,list_guard) | 往指定的地址空间链表中添加一个内存块 |
| __buddy_block_vaddr(&self, vaddr, exp) | 根据地址和内存块大小,计算伙伴块虚拟内存的地址 |
| __pop_buddy_block( &self, vaddr,exp,list_guard) | 寻找并弹出指定内存块的伙伴块 |
| __buddy_pop_region( &self, list_guard) | 从指定空闲链表中取出内存区域 |
| __buddy_merge(&self,exp,list_guard,high_list_guard) | 合并所有$2^{exp}$大小的内存块 |
| __buddy_merge_blocks(&self,region_1,region_2,exp,high_list_guard) | 合并两个**已经从链表中取出的**内存块 |
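The buddy relation behind `__buddy_block_vaddr`, `__pop_buddy_block` and the merge functions is a single XOR: two blocks of size $2^{exp}$ are buddies exactly when their addresses differ only in bit $exp$. Below is a small standalone illustration of that relation (the pool's base offset is ignored here).
```rust
/// Address of the buddy of the block starting at `vaddr` with size 2^exp,
/// as computed by __buddy_block_vaddr in this commit.
fn buddy_block_vaddr(vaddr: u64, exp: u32) -> u64 {
    vaddr ^ (1 << exp)
}

fn main() {
    // Two 4K blocks (exp = 12) split from the same 8K parent are buddies:
    assert_eq!(buddy_block_vaddr(0x0000, 12), 0x1000);
    assert_eq!(buddy_block_vaddr(0x1000, 12), 0x0000);
    // 0x2000 is not the buddy of 0x1000: they come from different 8K parents,
    // so merging them would not yield a properly aligned 8K block.
    assert_ne!(buddy_block_vaddr(0x1000, 12), 0x2000);
}
```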
### Public buddy API
| **Function** | **Description** |
| ----------------------------------------------- | ------------------------------------------- |
| __mmio_buddy_init() | Initialize the buddy system (**called from mmio_init(); do not call it anywhere else**) |
| __exp2index(exp) | Convert the exponent exp of a block size $2^{exp}$ into the index of the pool array |
| mmio_create(size, vm_flags, res_vaddr, res_length) | Create an MMIO region whose size is `size` rounded up for alignment, and bind its VMA to initial_mm |
| mmio_release(vaddr, length) | Unmap the MMIO region at address vaddr of size length and return it to the buddy system |
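For orientation, here is a hypothetical caller-side sketch of the two public functions. It is not code from the commit: `PAGE_4K_SIZE` and `vm_flags_t` are assumed to be available through the kernel's C bindings, and `mmio_create` already adds VM_IO | VM_DONTCOPY to the flags internally.
```rust
/// Hypothetical usage sketch: request a 4K MMIO window, use it, give it back.
fn map_and_unmap_device_window() -> Result<(), i32> {
    let mut vaddr: u64 = 0;
    let mut length: u64 = 0;
    // Ask the buddy pool for a region; the size is rounded up to a power of two.
    let retval = mmio_create(
        PAGE_4K_SIZE,           // requested size in bytes
        0 as vm_flags_t,        // extra VMA flags; VM_IO | VM_DONTCOPY are added internally
        &mut vaddr as *mut u64,
        &mut length as *mut u64,
    );
    if retval != 0 {
        return Err(retval); // invalid size or no free MMIO address space
    }
    // ... map the device's physical registers into [vaddr, vaddr + length) ...
    // When the device is torn down, unmap the region and return it to the pool.
    let retval = mmio_release(vaddr, length);
    if retval != 0 {
        return Err(retval);
    }
    Ok(())
}
```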


@@ -1,3 +1,3 @@
-sphinx
+sphinx==5.0.2
-myst-parser
+myst-parser==0.18.0
 sphinx-rtd-theme


@@ -16,4 +16,9 @@ x86_64 = "0.14.10"
 [build-dependencies]
 bindgen = "0.61.0"
+
+[dependencies.lazy_static]
+version = "1.4.0"
+# lazy_static depends on the spin crate; in a no_std environment its spin_no_std feature must be enabled
+features = ["spin_no_std"]


@@ -35,3 +35,4 @@
 #include <process/process.h>
 #include <sched/sched.h>
 #include <time/sleep.h>
+#include <mm/mm-types.h>


@@ -33,6 +33,8 @@ mod smp;
 mod time;
 extern crate alloc;
+#[macro_use]
+extern crate lazy_static;
 use mm::allocator::KernelAllocator;


@@ -2,7 +2,7 @@
 CFLAGS += -I .
-all:mm.o slab.o mm-stat.o vma.o mmap.o utils.o mmio.o mmio-buddy.o
+all:mm.o slab.o mm-stat.o vma.o mmap.o utils.o mmio.o
 mm.o: mm.c
 	$(CC) $(CFLAGS) -c mm.c -o mm.o
@@ -25,5 +25,3 @@ utils.o: utils.c
 mmio.o: mmio.c
 	$(CC) $(CFLAGS) -c mmio.c -o mmio.o
-mmio-buddy.o: mmio-buddy.c
-	$(CC) $(CFLAGS) -c mmio-buddy.c -o mmio-buddy.o


@@ -1,258 +0,0 @@
#include "mmio-buddy.h"
#include <mm/slab.h>
/**
* @brief
*
*/
#define __exp2index(exp) (exp - 12)
/**
* @brief
*
*/
#define buddy_block_vaddr(vaddr, exp) (vaddr ^ (1UL << exp))
static struct mmio_buddy_mem_pool __mmio_pool; // mmio buddy内存池
/**
* @brief
*
* @param index
* @param region
* @return __always_inline
*/
static __always_inline void __buddy_add_region_obj(int index, struct __mmio_buddy_addr_region *region)
{
struct __mmio_free_region_list *lst = &__mmio_pool.free_regions[index];
list_init(&region->list);
list_append(&lst->list_head, &region->list);
++lst->num_free;
}
/**
* @brief
*
* @param vaddr
* @return
*/
static __always_inline struct __mmio_buddy_addr_region *__mmio_buddy_create_region(uint64_t vaddr)
{
// 申请内存块的空间
struct __mmio_buddy_addr_region *region =
(struct __mmio_buddy_addr_region *)kzalloc(sizeof(struct __mmio_buddy_addr_region), 0);
list_init(&region->list);
region->vaddr = vaddr;
return region;
}
/**
* @brief (2^exp)
*
* @param region
* @param exp
*/
static __always_inline void __buddy_split(struct __mmio_buddy_addr_region *region, int exp)
{
// 计算分裂出来的新的伙伴块的地址
struct __mmio_buddy_addr_region *new_region = __mmio_buddy_create_region(buddy_block_vaddr(region->vaddr, exp - 1));
__buddy_add_region_obj(__exp2index(exp - 1), region);
__buddy_add_region_obj(__exp2index(exp - 1), new_region);
}
/**
* @brief
*
* @param x
* @param y
* @param exp xy大小的幂
* @return int
*/
static __always_inline int __buddy_merge_blocks(struct __mmio_buddy_addr_region *x, struct __mmio_buddy_addr_region *y,
int exp)
{
// 判断这两个是否是一对伙伴
if (unlikely(x->vaddr != buddy_block_vaddr(y->vaddr, exp))) // 不是一对伙伴
return -EINVAL;
// === 是一对伙伴,将他们合并
// 减少计数的工作应在该函数外完成
// 释放y
__mmio_buddy_release_addr_region(y);
// 插入x
__buddy_add_region_obj(__exp2index(exp + 1), x);
return 0;
}
/**
* @brief ,
*
* @param exp
* @return __always_inline struct*
*/
static __always_inline struct __mmio_buddy_addr_region *__buddy_pop_region(int exp)
{
if (unlikely(list_empty(&__mmio_pool.free_regions[__exp2index(exp)].list_head)))
return NULL;
struct __mmio_buddy_addr_region *r = container_of(list_next(&__mmio_pool.free_regions[__exp2index(exp)].list_head),
struct __mmio_buddy_addr_region, list);
list_del(&r->list);
// 区域计数减1
--__mmio_pool.free_regions[__exp2index(exp)].num_free;
return r;
}
/**
* @brief
*
* @param x
* @param exp
* @return
*/
static __always_inline struct __mmio_buddy_addr_region *__find_buddy(struct __mmio_buddy_addr_region *x, int exp)
{
// 当前为空
if (unlikely(list_empty(&__mmio_pool.free_regions[__exp2index(exp)].list_head)))
return NULL;
// 遍历链表以寻找伙伴块
uint64_t buddy_vaddr = buddy_block_vaddr(x->vaddr, exp);
struct List *list = &__mmio_pool.free_regions[__exp2index(exp)].list_head;
do
{
list = list_next(list);
struct __mmio_buddy_addr_region *bd = container_of(list, struct __mmio_buddy_addr_region, list);
if (bd->vaddr == buddy_vaddr) // 找到了伙伴块
return bd;
} while (list_next(list) != &__mmio_pool.free_regions[__exp2index(exp)].list_head);
return NULL;
}
/**
* @brief (2^(exp+1))
*
* @param exp 2^exp
*/
static void __buddy_merge(int exp)
{
struct __mmio_free_region_list *free_list = &__mmio_pool.free_regions[__exp2index(exp)];
// 若链表为空
if (list_empty(&free_list->list_head))
return;
struct List *list = list_next(&free_list->list_head);
do
{
struct __mmio_buddy_addr_region *ptr = container_of(list, struct __mmio_buddy_addr_region, list);
// 寻找是否有伙伴块
struct __mmio_buddy_addr_region *bd = __find_buddy(ptr, exp);
// 一定要在merge之前执行,否则list就被重置了
list = list_next(list);
if (bd != NULL) // 找到伙伴块
{
free_list->num_free -= 2;
list_del(&ptr->list);
list_del(&bd->list);
__buddy_merge_blocks(ptr, bd, exp);
}
} while (list != &free_list->list_head);
}
/**
* @brief buddy中申请一块指定大小的内存区域
*
* @param exp (2^exp)
* @return struct __mmio_buddy_addr_region* NULL
*/
struct __mmio_buddy_addr_region *mmio_buddy_query_addr_region(int exp)
{
if (unlikely(exp > MMIO_BUDDY_MAX_EXP || exp < MMIO_BUDDY_MIN_EXP))
{
BUG_ON(1);
return NULL;
}
if (!list_empty(&__mmio_pool.free_regions[__exp2index(exp)].list_head))
goto has_block;
// 若没有符合要求的内存块,则先尝试分裂大的块
for (int cur_exp = exp; cur_exp <= MMIO_BUDDY_MAX_EXP; ++cur_exp)
{
if (unlikely(
list_empty(&__mmio_pool.free_regions[__exp2index(cur_exp)].list_head))) // 一直寻找到有空闲空间的链表
continue;
// 找到了,逐级向下split
for (int down_exp = cur_exp; down_exp > exp; --down_exp)
{
// 取出一块空闲区域
struct __mmio_buddy_addr_region *r = __buddy_pop_region(down_exp);
__buddy_split(r, down_exp);
}
break;
}
if (!list_empty(&__mmio_pool.free_regions[__exp2index(exp)].list_head))
goto has_block;
// 尝试合并小的伙伴块
for (int cur_exp = MMIO_BUDDY_MIN_EXP; cur_exp < exp; ++cur_exp)
__buddy_merge(cur_exp);
// 再次尝试获取符合要求的内存块若仍不成功则说明mmio空间耗尽
if (!list_empty(&__mmio_pool.free_regions[__exp2index(exp)].list_head))
goto has_block;
else
goto failed;
failed:;
return NULL;
has_block:; // 有可用的内存块,分配
return __buddy_pop_region(exp);
}
/**
* @brief buddy
*
* @param vaddr
* @param exp 2^exp
* @return int
*/
int __mmio_buddy_give_back(uint64_t vaddr, int exp)
{
// 确保内存对齐低位都要为0
if (vaddr & ((1UL << exp) - 1))
return -EINVAL;
struct __mmio_buddy_addr_region *region = __mmio_buddy_create_region(vaddr);
// 加入buddy
__buddy_add_region_obj(__exp2index(exp), region);
return 0;
}
/**
* @brief mmio的伙伴系统
*
*/
void mmio_buddy_init()
{
memset(&__mmio_pool, 0, sizeof(struct mmio_buddy_mem_pool));
spin_init(&__mmio_pool.op_lock);
// 初始化各个链表的头部
for (int i = 0; i < MMIO_BUDDY_REGION_COUNT; ++i)
{
list_init(&__mmio_pool.free_regions[i].list_head);
__mmio_pool.free_regions[i].num_free = 0;
}
// 创建一堆1GB的地址块
uint32_t cnt_1g_blocks = (MMIO_TOP - MMIO_BASE) / PAGE_1G_SIZE;
uint64_t vaddr_base = MMIO_BASE;
for (uint32_t i = 0; i < cnt_1g_blocks; ++i, vaddr_base += PAGE_1G_SIZE)
__mmio_buddy_give_back(vaddr_base, PAGE_1G_SHIFT);
}


@@ -1,79 +0,0 @@
#pragma once
#include <common/sys/types.h>
#include <common/glib.h>
#include "mm-types.h"
#include "mm.h"
#include "slab.h"
#define MMIO_BUDDY_MAX_EXP PAGE_1G_SHIFT
#define MMIO_BUDDY_MIN_EXP PAGE_4K_SHIFT
#define MMIO_BUDDY_REGION_COUNT (MMIO_BUDDY_MAX_EXP - MMIO_BUDDY_MIN_EXP + 1)
/**
* @brief mmio伙伴系统内部的地址区域结构体
*
*/
struct __mmio_buddy_addr_region
{
struct List list;
uint64_t vaddr; // 该内存对象起始位置的虚拟地址
};
/**
* @brief
*
*/
struct __mmio_free_region_list
{
struct List list_head;
int64_t num_free; // 空闲页的数量
};
/**
* @brief buddy内存池
*
*/
struct mmio_buddy_mem_pool
{
uint64_t pool_start_addr; // 内存池的起始地址
uint64_t pool_size; // 内存池的内存空间总大小
spinlock_t op_lock; // 操作锁
/**
* @brief
* i个元素代表大小为2^(i+12)
*/
struct __mmio_free_region_list free_regions[MMIO_BUDDY_REGION_COUNT];
};
/**
* @brief address region结构体
*
* @param region
*/
static __always_inline void __mmio_buddy_release_addr_region(struct __mmio_buddy_addr_region *region)
{
kfree(region);
}
/**
* @brief buddy
*
* @param vaddr
* @param exp 2^exp
* @return int
*/
int __mmio_buddy_give_back(uint64_t vaddr, int exp);
/**
* @brief mmio的伙伴系统
*
*/
void mmio_buddy_init();
/**
* @brief buddy中申请一块指定大小的内存区域
*
* @param exp (2^exp)
* @return struct __mmio_buddy_addr_region* NULL
*/
struct __mmio_buddy_addr_region *mmio_buddy_query_addr_region(int exp);


@@ -1,118 +1,9 @@
 #include "mmio.h"
-#include "mmio-buddy.h"
 #include <common/math.h>
+extern void __mmio_buddy_init();
 void mmio_init()
 {
-    mmio_buddy_init();
+    __mmio_buddy_init();
+    kinfo("mmio_init success");
 }
/**
* @brief mmio区域vma绑定到initial_mm
*
* @param size mmio区域的大小
* @param vm_flags vma设置成的标志
* @param res_vaddr -
* @param res_length -
* @return int
*/
int mmio_create(uint32_t size, vm_flags_t vm_flags, uint64_t *res_vaddr, uint64_t *res_size)
{
int retval = 0;
// 申请的内存超过允许的最大大小
if (unlikely(size > PAGE_1G_SIZE || size == 0))
return -EPERM;
// 计算要从buddy中申请地址空间大小(按照2的n次幂来对齐)
int size_exp = 31 - __clz(size);
if (size_exp < PAGE_4K_SHIFT)
{
size_exp = PAGE_4K_SHIFT;
size = PAGE_4K_SIZE;
}
else if (size & (~(1 << size_exp)))
{
++size_exp;
size = 1 << size_exp;
}
// 申请内存
struct __mmio_buddy_addr_region *buddy_region = mmio_buddy_query_addr_region(size_exp);
if (buddy_region == NULL) // 没有空闲的mmio空间了
return -ENOMEM;
*res_vaddr = buddy_region->vaddr;
*res_size = size;
// 释放region
__mmio_buddy_release_addr_region(buddy_region);
// ====创建vma===
// 设置vma flags
vm_flags |= (VM_IO | VM_DONTCOPY);
uint64_t len_4k = size % PAGE_2M_SIZE;
uint64_t len_2m = size - len_4k;
// 先创建2M的vma然后创建4k的
for (uint32_t i = 0; i < len_2m; i += PAGE_2M_SIZE)
{
retval = mm_create_vma(&initial_mm, buddy_region->vaddr + i, PAGE_2M_SIZE, vm_flags, NULL, NULL);
if (unlikely(retval != 0))
goto failed;
}
for (uint32_t i = len_2m; i < size; i += PAGE_4K_SIZE)
{
retval = mm_create_vma(&initial_mm, buddy_region->vaddr + i, PAGE_4K_SIZE, vm_flags, NULL, NULL);
if (unlikely(retval != 0))
goto failed;
}
return 0;
failed:;
kerror("failed to create mmio vma. pid=%d", current_pcb->pid);
// todo: 当失败时将已创建的vma删除
return retval;
}
/**
* @brief mmio的映射并将地址空间归还到buddy中
*
* @param vaddr
* @param length
* @return int
*/
int mmio_release(uint64_t vaddr, uint64_t length)
{
int retval = 0;
// 先将这些区域都unmap了
mm_unmap(&initial_mm, vaddr, length, false);
// 将这些区域加入buddy
for (uint64_t i = 0; i < length;)
{
struct vm_area_struct *vma = vma_find(&initial_mm, vaddr + i);
if (unlikely(vma == NULL))
{
kerror("mmio_release failed: vma not found. At address: %#018lx, pid=%ld", vaddr + i, current_pcb->pid);
return -EINVAL;
}
if (unlikely(vma->vm_start != (vaddr + i)))
{
kerror("mmio_release failed: addr_start is not equal to current: %#018lx.", vaddr + i);
return -EINVAL;
}
// 往buddy中插入内存块
retval = __mmio_buddy_give_back(vma->vm_start, 31 - __clz(vma->vm_end - vma->vm_start));
i += vma->vm_end - vma->vm_start;
// 释放vma结构体
vm_area_del(vma);
vm_area_free(vma);
if (unlikely(retval != 0))
goto give_back_failed;
}
return 0;
give_back_failed:;
kerror("mmio_release give_back failed: ");
return retval;
}


@@ -1,24 +1,7 @@
 #pragma once
 #include "mm.h"
+extern void mmio_buddy_init();
+extern void mmio_create();
+extern int mmio_release(int vaddr, int length);
 void mmio_init();
/**
* @brief mmio区域vma绑定到initial_mm
*
* @param size mmio区域的大小
* @param vm_flags vma设置成的标志
* @param res_vaddr -
* @param res_length -
* @return int
*/
int mmio_create(uint32_t size, vm_flags_t vm_flags, uint64_t * res_vaddr, uint64_t *res_size);
/**
* @brief mmio的映射并将地址空间归还到buddy中
*
* @param vaddr
* @param size
* @return int
*/
int mmio_release(uint64_t vaddr, uint64_t size);

kernel/src/mm/mmio_buddy.rs (new file, 634 lines)

@@ -0,0 +1,634 @@
use crate::{
arch::asm::current::current_pcb,
include::bindings::bindings::{
initial_mm, mm_create_vma, mm_unmap, vm_area_del, vm_area_free, vm_area_struct, vm_flags_t,
vma_find, EINVAL, ENOMEM, EPERM, MMIO_BASE, MMIO_TOP, PAGE_1G_SHIFT, PAGE_1G_SIZE,
PAGE_2M_SIZE, PAGE_4K_SHIFT, PAGE_4K_SIZE, VM_DONTCOPY, VM_IO,
},
kdebug, kerror,
libs::spinlock::{SpinLock, SpinLockGuard},
};
use alloc::{boxed::Box, collections::LinkedList, vec::Vec};
use core::{mem, ptr::null_mut};
// 最大的伙伴块的幂
const MMIO_BUDDY_MAX_EXP: u32 = PAGE_1G_SHIFT;
// 最小的伙伴块的幂
const MMIO_BUDDY_MIN_EXP: u32 = PAGE_4K_SHIFT;
// 内存池数组的范围
const MMIO_BUDDY_REGION_COUNT: u32 = MMIO_BUDDY_MAX_EXP - MMIO_BUDDY_MIN_EXP + 1;
lazy_static! {
pub static ref MMIO_POOL: MmioBuddyMemPool = MmioBuddyMemPool::new();
}
pub enum MmioResult {
SUCCESS,
EINVAL,
ENOFOUND,
WRONGEXP,
ISEMPTY,
}
/// @brief buddy内存池
pub struct MmioBuddyMemPool {
pool_start_addr: u64,
pool_size: u64,
free_regions: [SpinLock<MmioFreeRegionList>; MMIO_BUDDY_REGION_COUNT as usize],
}
impl Default for MmioBuddyMemPool {
fn default() -> Self {
MmioBuddyMemPool {
pool_start_addr: MMIO_BASE as u64,
pool_size: (MMIO_TOP - MMIO_BASE) as u64,
free_regions: unsafe { mem::zeroed() },
}
}
}
impl MmioBuddyMemPool {
fn new() -> Self {
return MmioBuddyMemPool {
..Default::default()
};
}
/// @brief 创建新的地址区域结构体
///
/// @param vaddr 虚拟地址
///
/// @return 创建好的地址区域结构体
fn __create_region(&self, vaddr: u64) -> Box<MmioBuddyAddrRegion> {
let mut region: Box<MmioBuddyAddrRegion> = Box::new(MmioBuddyAddrRegion::new());
region.vaddr = vaddr;
return region;
}
/// @brief 将内存块归还给buddy
///
/// @param vaddr 虚拟地址
///
/// @param exp 内存空间的大小2^exp
///
/// @param list_guard 【exp】对应的链表
///
/// @return Ok(i32) 返回0
///
/// @return Err(i32) 返回错误码
fn __give_back_block(&self, vaddr: u64, exp: u32) -> Result<i32, i32> {
// 确保内存对齐低位都要为0
if (vaddr & ((1 << exp) - 1)) != 0 {
return Err(-(EINVAL as i32));
}
let region: Box<MmioBuddyAddrRegion> = self.__create_region(vaddr);
// 加入buddy
let list_guard: &mut SpinLockGuard<MmioFreeRegionList> =
&mut self.free_regions[__exp2index(exp)].lock();
self.__buddy_add_region_obj(region, list_guard);
return Ok(0);
}
/// @brief 将给定大小为2^{exp}的内存块一分为二并插入内存块大小为2^{exp-1}的链表中
///
/// @param region 要被分割的地址区域结构体(保证其已经从链表中取出)
///
/// @param exp 要被分割的地址区域的大小的幂
///
/// @param list_guard 【exp-1】对应的链表
fn __buddy_split(
&self,
region: Box<MmioBuddyAddrRegion>,
exp: u32,
low_list_guard: &mut SpinLockGuard<MmioFreeRegionList>,
) {
let vaddr: u64 = self.__buddy_block_vaddr(region.vaddr, exp - 1);
let new_region: Box<MmioBuddyAddrRegion> = self.__create_region(vaddr);
self.__buddy_add_region_obj(region, low_list_guard);
self.__buddy_add_region_obj(new_region, low_list_guard);
}
/// @brief 从buddy中申请一块指定大小的内存区域
///
/// @param exp 要申请的内存块的大小的幂(2^exp)
///
/// @param list_guard exp对应的链表
///
/// @return Ok(Box<MmioBuddyAddrRegion>) 符合要求的内存区域。
///
/// @return Err(MmioResult)
/// - 没有满足要求的内存块时返回ENOFOUND
/// - 申请的内存块大小超过合法范围返回WRONGEXP
/// - 调用函数出错时,返回出错函数对应错误码
fn __query_addr_region(
&self,
exp: u32,
list_guard: &mut SpinLockGuard<MmioFreeRegionList>,
) -> Result<Box<MmioBuddyAddrRegion>, MmioResult> {
// 申请范围错误
if exp < MMIO_BUDDY_MIN_EXP || exp > MMIO_BUDDY_MAX_EXP {
kdebug!("__query_addr_region: exp wrong");
return Err(MmioResult::WRONGEXP);
}
// 没有恰好符合要求的内存块
// 注意exp对应的链表list_guard已上锁【注意避免死锁问题】
if list_guard.num_free == 0 {
// 找到最小符合申请范围的内存块
// 将大的内存块依次分成小块内存直到能够满足exp大小即将exp+1分成两块exp
for e in exp + 1..MMIO_BUDDY_MAX_EXP + 1 {
if self.free_regions[__exp2index(e) as usize].lock().num_free == 0 {
continue;
}
for e2 in (exp + 1..e + 1).rev() {
match self
.__buddy_pop_region(&mut self.free_regions[__exp2index(e2) as usize].lock())
{
Ok(region) => {
if e2 != exp + 1 {
// 要将分裂后的内存块插入到更小的链表中
let low_list_guard: &mut SpinLockGuard<MmioFreeRegionList> =
&mut self.free_regions[__exp2index(e2 - 1) as usize].lock();
self.__buddy_split(region, e2, low_list_guard);
} else {
// 由于exp对应的链表list_guard已经被锁住了 不能再加锁
// 所以直接将list_guard传入
self.__buddy_split(region, e2, list_guard);
}
}
Err(err) => {
kdebug!("buddy_pop_region get wrong");
return Err(err);
}
}
}
break;
}
// 判断是否获得了exp大小的内存块
if list_guard.num_free > 0 {
return Ok(list_guard.list.pop_back().unwrap());
}
// 拆分大内存块无法获得exp大小内存块
// 尝试用小内存块合成
// 即将两块exp合成一块exp+1
for e in MMIO_BUDDY_MIN_EXP..exp {
if e != exp - 1 {
let high_list_guard: &mut SpinLockGuard<MmioFreeRegionList> =
&mut self.free_regions[__exp2index(exp + 1)].lock();
match self.__buddy_merge(
e,
&mut self.free_regions[__exp2index(e) as usize].lock(),
high_list_guard,
) {
Ok(_) => continue,
Err(err) => {
return Err(err);
}
}
} else {
match self.__buddy_merge(
e,
&mut self.free_regions[__exp2index(e) as usize].lock(),
list_guard,
) {
Ok(_) => continue,
Err(err) => {
return Err(err);
}
}
}
}
//判断是否获得了exp大小的内存块
if list_guard.num_free > 0 {
return Ok(list_guard.list.pop_back().unwrap());
}
return Err(MmioResult::ENOFOUND);
} else {
return Ok(list_guard.list.pop_back().unwrap());
}
}
/// @brief 对query_addr_region进行封装
///
/// @param exp 内存区域的大小(2^exp)
///
/// @return Ok(Box<MmioBuddyAddrRegion>)符合要求的内存块信息结构体。
/// @return Err(MmioResult) 没有满足要求的内存块时返回__query_addr_region的错误码。
fn mmio_buddy_query_addr_region(
&self,
exp: u32,
) -> Result<Box<MmioBuddyAddrRegion>, MmioResult> {
let list_guard: &mut SpinLockGuard<MmioFreeRegionList> =
&mut self.free_regions[__exp2index(exp)].lock();
match self.__query_addr_region(exp, list_guard) {
Ok(ret) => return Ok(ret),
Err(err) => {
kdebug!("mmio_buddy_query_addr_region failed");
return Err(err);
}
}
}
/// @brief 往指定的地址空间链表中添加一个地址区域
///
/// @param region 要被添加的地址结构体
///
/// @param list_guard 目标链表
fn __buddy_add_region_obj(
&self,
region: Box<MmioBuddyAddrRegion>,
list_guard: &mut SpinLockGuard<MmioFreeRegionList>,
) {
list_guard.list.push_back(region);
list_guard.num_free += 1;
}
/// @brief 根据地址和内存块大小,计算伙伴块虚拟内存的地址
#[inline(always)]
fn __buddy_block_vaddr(&self, vaddr: u64, exp: u32) -> u64 {
return vaddr ^ (1 << exp);
}
/// @brief 寻找并弹出指定内存块的伙伴块
///
/// @param region 对应内存块的信息
///
/// @param exp 内存块大小
///
/// @param list_guard 【exp】对应的链表
///
/// @return Ok(Box<MmioBuddyAddrRegion) 返回伙伴块的引用
/// @return Err(MmioResult)
/// - 当链表为空返回ISEMPTY
/// - 没有找到伙伴块返回ENOFOUND
fn __pop_buddy_block(
&self,
vaddr: u64,
exp: u32,
list_guard: &mut SpinLockGuard<MmioFreeRegionList>,
) -> Result<Box<MmioBuddyAddrRegion>, MmioResult> {
if list_guard.list.len() == 0 {
return Err(MmioResult::ISEMPTY);
} else {
//计算伙伴块的地址
let buddy_vaddr = self.__buddy_block_vaddr(vaddr, exp);
// element 只会有一个元素
let mut element: Vec<Box<MmioBuddyAddrRegion>> = list_guard
.list
.drain_filter(|x| x.vaddr == buddy_vaddr)
.collect();
if element.len() == 1 {
list_guard.num_free -= 1;
return Ok(element.pop().unwrap());
}
//没有找到对应的伙伴块
return Err(MmioResult::ENOFOUND);
}
}
/// @brief 从指定空闲链表中取出内存区域
///
/// @param list_guard 【exp】对应的链表
///
/// @return Ok(Box<MmioBuddyAddrRegion>) 内存块信息结构体的引用。
///
/// @return Err(MmioResult) 当链表为空无法删除时返回ISEMPTY
fn __buddy_pop_region(
&self,
list_guard: &mut SpinLockGuard<MmioFreeRegionList>,
) -> Result<Box<MmioBuddyAddrRegion>, MmioResult> {
if !list_guard.list.is_empty() {
list_guard.num_free -= 1;
return Ok(list_guard.list.pop_back().unwrap());
}
return Err(MmioResult::ISEMPTY);
}
/// @brief 合并所有2^{exp}大小的内存块
///
/// @param exp 内存块大小的幂(2^exp)
///
/// @param list_guard exp对应的链表
///
/// @param high_list_guard exp+1对应的链表
///
/// @return Ok(MmioResult) 合并成功返回SUCCESS
/// @return Err(MmioResult)
/// - 内存块过少无法合并返回EINVAL
/// - __pop_buddy_block调用出错返回其错误码
/// - __buddy_merge_blocks调用出错返回其错误码
fn __buddy_merge(
&self,
exp: u32,
list_guard: &mut SpinLockGuard<MmioFreeRegionList>,
high_list_guard: &mut SpinLockGuard<MmioFreeRegionList>,
) -> Result<MmioResult, MmioResult> {
// 至少要两个内存块才能合并
if list_guard.num_free <= 1 {
return Err(MmioResult::EINVAL);
}
loop {
if list_guard.num_free <= 1 {
break;
}
// 获取内存块
let vaddr: u64 = list_guard.list.back().unwrap().vaddr;
// 获取伙伴内存块
match self.__pop_buddy_block(vaddr, exp, list_guard) {
Err(err) => {
return Err(err);
}
Ok(buddy_region) => {
let region: Box<MmioBuddyAddrRegion> = list_guard.list.pop_back().unwrap();
let copy_region: Box<MmioBuddyAddrRegion> = Box::new(MmioBuddyAddrRegion {
vaddr: region.vaddr,
});
// 在两块内存都被取出之后才进行合并
match self.__buddy_merge_blocks(region, buddy_region, exp, high_list_guard) {
Err(err) => {
// 如果合并失败了要将取出来的元素放回去
self.__buddy_add_region_obj(copy_region, list_guard);
kdebug!("__buddy_merge: __buddy_merge_blocks failed");
return Err(err);
}
Ok(_) => continue,
}
}
}
}
return Ok(MmioResult::SUCCESS);
}
/// @brief 合并两个【已经从链表中取出】的内存块
///
/// @param region_1 第一个内存块
///
/// @param region_2 第二个内存
///
/// @return Ok(MmioResult) 成功返回SUCCESS
///
/// @return Err(MmioResult) 两个内存块不是伙伴块,返回EINVAL
fn __buddy_merge_blocks(
&self,
region_1: Box<MmioBuddyAddrRegion>,
region_2: Box<MmioBuddyAddrRegion>,
exp: u32,
high_list_guard: &mut SpinLockGuard<MmioFreeRegionList>,
) -> Result<MmioResult, MmioResult> {
// 判断是否为伙伴块
if region_1.vaddr != self.__buddy_block_vaddr(region_2.vaddr, exp) {
return Err(MmioResult::EINVAL);
}
// 将大的块放进下一级链表
self.__buddy_add_region_obj(region_1, high_list_guard);
return Ok(MmioResult::SUCCESS);
}
}
/// @brief mmio伙伴系统内部的地址区域结构体
pub struct MmioBuddyAddrRegion {
vaddr: u64,
}
impl MmioBuddyAddrRegion {
pub fn new() -> Self {
return MmioBuddyAddrRegion {
..Default::default()
};
}
}
impl Default for MmioBuddyAddrRegion {
fn default() -> Self {
MmioBuddyAddrRegion {
vaddr: Default::default(),
}
}
}
/// @brief 空闲页数组结构体
pub struct MmioFreeRegionList {
/// 存储mmio_buddy的地址链表
list: LinkedList<Box<MmioBuddyAddrRegion>>,
/// 空闲块的数量
num_free: i64,
}
impl MmioFreeRegionList {
fn new() -> Self {
return MmioFreeRegionList {
..Default::default()
};
}
}
impl Default for MmioFreeRegionList {
fn default() -> Self {
MmioFreeRegionList {
list: Default::default(),
num_free: 0,
}
}
}
/// @brief 初始化mmio的伙伴系统
#[no_mangle]
pub extern "C" fn __mmio_buddy_init() {
// 创建一堆1GB的地址块
let cnt_1g_blocks: u32 = ((MMIO_TOP - MMIO_BASE) / PAGE_1G_SIZE as i64) as u32;
let mut vaddr_base: u64 = MMIO_BASE as u64;
for _ in 0..cnt_1g_blocks {
match MMIO_POOL.__give_back_block(vaddr_base, PAGE_1G_SHIFT) {
Ok(_) => {
vaddr_base += PAGE_1G_SIZE as u64;
}
Err(_) => {
kerror!("__mmio_buddy_init failed");
return;
}
}
}
}
/// @brief 将内存对象大小的幂转换成内存池中的数组的下标
///
/// @param exp内存大小
///
/// @return 内存池数组下标
#[inline(always)]
fn __exp2index(exp: u32) -> usize {
return (exp - 12) as usize;
}
/// @brief 创建一块mmio区域并将vma绑定到initial_mm
///
/// @param size mmio区域的大小字节
///
/// @param vm_flags 要把vma设置成的标志
///
/// @param res_vaddr 返回值-分配得到的虚拟地址
///
/// @param res_length 返回值-分配的虚拟地址空间长度
///
/// @return int 错误码
#[no_mangle]
pub extern "C" fn mmio_create(
size: u32,
vm_flags: vm_flags_t,
res_vaddr: *mut u64,
res_length: *mut u64,
) -> i32 {
if size > PAGE_1G_SIZE || size == 0 {
return -(EPERM as i32);
}
let mut retval: i32 = 0;
// 计算前导0
let mut size_exp: u32 = 31 - size.leading_zeros();
// 记录最终申请的空间大小
let mut new_size: u32 = size;
// 对齐要申请的空间大小
// 如果要申请的空间大小小于4k则分配4k
if size_exp < PAGE_4K_SHIFT {
new_size = PAGE_4K_SIZE;
size_exp = PAGE_4K_SHIFT;
} else if (new_size & (!(1 << size_exp))) != 0 {
// 向左对齐空间大小
size_exp += 1;
new_size = 1 << size_exp;
}
match MMIO_POOL.mmio_buddy_query_addr_region(size_exp) {
Ok(region) => {
unsafe {
*res_vaddr = region.vaddr;
*res_length = new_size as u64;
}
// 创建vma
let flags: u64 = vm_flags | (VM_IO | VM_DONTCOPY) as u64;
let len_4k: u64 = (new_size % PAGE_2M_SIZE) as u64;
let len_2m: u64 = new_size as u64 - len_4k;
let mut loop_i: u64 = 0;
// 先分配2M的vma
loop {
if loop_i >= len_2m {
break;
}
let vma: *mut *mut vm_area_struct = null_mut();
retval = unsafe {
mm_create_vma(
&mut initial_mm,
region.vaddr + loop_i,
PAGE_2M_SIZE.into(),
flags,
null_mut(),
vma,
)
};
if retval != 0 {
kdebug!(
"failed to create mmio 2m vma. pid = {:?}",
current_pcb().pid
);
unsafe {
vm_area_del(*vma);
vm_area_free(*vma);
}
return retval;
}
loop_i += PAGE_2M_SIZE as u64;
}
// 分配4K的vma
loop_i = len_2m;
loop {
if loop_i >= size as u64 {
break;
}
let vma: *mut *mut vm_area_struct = null_mut();
retval = unsafe {
mm_create_vma(
&mut initial_mm,
region.vaddr + loop_i,
PAGE_4K_SIZE.into(),
flags,
null_mut(),
vma,
)
};
if retval != 0 {
kdebug!(
"failed to create mmio 4k vma. pid = {:?}",
current_pcb().pid
);
unsafe {
vm_area_del(*vma);
vm_area_free(*vma);
}
return retval;
}
loop_i += PAGE_4K_SIZE as u64;
}
}
Err(_) => {
kdebug!("failed to create mmio vma.pid = {:?}", current_pcb().pid);
return -(ENOMEM as i32);
}
}
return retval;
}
/// @brief 取消mmio的映射并将地址空间归还到buddy中
///
/// @param vaddr 起始的虚拟地址
///
/// @param length 要归还的地址空间的长度
///
/// @return Ok(i32) 成功返回0
///
/// @return Err(i32) 失败返回错误码
#[no_mangle]
pub extern "C" fn mmio_release(vaddr: u64, length: u64) -> i32 {
//先将要释放的空间取消映射
unsafe {
mm_unmap(&mut initial_mm, vaddr, length, false);
}
let mut loop_i: u64 = 0;
loop {
if loop_i >= length {
break;
}
// 获取要释放的vma的结构体
let vma: *mut vm_area_struct = unsafe { vma_find(&mut initial_mm, vaddr + loop_i) };
if vma == null_mut() {
kdebug!(
"mmio_release failed: vma not found. At address: {:?}, pid = {:?}",
vaddr + loop_i,
current_pcb().pid
);
return -(EINVAL as i32);
}
// 检查vma起始地址是否正确
if unsafe { (*vma).vm_start != (vaddr + loop_i) } {
kdebug!(
"mmio_release failed: addr_start is not equal to current: {:?}. pid = {:?}",
vaddr + loop_i,
current_pcb().pid
);
return -(EINVAL as i32);
}
// 将vma对应空间归还
match MMIO_POOL.__give_back_block(unsafe { (*vma).vm_start }, unsafe {
31 - ((*vma).vm_end - (*vma).vm_start).leading_zeros()
}) {
Ok(_) => {
loop_i += unsafe { (*vma).vm_end - (*vma).vm_start };
unsafe {
vm_area_del(vma);
vm_area_free(vma);
}
}
Err(err) => {
// vma对应空间没有成功归还的话就不删除vma
kdebug!(
"mmio_release give_back failed: pid = {:?}",
current_pcb().pid
);
return err;
}
}
}
return 0;
}


@@ -1,2 +1,3 @@
 pub mod allocator;
 pub mod gfp;
+pub mod mmio_buddy;