Compare commits

...

129 commits

Author SHA1 Message Date
Erin 65e05c809c Changed CMP handling and added simple JMP 2023-09-15 08:43:12 +02:00
able dcd692405e examples 2023-09-12 01:38:32 -05:00
Erin cad2640a4e ABI proposal part 1 2023-09-08 10:46:41 +02:00
Erin e28f5b7924 a 2023-08-20 00:24:27 +02:00
Erin cb35c86add updated macro 2023-08-19 23:57:48 +02:00
Erin f468a02ad4 bye stuff 2023-08-19 23:46:47 +02:00
Erin 76350b5387 cleaned up deps 2023-08-19 23:24:31 +02:00
Erin 68ac6856df Address type, changed behaviour on address overflow 2023-08-18 02:31:49 +02:00
Erin d4b2a1a266 Move 2023-08-18 01:41:05 +02:00
Erin bd9b4e0364 Softpage improvements 2023-08-18 01:28:02 +02:00
Erin af1de4b9ec nope. 2023-08-17 01:37:53 +02:00
Erin 69bbd0ca79 SPID 2023-08-15 17:21:55 +02:00
Erin 9021acf61c Modified memory interface
I have no idea what I am doing rn
2023-08-15 17:05:10 +02:00
Erin a8f2e4fbdf Notice 2023-08-15 16:33:56 +02:00
Erin bcb0ec41e2 Move stuff, deprecate softpage 2023-08-15 16:32:59 +02:00
Erin 7dc8c6cca4 Some merges 2023-08-11 02:19:26 +02:00
Erin 96c5b07cfb h 2023-08-10 12:39:18 +02:00
Erin 4c38b1ffb5 move 2023-08-10 12:39:03 +02:00
Erin 34a82b55dc executable 2023-08-09 20:19:12 +02:00
Erin 12bde3a875 bai 2023-08-09 03:12:09 +02:00
Erin 340ee8bcf3 Edit 0x0 2023-08-09 03:01:42 +02:00
Erin b955b756e3 Comments 2023-08-09 02:59:11 +02:00
Erin afdcee9bd6 Forbid store 2023-08-09 02:57:25 +02:00
Erin 06ce899e71 Now finally, leaving Hardvard! 2023-08-09 02:53:55 +02:00
Erin 6268c96776 Von-Neumann? 2023-08-09 02:33:03 +02:00
Erin 5ac8da923f Added TX instruction (definitely not named after Texas) 2023-08-09 01:24:45 +02:00
Erin d992f40a82 Termination instruction 2023-08-09 01:24:13 +02:00
Erin e62264950a Changed memory interfacing 2023-08-08 03:14:19 +02:00
Erin 2b2d2f2434 fmt 2023-08-08 03:10:23 +02:00
Erin 67a7d8ee25 Added inner memory access 2023-08-08 03:10:11 +02:00
Erin 62d241e78c Changed stuff aroud 2023-08-08 03:03:15 +02:00
Erin 1e92797775 Abstraction of memory 2023-08-08 02:48:47 +02:00
Erin 2aad3a1002 Reimplemented BMC 2023-08-08 02:06:15 +02:00
Erin 2fb695b3a9 const perm check 2023-08-08 01:44:33 +02:00
Erin 33c0499977 Shrunk 2023-08-07 01:50:21 +02:00
Erin a2be0adefa Spec update 2023-08-07 01:43:29 +02:00
Erin 034b482817 Spec update 2023-08-07 01:41:26 +02:00
Erin e9e7f0c585 Changed magic 2023-08-01 22:20:11 +02:00
Erin cc71d00e35 a 2023-08-01 22:17:20 +02:00
Erin 540555d7a9 Added magic 2023-08-01 22:13:22 +02:00
Erin a1efc2dfe4 Link fix 2023-07-26 21:23:03 +02:00
Erin 7e1257a84d Nightly opts 2023-07-26 20:54:24 +02:00
Erin 64ae39295d Added some comments 2023-07-26 20:49:23 +02:00
Erin e3dd5ed944 Fixed mapping problems 2023-07-26 13:04:58 +02:00
Erin c55e3e82c9 Whoops, this is 5-level paging, not 6-level paging 2023-07-26 12:41:18 +02:00
Erin ab26de61f6 Fixed memory (un)mapping 2023-07-26 12:22:28 +02:00
Erin 14aa35d19a Fixed page size, fuzzer now does memory. 2023-07-26 03:27:31 +02:00
Erin 03195f4eef Decreased timeout 2023-07-26 02:35:27 +02:00
Erin f5c45da41f Increased timeout 2023-07-26 02:31:06 +02:00
Erin 8693d13e68 Increased timeout 2023-07-26 02:30:22 +02:00
Erin f9b36d7a8d Fixed few overflows 2023-07-26 02:28:14 +02:00
Erin 66ef81d8a0 BMC is now interruptable 2023-07-26 02:04:26 +02:00
Erin 9d27fb218d restruct + no-alloc support 2023-07-26 01:11:21 +02:00
Erin 5a26bf8299 Added fuzzy tests 2023-07-26 01:01:53 +02:00
Erin 7d8b1c6ed7 a 2023-07-26 00:17:10 +02:00
Erin 3740c88daa Added warning 2023-07-26 00:16:50 +02:00
Erin 972df2f6d7 Reworked macros 2023-07-26 00:12:50 +02:00
Erin 77d807a17d Added runtime bound checking 2023-07-26 00:01:25 +02:00
Erin 8b132dffe3 whoops, fixed builds. 2023-07-25 23:48:59 +02:00
Erin c274611746 Valider 2023-07-25 23:47:51 +02:00
Erin 74f98f610c Valider is now generated from macro (not done yet) 2023-07-25 23:43:06 +02:00
Erin 58310eb858 Quick valider fix 2023-07-25 23:03:06 +02:00
Erin 65efb64cdf Commented valider 2023-07-25 22:44:08 +02:00
able 86232e35a6 changes I GUESS 2023-07-25 12:20:35 -05:00
Erin 05e868999d Fixed endian stuffs 2023-07-25 19:10:00 +02:00
Erin c830688599 Added notice 2023-07-25 14:41:54 +02:00
Erin e1a423a355 Kekw 2023-07-24 20:41:10 +02:00
Erin ab4440ce3c Removed some macros 2023-07-24 18:48:42 +02:00
Erin df41adffde fixed imm shl/r 2023-07-24 16:48:13 +02:00
Erin 15d18ee169 Fixed panic on shift outta bounds
- Pointed out by 5225225
2023-07-24 16:37:37 +02:00
Erin d9eb6f1409 Fixed missing / 2023-07-22 02:42:43 +02:00
Erin 7a847d6585 added contribution guide to instructions 2023-07-22 02:42:21 +02:00
Erin 5fdf5d163a Name correction 2023-07-22 02:34:41 +02:00
Erin 1f54fc1e77 Edits. 2023-07-22 02:29:05 +02:00
Erin db2e5de20b Moved lore 2023-07-22 02:28:05 +02:00
Erin ee5a972921 A 2023-07-22 02:27:03 +02:00
Erin ce323fc2f7 added notice. 2023-07-22 02:26:29 +02:00
Erin 89c08a8602 More comments 2023-07-22 02:26:03 +02:00
Erin 29084d7e55 Removed pagetable hack 2023-07-22 01:06:41 +02:00
Erin 0a396cb601 Zero alloc BMC! 2023-07-22 01:03:09 +02:00
Erin d8eb78ff02 Fixed bug + spec update 2023-07-22 00:46:30 +02:00
Erin 8212ba2f29 Mapping + bye bye memory leaks 2023-07-20 20:47:50 +02:00
able 47c29f0ea5 code and stufd 2023-07-15 06:27:11 -05:00
able dff2542612 Merge branch 'master' of ssh://git.ablecorp.us:20/AbleOS/holey-bytes 2023-07-13 04:23:06 -05:00
able aa1a224427 Add some example code for hbasm 2023-07-13 04:23:00 -05:00
Erin 6808293bf9 Merge pull request 'Added UN instruction and fixed UB' (#7) from fix-ub into master
Reviewed-on: https://git.ablecorp.us/AbleOS/holey-bytes/pulls/7
2023-07-13 09:13:34 +00:00
Erin 2caebe0bb4 Update spec 2023-07-13 11:11:35 +02:00
Erin f272e38761 Added UN instruction and fixed UB 2023-07-13 11:10:07 +02:00
Erin 32e03f9bb2 Merge pull request 'Fixed the number of registers BRC copies' (#6) from bee/holey-bytes:master into master
Reviewed-on: https://git.ablecorp.us/AbleOS/holey-bytes/pulls/6
2023-07-13 09:09:44 +00:00
bee abdce1a873 Merge pull request 'merge' (#1) from AbleOS/holey-bytes:master into master
Reviewed-on: https://git.ablecorp.us/bee/holey-bytes/pulls/1
2023-07-12 17:13:38 +00:00
Egggggg 373f729452 fixed the number of registers BRC copies 2023-07-12 13:12:00 -04:00
Erin 6a03ba9b7b Map APIs 2023-07-12 14:56:11 +02:00
Egggggg 36c5e82c52 hehe oops 2023-07-12 06:50:07 -04:00
Erin a9e4aaba0e JMP → JAL + spec fix 2023-07-12 12:45:50 +02:00
Egggggg 860e8a6c2e fixed argument order of BMC and BRC 2023-07-12 06:25:38 -04:00
Erin ad9868c1c0 fixxed lint 2023-07-12 02:24:05 +02:00
Erin 116a228c5a special-cased BRC 2023-07-12 02:23:47 +02:00
Erin 271ab5a953 Rewritten assembler 2023-07-12 02:16:23 +02:00
Erin 3fc6bb9171 Revised trap API 2023-07-11 17:04:48 +02:00
able 6f4f156ca0 Merge pull request 'master' (#3) from IntoTheNight/holey-bytes:master into master
Reviewed-on: https://git.ablecorp.us/AbleOS/holey-bytes/pulls/3
2023-07-11 09:36:39 +00:00
IntoTheNight 73ad40b369 Merge pull request 'master' (#1) from AbleOS/holey-bytes:master into master
Reviewed-on: https://git.ablecorp.us/IntoTheNight/holey-bytes/pulls/1
2023-07-11 09:28:48 +00:00
MunirG05 0fb89ec4b3 the design is very human 2023-07-11 14:54:49 +05:30
MunirG05 f44220074d add fancy errors 2023-07-11 14:38:20 +05:30
Erin b218aa4a00 doc 2023-07-11 10:33:55 +02:00
MunirG05 63b2dc7514 tried to shove the timer back in 2023-07-11 14:03:25 +05:30
Erin 0351a954d0 Moved 2023-07-11 10:32:26 +02:00
Erin e32f0d1e61 wrap around timer 2023-07-11 10:31:03 +02:00
Erin 81f79dc7a5 Implement timer 2023-07-11 10:29:23 +02:00
Erin 7ca0b1d4eb Improved assembler library 2023-07-11 02:08:55 +02:00
Erin 447f8b2075 Moved 2023-07-10 23:18:23 +02:00
Erin b271d024cd Rename 2023-07-07 15:23:53 +02:00
Erin 7d17f48562 Updated flots 2023-07-07 15:22:03 +02:00
Erin 387d4c7ce7 assert char bit 2023-07-07 14:36:40 +02:00
Erin b7d4243113 Updated C header 2023-07-07 14:33:08 +02:00
Erin 3af50b29fb Updated spec 2023-07-07 14:33:07 +02:00
able 2d639797d9 HBASM: derp forgot that deps also need to be nostd 2023-06-26 05:23:52 -05:00
able a63c252c7a HBASM: no_std compatible now 2023-06-26 05:18:14 -05:00
Erin da1553d030 Improved unhandled trap handling 2023-06-25 00:28:20 +02:00
Erin f0a00ebb8d Stole docs 2023-06-25 00:21:40 +02:00
Erin 2bbf6ceee0 docs 2023-06-25 00:18:31 +02:00
Erin 2c9e315889 Implemented traps 2023-06-25 00:16:14 +02:00
able f53a42977d Initial work on a simple serial driver for ableos 2023-06-21 08:22:56 -05:00
able 8bc0d0020c Update to stable 2023-06-21 08:22:21 -05:00
able f58f801aa9 clear out assets 2023-06-21 07:54:10 -05:00
able a642b68474 NIX: fix nix-shell 2023-06-21 07:53:01 -05:00
Erin 79c367dc18 HoleyBytes, almost adhering the spec
- Changed instruction encoding to be faster to match on
- Implemented all instructions defined in spec
- Bytecode validation
- Assembler
- Implemented 5 level paging (based on SV57)
- Implemented some degree of interrupts (though not fully adhering the spec yet)
2023-06-21 02:07:48 +02:00
Erin 8b9a75adb4 a 2023-05-28 23:38:26 +02:00
Erin 7e233f4ae1 fixup32 2023-05-28 23:37:43 +02:00
Erin 0c69d80fc2 Changed register handling 2023-05-28 16:49:01 +02:00
59 changed files with 3532 additions and 531 deletions

195
Cargo.lock generated

@@ -13,6 +13,34 @@ dependencies = [
"version_check",
]
[[package]]
name = "allocator-api2"
version = "0.2.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0942ffc6dcaadf03badf6e6a2d0228460359d5e34b57ccdc720b7382dfbd5ec5"
[[package]]
name = "ariadne"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "72fe02fc62033df9ba41cba57ee19acf5e742511a140c7dbc3a873e19a19a1bd"
dependencies = [
"unicode-width",
"yansi",
]
[[package]]
name = "beef"
version = "0.5.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3a8241f3ebb85c056b509d4327ad0358fbbba6ffb340bf388f26350aeda225b1"
[[package]]
name = "bytemuck"
version = "1.13.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "17febce684fd15d89027105661fec94afb475cb995fbc59d2865198446ba2eea"
[[package]]
name = "cfg-if"
version = "1.0.0"
@@ -20,8 +48,10 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd"
[[package]]
-name = "compiler"
-version = "0.1.0"
+name = "fnv"
+version = "1.0.7"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "3f9eec918d3f24069decb9af1554cad7c880e2da24a9afd88aca000531ab82c1"
[[package]]
name = "hashbrown"
@@ -33,30 +63,169 @@ dependencies = [
]
[[package]]
-name = "hbvm"
-version = "0.1.0"
+name = "hashbrown"
+version = "0.14.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "2c6201b9ff9fd90a5a3bac2e56a830d0caa509576f0e503818ee82c181b3437a"
dependencies = [
-"hashbrown",
-"log",
+"ahash",
+"allocator-api2",
]
[[package]]
-name = "log"
-version = "0.4.17"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "abb12e687cfb44aa40f41fc3978ef76448f9b6038cad6aef4259d3c095a2382e"
+name = "hbasm"
+version = "0.1.0"
dependencies = [
-"cfg-if",
+"ariadne",
"bytemuck",
"hashbrown 0.14.0",
"hbbytecode",
"lasso",
"literify",
"logos",
"paste",
]
[[package]]
name = "hbbytecode"
version = "0.1.0"
[[package]]
name = "hbvm"
version = "0.1.0"
dependencies = [
"hbbytecode",
]
[[package]]
name = "lasso"
version = "0.7.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4644821e1c3d7a560fe13d842d13f587c07348a1a05d3a797152d41c90c56df2"
dependencies = [
"ahash",
"hashbrown 0.13.2",
]
[[package]]
name = "literify"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "54e4d365df794ed78b4ce1061886f82eae7afa8e3a98ce4c4b0bfd0c777b1175"
dependencies = [
"litrs",
"proc-macro2",
"quote",
]
[[package]]
name = "litrs"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4f17c3668f3cc1132437cdadc93dab05e52d592f06948d3f64828430c36e4a70"
dependencies = [
"proc-macro2",
]
[[package]]
name = "logos"
version = "0.13.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c000ca4d908ff18ac99b93a062cb8958d331c3220719c52e77cb19cc6ac5d2c1"
dependencies = [
"logos-derive",
]
[[package]]
name = "logos-codegen"
version = "0.13.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dc487311295e0002e452025d6b580b77bb17286de87b57138f3b5db711cded68"
dependencies = [
"beef",
"fnv",
"proc-macro2",
"quote",
"regex-syntax",
"syn",
]
[[package]]
name = "logos-derive"
version = "0.13.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dbfc0d229f1f42d790440136d941afd806bc9e949e2bcb8faa813b0f00d1267e"
dependencies = [
"logos-codegen",
]
[[package]]
name = "once_cell"
-version = "1.17.1"
+version = "1.18.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b7e5500299e16ebb147ae15a00a942af264cf3688f47923b8fc2cd5858f23ad3"
+checksum = "dd8b5dd2ae5ed71462c540258bedcb51965123ad7e7ccf4b9a8cafaa4a63576d"
[[package]]
name = "paste"
version = "1.0.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "de3145af08024dea9fa9914f381a17b8fc6034dfb00f3a84013f7ff43f29ed4c"
[[package]]
name = "proc-macro2"
version = "1.0.66"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "18fb31db3f9bddb2ea821cde30a9f70117e3f119938b5ee630b7403aa6e2ead9"
dependencies = [
"unicode-ident",
]
[[package]]
name = "quote"
version = "1.0.33"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5267fca4496028628a95160fc423a33e8b2e6af8a5302579e322e4b520293cae"
dependencies = [
"proc-macro2",
]
[[package]]
name = "regex-syntax"
version = "0.6.29"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f162c6dd7b008981e4d40210aca20b4bd0f9b60ca9271061b07f78537722f2e1"
[[package]]
name = "syn"
version = "2.0.29"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c324c494eba9d92503e6f1ef2e6df781e78f6a7705a0202d9801b198807d518a"
dependencies = [
"proc-macro2",
"quote",
"unicode-ident",
]
[[package]]
name = "unicode-ident"
version = "1.0.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "301abaae475aa91687eb82514b328ab47a211a533026cb25fc3e519b86adfc3c"
[[package]]
name = "unicode-width"
version = "0.1.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c0edd1e5b14653f783770bce4a4dabb4a5108a5370a5f5d8cfe8710c361f6c8b"
[[package]]
name = "version_check"
version = "0.9.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "49874b5167b65d7193b8aba1567f5c7d93d001cafc34600cee003eda787e483f"
[[package]]
name = "yansi"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "09041cd90cf85f7f8b2df60c646f853b7f535ce68f85244eb6731cf89fa498ec"


@@ -1,2 +1,3 @@
[workspace]
-members = ["hbvm", "compiler"]
+resolver = "2"
+members = ["hbasm", "hbbytecode", "hbvm"]


@ -1,28 +0,0 @@
# Math operations
```
MATH_OP
Add
Sub
Mul
Div
Mod
```
```
MATH_TYPE
Unsigned
Signed
FloatingPoint
```
```
MATH_OP_SIDES
Register Constant
Register Register
Constant Constant
Constant Register
```
`[MATH_OP] [MATH_OP_SIDES] [MATH_TYPE] [IMM_LHS] [IMM_RHS] [REG]`


@ -1,4 +0,0 @@
load 0 a0 ;; 05 00 A0
load 10 a1 ;; 05 10 A1
add a0 1 a0 ;; 01 A0 01 A0
jump_neq a0 a1 0 ;; a1 A0 A1 0


@ -1,4 +0,0 @@
load 10 A1
load 0 A0
add A0 1
jump_less_than A0 A1 0

29
c-abi.md Normal file

@ -0,0 +1,29 @@
# C ABI (proposal)
## C datatypes
| C Type | Description | Size (B) |
|:------------|:-------------------------|-------------:|
| char | Character / byte | 8 |
| short | Short integer | 16 |
| int | Integer | 32 |
| long | Long integer | 64 |
| long long | Long long integer | 64 |
| T* | Pointer | 64 |
| float | Single-precision float | 32 |
| double | Double-precision float | 64 |
| long double | Extended-precision float | **Bikeshed** |
## Registers
| Register | ABI Name | Description | Saver |
|:---------|:---------|:---------------|:-------|
| `r0` | — | Zero register | N/A |
| `r1` | `ra` | Return address | Caller |
| `r2` | `sp` | Stack pointer | Callee |
| `r3` | `tp` | Thread pointer | N/A |
**TODO:** Parameters
**TODO:** Saved
**TODO:** Temp
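The register assignments above can be mirrored as plain constants when emitting code by hand; a minimal Rust sketch, with hypothetical constant names that are not part of the repository:

```rust
/// Register indices from the ABI proposal above.
/// The names are illustrative helpers, not definitions from hbbytecode.
pub const ZERO: u8 = 0; // r0, hardwired zero register
pub const RA: u8 = 1; // r1, return address (caller-saved)
pub const SP: u8 = 2; // r2, stack pointer (callee-saved)
pub const TP: u8 = 3; // r3, thread pointer
```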


@ -1,8 +0,0 @@
[package]
name = "compiler"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]


@ -1,5 +0,0 @@
fn main() {
let prog = "load 1, A0
jump 0";
println!("Hello, world!");
}

22
hbasm/Cargo.toml Normal file

@ -0,0 +1,22 @@
[package]
name = "hbasm"
version = "0.1.0"
edition = "2021"
[dependencies]
ariadne = "0.3"
bytemuck = "1.13"
hashbrown = "0.14"
hbbytecode = { path = "../hbbytecode" }
literify = "0.1"
paste = "1.0"
[dependencies.lasso]
version = "0.7"
default-features = false
features = ["no-std"]
[dependencies.logos]
version = "0.13"
default-features = false
features = ["export_derive"]

12
hbasm/assets/add.hbasm Normal file

@ -0,0 +1,12 @@
-- Add two numbers
-- A + B = C
-- r1 A
li r1, 2
-- r2 Result
li r2, 0
-- B = 4
addi r2, r1, 4
-- terminate execution
tx


@ -0,0 +1,16 @@
-- r1 will be the temp in fahrenheit
-- r2 temp in celsius
-- r3/r4/r5 will be used by constants
-- (f - 32) * 5 / 9
li r1, 100
li r3, 32
li r4, 5
li r5, 9
sub r2, r1, r3
mul r2, r2, r4
dir r2, r0, r2, r5
tx

14
hbasm/assets/ecall.hbasm Normal file

@ -0,0 +1,14 @@
li r255, 0
ecall
li r255, 1
li r254, 1
li r253, 100
ecall
li r255, 2
li r254, 0
li r253, 0
ecall
tx


@ -0,0 +1,2 @@
L:
jal r0, r0, L


@ -0,0 +1,4 @@
li r20, 1010
st r20, r24, 0, 1
addi r24, r0, 10
tx


@ -0,0 +1,18 @@
jmp r0, start
start:
jmp r0, init_serial_port
-- Uses r20 to set the port
init_serial_port:
add r20, r30, r10
li r20, 00
-- outb(PORT + 1, 0x00); // Disable all interrupts
-- outb(PORT + 3, 0x80); // Enable DLAB (set baud rate divisor)
-- outb(PORT + 0, 0x03); // Set divisor to 3 (lo byte) 38400 baud
-- outb(PORT + 1, 0x00); // (hi byte)
-- outb(PORT + 3, 0x03); // 8 bits, no parity, one stop bit
-- outb(PORT + 2, 0xC7); // Enable FIFO, clear them, with 14-byte threshold
-- outb(PORT + 4, 0x0B); // IRQs enabled, RTS/DSR set
-- outb(PORT + 4, 0x1E); // Set in loopback mode, test the serial chip
-- outb(PORT + 0, 0xAE); // Test serial chip (send byte 0xAE and check if serial returns same byte)

104
hbasm/src/lib.rs Normal file

@ -0,0 +1,104 @@
//! Holey Bytes Assembler
//!
//! Some people claim:
//! > Write programs to handle text streams, because that is a universal interface.
//!
//! We at AbleCorp believe that a nice programmatic API is nicer than piping some text
//! into a program. It's less error-prone and faster.
//!
//! So this crate contains both an assembler with an API for programs and a text assembler
//! for humans to write.
#![no_std]
extern crate alloc;
mod macros;
use {
alloc::{vec, vec::Vec},
hashbrown::HashSet,
};
/// Assembler
///
/// - Opcode-generic, instruction-type-specific methods are named `i_param_<type>`
/// - You likely won't need to use them, but they are here, just in case :)
/// - Instruction-specific methods are named `i_<instruction>`
pub struct Assembler {
pub buf: Vec<u8>,
pub sub: HashSet<usize>,
}
impl Default for Assembler {
fn default() -> Self {
Self {
buf: vec![0; 4],
sub: Default::default(),
}
}
}
hbbytecode::invoke_with_def!(macros::text::gen_text);
impl Assembler {
hbbytecode::invoke_with_def!(macros::asm::impl_asm);
/// Append 12 zeroes (UN) at the end and add magic to the beginning
///
/// # HoleyBytes lore
///
/// In the reference HBVM implementation, checks are done in
/// a separate phase before execution.
///
/// This way execution is much faster, as the checks have to
/// be done only once.
///
/// There was an issue: you cannot statically check register values, so a
/// `JAL` instruction could jump to a byte at the very end of the program, which
/// would be interpreted as some valid opcode, and the VM, in an attempt to decode
/// the rest of the instruction, would perform an out-of-bounds read, which is undefined behaviour.
///
/// Several options were considered to overcome this; the chosen one inserts some data at
/// the program's end which, when executed, leads to undesired behaviour, though
/// not undefined behaviour.
///
/// Newly created `UN` (as UNreachable) was chosen as
/// - It was a good idea to add some equivalent to `ud2` anyways
/// - It was chosen to be zero
/// - If you somehow reach that code, it will appropriately bail :)
/// - (yes, originally `NOP` was considered)
///
/// Why 12 bytes? That's the size of the largest instruction's parameter part.
pub fn finalise(&mut self) {
self.buf.extend([0; 12]);
self.buf[0..4].copy_from_slice(&0xAB1E0B_u32.to_le_bytes());
}
}
/// Immediate value
///
/// # Implementor notice
/// It should insert exactly 8 bytes, otherwise output will be malformed.
/// This is not checked in any way
pub trait Imm {
/// Insert immediate value
fn insert(&self, asm: &mut Assembler);
}
/// Implement immediate values
macro_rules! impl_imm_le_bytes {
($($ty:ty),* $(,)?) => {
$(
impl Imm for $ty {
#[inline(always)]
fn insert(&self, asm: &mut Assembler) {
// Convert to little-endian bytes, insert.
asm.buf.extend(self.to_le_bytes());
}
}
)*
};
}
impl_imm_le_bytes!(u64, i64, f64);
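As a usage illustration of the API above, a tiny program can be emitted through the generated `i_<instruction>` methods; this is a sketch assuming only what the documentation above states (the method naming convention, `finalise`, and the public `buf` field):

```rust
use hbasm::Assembler;

fn tiny_program() -> Vec<u8> {
    let mut asm = Assembler::default();
    // li r1, 1024  (LI is a BD-type instruction: one register + one immediate)
    asm.i_li(1, 1024_u64);
    // tx  (terminate execution, no operands)
    asm.i_tx();
    // Prepend the magic number and append the 12-byte UN padding
    asm.finalise();
    asm.buf
}
```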

89
hbasm/src/macros/asm.rs Normal file

@ -0,0 +1,89 @@
//! Macros to generate [`crate::Assembler`]
/// Incremental token-tree muncher to implement specific instruction
/// functions based on generic function for instruction type
macro_rules! impl_asm_opcodes {
( // End case
$generic:ident
($($param_i:ident: $param_ty:ty),*)
=> []
) => {};
(
$generic:ident
($($param_i:ident: $param_ty:ty),*)
=> [$opcode:ident, $($rest:tt)*]
) => {
// Instruction-specific function
paste::paste! {
#[inline(always)]
pub fn [<i_ $opcode:lower>](&mut self, $($param_i: $param_ty),*) {
self.$generic(hbbytecode::opcode::$opcode, $($param_i),*)
}
}
// And recurse!
macros::asm::impl_asm_opcodes!(
$generic($($param_i: $param_ty),*)
=> [$($rest)*]
);
};
}
/// Numeric value insert
macro_rules! impl_asm_insert {
// Immediate - this is trait-based,
// the insertion is delegated to its implementation
($self:expr, $id:ident, I) => {
Imm::insert(&$id, $self)
};
// Length - cannot be more than 2048
($self:expr, $id:ident, L) => {{
assert!($id <= 2048);
$self.buf.extend($id.to_le_bytes())
}};
// Other numbers, just insert their bytes, little endian
($self:expr, $id:ident, $_:ident) => {
$self.buf.extend($id.to_le_bytes())
};
}
/// Implement assembler
macro_rules! impl_asm {
(
$(
$ityn:ident
($($param_i:ident: $param_ty:ident),* $(,)?)
=> [$($opcode:ident),* $(,)?],
)*
) => {
paste::paste! {
$(
// Opcode-generic functions specific for instruction types
pub fn [<i_param_ $ityn>](&mut self, opcode: u8, $($param_i: macros::asm::ident_map_ty!($param_ty)),*) {
self.buf.push(opcode);
$(macros::asm::impl_asm_insert!(self, $param_i, $param_ty);)*
}
// Generate opcode-specific functions calling the opcode-generic ones
macros::asm::impl_asm_opcodes!(
[<i_param_ $ityn>]($($param_i: macros::asm::ident_map_ty!($param_ty)),*)
=> [$($opcode,)*]
);
)*
}
};
}
/// Map operand type to Rust type
#[rustfmt::skip]
macro_rules! ident_map_ty {
(R) => { u8 }; // Register is just u8
(I) => { impl Imm }; // Immediate is anything implementing the trait
(L) => { u16 }; // Copy count
($id:ident) => { $id }; // Anything else → identity map
}
pub(crate) use {ident_map_ty, impl_asm, impl_asm_insert, impl_asm_opcodes};
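For orientation, the expansion that `impl_asm!` and `impl_asm_opcodes!` produce for one BBB-type opcode (`ADD`) looks roughly like the hand-written equivalent below; a simplified sketch, not the literal macro output:

```rust
impl Assembler {
    /// Opcode-generic encoder for BBB-type instructions:
    /// one opcode byte followed by three register bytes.
    pub fn i_param_bbb(&mut self, opcode: u8, p0: u8, p1: u8, p2: u8) {
        self.buf.push(opcode);
        self.buf.extend(p0.to_le_bytes());
        self.buf.extend(p1.to_le_bytes());
        self.buf.extend(p2.to_le_bytes());
    }

    /// Opcode-specific wrapper generated by `impl_asm_opcodes!`.
    #[inline(always)]
    pub fn i_add(&mut self, p0: u8, p1: u8, p2: u8) {
        self.i_param_bbb(hbbytecode::opcode::ADD, p0, p1, p2)
    }
}
```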

6
hbasm/src/macros/mod.rs Normal file

@ -0,0 +1,6 @@
//! And here the land of macros begin.
//!
//! They do not bite, really. Have you seen what Yandros is writing?
pub mod asm;
pub mod text;

293
hbasm/src/macros/text.rs Normal file

@ -0,0 +1,293 @@
//! Macros to generate text-code assembler at [`crate::text`]
// Referring, from a module which generates a module, to that generated module — is that even legal? :D
/// Generate text code based assembler
macro_rules! gen_text {
(
$(
$ityn:ident
($($param_i:ident: $param_ty:ident),* $(,)?)
=> [$($opcode:ident),* $(,)?],
)*
) => {
/// # Text assembler
/// The generated text assembler simply calls methods on the [`crate::Assembler`] type.
///
/// # Syntax
/// ```text
/// instruction op1, op2, …
/// …
/// ```
/// - Opcode names are lowercase
/// - Registers are prefixed with `r` followed by number
/// - Operands are separated by `,`
/// - Instructions are separated by either a line feed or `;` (that is not a Greek question mark!)
/// - Labels are defined by their names followed by colon `label:`
/// - Labels are referenced simply by their names
/// - Immediates are numbers, can be negative, floats are not yet supported
pub mod text {
use {
crate::{
Assembler,
macros::text::*,
},
hashbrown::HashMap,
lasso::{Key, Rodeo, Spur},
logos::{Lexer, Logos, Span},
};
paste::paste!(literify::literify! {
/// Assembly token
#[derive(Clone, Copy, Debug, PartialEq, Eq, Logos)]
#[logos(extras = Rodeo)]
#[logos(skip r"[ \t\t]+")]
#[logos(skip r"-- .*")]
pub enum Token {
$($(#[token(~([<$opcode:lower>]), |_| hbbytecode::opcode::[<$opcode:upper>])])*)*
Opcode(u8),
#[regex("[0-9]+", |l| l.slice().parse().ok())]
#[regex(
"-[0-9]+",
|lexer| {
Some(u64::from_ne_bytes(lexer.slice().parse::<i64>().ok()?.to_ne_bytes()))
},
)] Integer(u64),
#[regex(
"r[0-9]+",
|lexer| match lexer.slice()[1..].parse() {
Ok(n) => Some(n),
_ => None
},
)] Register(u8),
#[regex(
r"\p{XID_Start}\p{XID_Continue}*:",
|lexer| lexer.extras.get_or_intern(&lexer.slice()[..lexer.slice().len() - 1]),
)] Label(Spur),
#[regex(
r"\p{XID_Start}\p{XID_Continue}*",
|lexer| lexer.extras.get_or_intern(lexer.slice()),
)] Symbol(Spur),
#[token("\n")]
#[token(";")] ISep,
#[token(",")] PSep,
}
});
/// Type of error
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub enum ErrorKind {
UnexpectedToken,
InvalidToken,
UnexpectedEnd,
InvalidSymbol,
}
/// Text assembly error
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct Error {
pub kind: ErrorKind,
pub span: Span,
}
/// Parse code and insert instructions
pub fn assemble(asm: &mut Assembler, code: &str) -> Result<(), Error> {
pub struct TextAsm<'a> {
asm: &'a mut Assembler,
lexer: Lexer<'a, Token>,
symloc: HashMap<Spur, usize>,
}
impl<'a> TextAsm<'a> {
fn next(&mut self) -> Result<Token, ErrorKind> {
match self.lexer.next() {
Some(Ok(t)) => Ok(t),
Some(Err(())) => Err(ErrorKind::InvalidToken),
None => Err(ErrorKind::UnexpectedEnd),
}
}
#[inline(always)]
fn run(&mut self) -> Result<(), ErrorKind> {
loop {
match self.lexer.next() {
// Got an opcode
Some(Ok(Token::Opcode(op))) => {
match op {
// Special-cased
hbbytecode::opcode::BRC => {
param_extract_itm!(
self,
p0: R,
p1: R,
p2: u8
);
self.asm.i_param_bbb(op, p0, p1, p2);
},
// Take all the opcodes and match them to their corresponding functions
$(
#[allow(unreachable_patterns)]
$(hbbytecode::opcode::$opcode)|* => paste::paste!({
param_extract_itm!(self, $($param_i: $param_ty),*);
self.asm.[<i_param_ $ityn>](op, $($param_i),*);
}),
)*
// Already matched in Logos, should not be able to obtain
// invalid opcode.
_ => unreachable!(),
}
}
// Insert label to table
Some(Ok(Token::Label(lbl))) => {
self.symloc.insert(lbl, self.asm.buf.len());
}
// Instruction separator (LF, ;)
Some(Ok(Token::ISep)) => (),
Some(Ok(_)) => return Err(ErrorKind::UnexpectedToken),
Some(Err(())) => return Err(ErrorKind::InvalidToken),
None => return Ok(()),
}
}
}
}
let mut asm = TextAsm {
asm,
lexer: Token::lexer(code),
symloc: HashMap::default(),
};
asm.run()
.map_err(|kind| Error { kind, span: asm.lexer.span() })?;
// Walk table and substitute labels
// for their addresses
for &loc in &asm.asm.sub {
// Extract indices from the code and get addresses from table
let val = asm.symloc
.get(
&Spur::try_from_usize(bytemuck::pod_read_unaligned::<u64>(
&asm.asm.buf[loc..loc + core::mem::size_of::<u64>()]) as _
).unwrap()
)
.ok_or(Error { kind: ErrorKind::InvalidSymbol, span: 0..0 })?
.to_le_bytes();
// New address
asm.asm.buf[loc..]
.iter_mut()
.zip(val)
.for_each(|(dst, src)| *dst = src);
}
Ok(())
}
// Fun fact: this is a little hack.
// It may slow things down a little bit, but
// it makes the macro pretty nice.
//
// If you have any idea how to get rid of this,
// contributions are welcome :)
// I *likely* won't try anymore.
enum InternalImm {
Const(u64),
Named(Spur),
}
impl $crate::Imm for InternalImm {
#[inline]
fn insert(&self, asm: &mut Assembler) {
match self {
// Constant immediate, just put it in
Self::Const(a) => a.insert(asm),
// Label
Self::Named(a) => {
// Record in the sub table that a substitution
// will be requested at this position
asm.sub.insert(asm.buf.len());
// Insert value from interner in place
asm.buf.extend((a.into_usize() as u64).to_le_bytes());
},
}
}
}
}
};
}
/// Extract item by pattern, otherwise return [`ErrorKind::UnexpectedToken`]
macro_rules! extract_pat {
($self:expr, $pat:pat) => {
let $pat = $self.next()?
else { return Err(ErrorKind::UnexpectedToken) };
};
}
/// Generate extract macro
macro_rules! gen_extract {
// Integer types have same body
($($int:ident),* $(,)?) => {
/// Extract operand from code
macro_rules! extract {
// Register (require prefixing with r)
($self:expr, R, $id:ident) => {
extract_pat!($self, Token::Register($id));
};
($self:expr, L, $id:ident) => {
extract_pat!($self, Token::Integer($id));
if $id > 2048 {
return Err(ErrorKind::InvalidToken);
}
let $id = u16::try_from($id).unwrap();
};
// Immediate
($self:expr, I, $id:ident) => {
let $id = match $self.next()? {
// Either straight up integer
Token::Integer(a) => InternalImm::Const(a),
// …or a label
Token::Symbol(a) => InternalImm::Named(a),
_ => return Err(ErrorKind::UnexpectedToken),
};
};
// Get $int, if not fitting, the token is claimed invalid
$(($self:expr, $int, $id:ident) => {
extract_pat!($self, Token::Integer($id));
let $id = $int::try_from($id).map_err(|_| ErrorKind::InvalidToken)?;
});*;
}
};
}
gen_extract!(u8, u16, u32);
/// Parameter extract incremental token-tree muncher
///
/// What else would it mean?
macro_rules! param_extract_itm {
($self:expr, $($id:ident: $ty:ident)? $(, $($tt:tt)*)?) => {
// Extract pattern
$(extract!($self, $ty, $id);)?
$(
// Require operand separator
extract_pat!($self, Token::PSep);
// And go to the next (recursive)
// …munch munch… yummy token trees.
param_extract_itm!($self, $($tt)*);
)?
};
}
pub(crate) use {extract, extract_pat, gen_text, param_extract_itm};
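A condensed driver for the generated text assembler, mirroring `hbasm/src/main.rs` further below but without the `ariadne` report formatting; a sketch rather than code from this change:

```rust
use hbasm::Assembler;

fn assemble_str(code: &str) -> Result<Vec<u8>, hbasm::text::Error> {
    let mut asm = Assembler::default();
    // Parse text such as "li r1, 5" / "tx" / labels into bytecode
    hbasm::text::assemble(&mut asm, code)?;
    // Add the magic prefix and the trailing UN padding
    asm.finalise();
    Ok(asm.buf)
}
```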

56
hbasm/src/main.rs Normal file

@ -0,0 +1,56 @@
use std::io::Write;
use hbasm::Assembler;
use {
ariadne::{ColorGenerator, Label, Report, ReportKind, Source},
std::{
error::Error,
io::{stdin, Read},
},
};
fn main() -> Result<(), Box<dyn Error>> {
let mut code = String::new();
stdin().read_to_string(&mut code)?;
let mut assembler = Assembler::default();
if let Err(e) = hbasm::text::assemble(&mut assembler, &code) {
let mut colors = ColorGenerator::new();
let e_code = match e.kind {
hbasm::text::ErrorKind::UnexpectedToken => 1,
hbasm::text::ErrorKind::InvalidToken => 2,
hbasm::text::ErrorKind::UnexpectedEnd => 3,
hbasm::text::ErrorKind::InvalidSymbol => 4,
};
let message = match e.kind {
hbasm::text::ErrorKind::UnexpectedToken => "This token is not expected!",
hbasm::text::ErrorKind::InvalidToken => "The token is not valid!",
hbasm::text::ErrorKind::UnexpectedEnd => {
"The assembler reached the end of input unexpectedly!"
}
hbasm::text::ErrorKind::InvalidSymbol => {
"This referenced symbol doesn't have a corresponding label!"
}
};
let a = colors.next();
Report::build(ReportKind::Error, "engine_internal", e.span.clone().start)
.with_code(e_code)
.with_message(format!("{:?}", e.kind))
.with_label(
Label::new(("engine_internal", e.span))
.with_message(message)
.with_color(a),
)
.finish()
.eprint(("engine_internal", Source::from(&code)))
.unwrap();
} else {
assembler.finalise();
std::io::stdout().lock().write_all(&assembler.buf).unwrap();
}
Ok(())
}

6
hbbytecode/Cargo.toml Normal file

@ -0,0 +1,6 @@
[package]
name = "hbbytecode"
version = "0.1.0"
edition = "2021"
[dependencies]

68
hbbytecode/hbbytecode.h Normal file

@ -0,0 +1,68 @@
/* HoleyBytes Bytecode representation in C
* Requires C23 compiler or better
*
* Uses MSVC pack pragma extension,
* proved to work with Clang and GNU® GCC.
*/
#pragma once
#include <assert.h>
#include <limits.h>
#include <stdint.h>
static_assert(CHAR_BIT == 8, "Cursed architectures are not supported");
enum hbbc_Opcode: uint8_t {
hbbc_Op_UN , hbbc_Op_TX , hbbc_Op_NOP , hbbc_Op_ADD , hbbc_Op_SUB , hbbc_Op_MUL ,
hbbc_Op_AND , hbbc_Op_OR , hbbc_Op_XOR , hbbc_Op_SL , hbbc_Op_SR , hbbc_Op_SRS ,
hbbc_Op_CMP , hbbc_Op_CMPU , hbbc_Op_DIR , hbbc_Op_NEG , hbbc_Op_NOT , hbbc_Op_ADDI ,
hbbc_Op_MULI , hbbc_Op_ANDI , hbbc_Op_ORI , hbbc_Op_XORI , hbbc_Op_SLI , hbbc_Op_SRI ,
hbbc_Op_SRSI , hbbc_Op_CMPI , hbbc_Op_CMPUI , hbbc_Op_CP , hbbc_Op_SWA , hbbc_Op_LI ,
hbbc_Op_LD , hbbc_Op_ST , hbbc_Op_BMC , hbbc_Op_BRC , hbbc_Op_JMP , hbbc_Op_JAL ,
hbbc_Op_JEQ , hbbc_Op_JNE , hbbc_Op_JLT , hbbc_Op_JGT , hbbc_Op_JLTU , hbbc_Op_JGTU ,
hbbc_Op_ECALL , hbbc_Op_ADDF , hbbc_Op_SUBF , hbbc_Op_MULF , hbbc_Op_DIRF , hbbc_Op_FMAF ,
hbbc_Op_NEGF , hbbc_Op_ITF , hbbc_Op_FTI , hbbc_Op_ADDFI , hbbc_Op_MULFI ,
} typedef hbbc_Opcode;
static_assert(sizeof(hbbc_Opcode) == 1);
#pragma pack(push, 1)
struct hbbc_ParamBBBB
{ uint8_t _0; uint8_t _1; uint8_t _2; uint8_t _3; }
typedef hbbc_ParamBBBB;
static_assert(sizeof(hbbc_ParamBBBB) == 32 / 8);
struct hbbc_ParamBBB
{ uint8_t _0; uint8_t _1; uint8_t _2; }
typedef hbbc_ParamBBB;
static_assert(sizeof(hbbc_ParamBBB) == 24 / 8);
struct hbbc_ParamBBDH
{ uint8_t _0; uint8_t _1; uint64_t _2; uint16_t _3; }
typedef hbbc_ParamBBDH;
static_assert(sizeof(hbbc_ParamBBDH) == 96 / 8);
struct hbbc_ParamBBD
{ uint8_t _0; uint8_t _1; uint64_t _2; }
typedef hbbc_ParamBBD;
static_assert(sizeof(hbbc_ParamBBD) == 80 / 8);
struct hbbc_ParamBBW
{ uint8_t _0; uint8_t _1; uint32_t _2; }
typedef hbbc_ParamBBW;
static_assert(sizeof(hbbc_ParamBBW) == 48 / 8);
struct hbbc_ParamBB
{ uint8_t _0; uint8_t _1; }
typedef hbbc_ParamBB;
static_assert(sizeof(hbbc_ParamBB) == 16 / 8);
struct hbbc_ParamBD
{ uint8_t _0; uint64_t _1; }
typedef hbbc_ParamBD;
static_assert(sizeof(hbbc_ParamBD) == 72 / 8);
typedef uint64_t hbbc_ParamD;
static_assert(sizeof(hbbc_ParamD) == 64 / 8);
#pragma pack(pop)


@ -0,0 +1,170 @@
//! Generate HoleyBytes code validator
macro_rules! gen_valider {
(
$(
$ityn:ident
($($param_i:ident: $param_ty:ident),* $(,)?)
=> [$($opcode:ident),* $(,)?],
)*
) => {
#[allow(unreachable_code)]
pub mod valider {
//! Validate if program is sound to execute
/// Program validation error kind
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum ErrorKind {
/// Unknown opcode
InvalidInstruction,
/// VM doesn't implement this valid opcode
Unimplemented,
/// Attempted to copy over register boundary
RegisterArrayOverflow,
/// Program is not validly terminated
InvalidEnd,
/// Program misses magic
MissingMagic
}
/// Error
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct Error {
/// Kind
pub kind: ErrorKind,
/// Location in bytecode
pub index: usize,
}
/// Perform bytecode validation. If it passes, the program should be
/// sound to execute.
pub fn validate(mut program: &[u8]) -> Result<(), Error> {
// Validate magic
if program.get(0..4) != Some(&0xAB1E0B_u32.to_le_bytes()) {
return Err(Error {
kind: ErrorKind::MissingMagic,
index: 0,
});
}
// The program has to end with 12 zeroes; if there are fewer than
// 12 bytes, the program is invalid.
if program.len() < 12 {
return Err(Error {
kind: ErrorKind::InvalidEnd,
index: 0,
});
}
// Verify that program ends with 12 zeroes
for (index, item) in program.iter().enumerate().skip(program.len() - 12) {
if *item != 0 {
return Err(Error {
kind: ErrorKind::InvalidEnd,
index,
});
}
}
let start = program;
program = &program[4..];
loop {
use crate::opcode::*;
program = match program {
// End of program
[] => return Ok(()),
// Memory load/store cannot go out-of-bounds register array
// B B D1 D2 D3 D4 D5 D6 D7 D8 H1 H2
[LD..=ST, reg, _, _, _, _, _, _, _, _, _, count_0, count_1, ..]
if usize::from(*reg) * 8
+ usize::from(u16::from_le_bytes([*count_0, *count_1]))
> 2048 =>
{
return Err(Error {
kind: ErrorKind::RegisterArrayOverflow,
index: (program.as_ptr() as usize) - (start.as_ptr() as usize),
});
}
// Block register copy cannot go out-of-bounds register array
[BRC, src, dst, count, ..]
if src.checked_add(*count).is_none()
|| dst.checked_add(*count).is_none() =>
{
return Err(Error {
kind: ErrorKind::RegisterArrayOverflow,
index: (program.as_ptr() as usize) - (start.as_ptr() as usize),
});
}
$(
$crate::gen_valider::inst_chk!(
rest, $ityn, $($opcode),*
)
)|* => rest,
// The plebs
_ => {
return Err(Error {
kind: ErrorKind::InvalidInstruction,
index: (program.as_ptr() as usize) - (start.as_ptr() as usize),
})
}
}
}
}
}
};
}
/// Generate instruction check pattern
macro_rules! inst_chk {
// Sadly this has hardcoded instruction types,
// as I cannot generate parts of patterns
($rest:ident, bbbb, $($opcode:ident),*) => {
// B B B B
[$($opcode)|*, _, _, _, _, $rest @ ..]
};
($rest:ident, bbb, $($opcode:ident),*) => {
// B B B
[$($opcode)|*, _, _, _, $rest @ ..]
};
($rest:ident, bbdh, $($opcode:ident),*) => {
// B B D1 D2 D3 D4 D5 D6 D7 D8 H1 H2
[$($opcode)|*, _, _, _, _, _, _, _, _, _, _, _, _, $rest @ ..]
};
($rest:ident, bbd, $($opcode:ident),*) => {
// B B D1 D2 D3 D4 D5 D6 D7 D8
[$($opcode)|*, _, _, _, _, _, _, _, _, _, _, $rest @ ..]
};
($rest:ident, bbw, $($opcode:ident),*) => {
// B B W1 W2 W3 W4
[$($opcode)|*, _, _, _, _, _, _, $rest @ ..]
};
($rest:ident, bb, $($opcode:ident),*) => {
// B B
[$($opcode)|*, _, _, $rest @ ..]
};
($rest:ident, bd, $($opcode:ident),*) => {
// B D1 D2 D3 D4 D5 D6 D7 D8
[$($opcode)|*, _, _, _, _, _, _, _, _, _, $rest @ ..]
};
($rest:ident, n, $($opcode:ident),*) => {
[$($opcode)|*, $rest @ ..]
};
($_0:ident, $($_1:ident),*) => {
compile_error!("Invalid instruction type");
}
}
pub(crate) use {gen_valider, inst_chk};
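The generated `valider` module is consumed elsewhere in this change set (the fuzz target and `hbvm/src/main.rs`); a condensed sketch of a caller:

```rust
fn check(program: &[u8]) {
    match hbbytecode::valider::validate(program) {
        Ok(()) => println!("program passed validation"),
        // `index` is the byte offset at which validation failed
        Err(e) => eprintln!("validation error {:?} at byte {}", e.kind, e.index),
    }
}
```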

163
hbbytecode/src/lib.rs Normal file

@ -0,0 +1,163 @@
#![no_std]
mod gen_valider;
macro_rules! constmod {
($vis:vis $mname:ident($repr:ty) {
$(#![doc = $mdoc:literal])?
$($cname:ident = $val:expr $(,$doc:literal)?;)*
}) => {
$(#[doc = $mdoc])?
$vis mod $mname {
$(
$(#[doc = $doc])?
pub const $cname: $repr = $val;
)*
}
};
}
#[allow(rustdoc::invalid_rust_codeblocks)]
/// Invoke macro with bytecode definition
/// # Input syntax
/// ```no_run
/// macro!(
/// INSTRUCTION_TYPE(p0: TYPE, p1: TYPE, …)
/// => [INSTRUCTION_A, INSTRUCTION_B, …],
/// …
/// );
/// ```
/// - Instruction type determines opcode-generic, instruction-type-specific
/// function. Name: `i_param_INSTRUCTION_TYPE`
/// - Per instruction, an opcode-specific function is generated that calls the generic one
/// - Operand types
/// - R: Register (u8)
/// - I: Immediate
/// - L: Memory load / store size (u16)
/// - Other types are identity-mapped
///
/// # BRC special-case
/// BRC's 3rd operand is a plain byte, not a register. The encoding is the same, but in some cases it may matter.
///
/// Please, if your API distinguishes between bytes and registers, special-case this one.
///
/// Sorry for that :(
#[macro_export]
macro_rules! invoke_with_def {
($macro:path) => {
$macro!(
bbbb(p0: R, p1: R, p2: R, p3: R)
=> [DIR, DIRF, FMAF],
bbb(p0: R, p1: R, p2: R)
=> [ADD, SUB, MUL, AND, OR, XOR, SL, SR, SRS, CMP, CMPU, BRC, ADDF, SUBF, MULF],
bbdh(p0: R, p1: R, p2: I, p3: L)
=> [LD, ST],
bbd(p0: R, p1: R, p2: I)
=> [ADDI, MULI, ANDI, ORI, XORI, CMPI, CMPUI, BMC, JAL, JEQ, JNE, JLT, JGT, JLTU,
JGTU, ADDFI, MULFI],
bbw(p0: R, p1: R, p2: u32)
=> [SLI, SRI, SRSI],
bb(p0: R, p1: R)
=> [NEG, NOT, CP, SWA, NEGF, ITF, FTI],
bd(p0: R, p1: I)
=> [LI],
n()
=> [UN, TX, NOP, ECALL],
);
};
}
invoke_with_def!(gen_valider::gen_valider);
constmod!(pub opcode(u8) {
//! Opcode constant module
UN = 0, "N; Raises a trap";
TX = 1, "N; Terminate execution";
NOP = 2, "N; Do nothing";
ADD = 3, "BBB; #0 ← #1 + #2";
SUB = 4, "BBB; #0 ← #1 - #2";
MUL = 5, "BBB; #0 ← #1 × #2";
AND = 6, "BBB; #0 ← #1 & #2";
OR = 7, "BBB; #0 ← #1 | #2";
XOR = 8, "BBB; #0 ← #1 ^ #2";
SL = 9, "BBB; #0 ← #1 « #2";
SR = 10, "BBB; #0 ← #1 » #2";
SRS = 11, "BBB; #0 ← #1 » #2 (signed)";
CMP = 12, "BBB; #0 ← #1 <=> #2";
CMPU = 13, "BBB; #0 ← #1 <=> #2 (unsigned)";
DIR = 14, "BBBB; #0 ← #2 / #3, #1 ← #2 % #3";
NEG = 15, "BB; #0 ← -#1";
NOT = 16, "BB; #0 ← !#1";
ADDI = 17, "BBD; #0 ← #1 + imm #2";
MULI = 18, "BBD; #0 ← #1 × imm #2";
ANDI = 19, "BBD; #0 ← #1 & imm #2";
ORI = 20, "BBD; #0 ← #1 | imm #2";
XORI = 21, "BBD; #0 ← #1 ^ imm #2";
SLI = 22, "BBW; #0 ← #1 « imm #2";
SRI = 23, "BBW; #0 ← #1 » imm #2";
SRSI = 24, "BBW; #0 ← #1 » imm #2 (signed)";
CMPI = 25, "BBD; #0 ← #1 <=> imm #2";
CMPUI = 26, "BBD; #0 ← #1 <=> imm #2 (unsigned)";
CP = 27, "BB; Copy #0 ← #1";
SWA = 28, "BB; Swap #0 and #1";
LI = 29, "BD; #0 ← imm #1";
LD = 30, "BBDB; #0 ← [#1 + imm #3], imm #4 bytes, overflowing";
ST = 31, "BBDB; [#1 + imm #3] ← #0, imm #4 bytes, overflowing";
BMC = 32, "BBD; [#0] ← [#1], imm #2 bytes";
BRC = 33, "BBB; #0 ← #1, imm #2 registers";
JMP = 34, "D; Unconditional, non-linking absolute jump";
JAL = 35, "BD; Copy PC to #0 and unconditional jump [#1 + imm #2]";
JEQ = 36, "BBD; if #0 = #1 → jump imm #2";
JNE = 37, "BBD; if #0 ≠ #1 → jump imm #2";
JLT = 38, "BBD; if #0 < #1 → jump imm #2";
JGT = 39, "BBD; if #0 > #1 → jump imm #2";
JLTU = 40, "BBD; if #0 < #1 → jump imm #2 (unsigned)";
JGTU = 41, "BBD; if #0 > #1 → jump imm #2 (unsigned)";
ECALL = 42, "N; Issue system call";
ADDF = 43, "BBB; #0 ← #1 +. #2";
SUBF = 44, "BBB; #0 ← #1 -. #2";
MULF = 45, "BBB; #0 ← #1 *. #2";
DIRF = 46, "BBBB; #0 ← #2 / #3, #1 ← #2 % #3";
FMAF = 47, "BBBB; #0 ← (#1 * #2) + #3";
NEGF = 48, "BB; #0 ← -#1";
ITF = 49, "BB; #0 ← #1 as float";
FTI = 50, "BB; #0 ← #1 as int";
ADDFI = 51, "BBD; #0 ← #1 +. imm #2";
MULFI = 52, "BBD; #0 ← #1 *. imm #2";
});
#[repr(packed)]
pub struct ParamBBBB(pub u8, pub u8, pub u8, pub u8);
#[repr(packed)]
pub struct ParamBBB(pub u8, pub u8, pub u8);
#[repr(packed)]
pub struct ParamBBDH(pub u8, pub u8, pub u64, pub u16);
#[repr(packed)]
pub struct ParamBBD(pub u8, pub u8, pub u64);
#[repr(packed)]
pub struct ParamBBW(pub u8, pub u8, pub u32);
#[repr(packed)]
pub struct ParamBB(pub u8, pub u8);
#[repr(packed)]
pub struct ParamBD(pub u8, pub u64);
/// # Safety
/// Has to be valid to be decoded from bytecode.
pub unsafe trait ProgramVal {}
unsafe impl ProgramVal for ParamBBBB {}
unsafe impl ProgramVal for ParamBBB {}
unsafe impl ProgramVal for ParamBBDH {}
unsafe impl ProgramVal for ParamBBD {}
unsafe impl ProgramVal for ParamBBW {}
unsafe impl ProgramVal for ParamBB {}
unsafe impl ProgramVal for ParamBD {}
unsafe impl ProgramVal for u64 {}
unsafe impl ProgramVal for u8 {} // Opcode
unsafe impl ProgramVal for () {}
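Putting the opcode constants and the parameter layouts together, the raw byte encoding of a tiny program can be written out by hand; a sketch (immediates are little-endian, and a complete program additionally needs the `0xAB1E0B` magic prefix and the trailing 12 zero bytes that the assembler's `finalise` adds):

```rust
fn main() {
    // li r1, 5       -> BD : opcode, register, 8-byte immediate
    // add r1, r1, r2 -> BBB: opcode, three register bytes
    // tx             -> N  : opcode only
    let encoded: Vec<u8> = vec![
        29, 1, 5, 0, 0, 0, 0, 0, 0, 0, // LI = 29
        3, 1, 1, 2, // ADD = 3
        1, // TX = 1
    ];
    assert_eq!(encoded.len(), 15);
}
```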


@@ -3,8 +3,13 @@ name = "hbvm"
version = "0.1.0"
edition = "2021"
-# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
+[profile.release]
+lto = true
+[features]
+default = ["alloc"]
+alloc = []
+nightly = []
[dependencies]
-log = "*"
-hashbrown = "0.13.2"
+hbbytecode.path = "../hbbytecode"

0
hbvm/README.md Normal file

BIN
hbvm/assets/add.hb Normal file

Binary file not shown.

Binary file not shown.

BIN
hbvm/assets/ecall.hb Normal file

Binary file not shown.

Binary file not shown.

BIN
hbvm/assets/memory.hb Normal file

Binary file not shown.

5
hbvm/fuzz/.gitignore vendored Normal file

@ -0,0 +1,5 @@
target
artifacts
corpus
coverage
Cargo.lock

30
hbvm/fuzz/Cargo.toml Normal file

@ -0,0 +1,30 @@
[package]
name = "hbvm-fuzz"
version = "0.0.0"
publish = false
edition = "2021"
[package.metadata]
cargo-fuzz = true
[dependencies]
libfuzzer-sys = "0.4"
[dependencies.hbvm]
path = ".."
[dependencies.hbbytecode]
path = "../../hbbytecode"
# Prevent this from interfering with workspaces
[workspace]
members = ["."]
[profile.release]
debug = 1
[[bin]]
name = "vm"
path = "fuzz_targets/vm.rs"
test = false
doc = false


@ -0,0 +1,85 @@
#![no_main]
use {
hbbytecode::valider::validate,
hbvm::{
mem::{
softpaging::{
paging::{PageTable, Permission},
HandlePageFault, PageSize, SoftPagedMem,
},
Address, MemoryAccessReason,
},
Vm,
},
libfuzzer_sys::fuzz_target,
};
fuzz_target!(|data: &[u8]| {
if validate(data).is_ok() {
let mut vm = unsafe {
Vm::<_, 16384>::new(
SoftPagedMem::<_, true> {
pf_handler: TestTrapHandler,
program: data,
root_pt: Box::into_raw(Default::default()),
icache: Default::default(),
},
Address::new(4),
)
};
// Alloc and map some memory
let pages = [
alloc_and_map(&mut vm.memory, 0),
alloc_and_map(&mut vm.memory, 4096),
];
// Run VM
let _ = vm.run();
// Unmap and dealloc the memory
for (i, page) in pages.into_iter().enumerate() {
unmap_and_dealloc(&mut vm.memory, page, i as u64 * 4096);
}
let _ = unsafe { Box::from_raw(vm.memory.root_pt) };
}
});
fn alloc_and_map(memory: &mut SoftPagedMem<TestTrapHandler>, at: u64) -> *mut u8 {
let ptr = Box::into_raw(Box::<Page>::default()).cast();
unsafe {
memory
.map(ptr, Address::new(at), Permission::Write, PageSize::Size4K)
.unwrap()
};
ptr
}
fn unmap_and_dealloc(memory: &mut SoftPagedMem<TestTrapHandler>, ptr: *mut u8, from: u64) {
memory.unmap(Address::new(from)).unwrap();
let _ = unsafe { Box::from_raw(ptr.cast::<Page>()) };
}
#[repr(align(4096))]
struct Page([u8; 4096]);
impl Default for Page {
fn default() -> Self {
unsafe { std::mem::MaybeUninit::zeroed().assume_init() }
}
}
struct TestTrapHandler;
impl HandlePageFault for TestTrapHandler {
fn page_fault(
&mut self,
_: MemoryAccessReason,
_: &mut PageTable,
_: Address,
_: PageSize,
_: *mut u8,
) -> bool {
false
}
}

135
hbvm/src/bmc.rs Normal file

@ -0,0 +1,135 @@
//! Block memory copier state machine
use {
super::{mem::MemoryAccessReason, Memory, VmRunError},
crate::mem::Address,
core::{mem::MaybeUninit, task::Poll},
};
/// Buffer size (defaults to 4 KiB, the smallest page size on most platforms)
const BUF_SIZE: usize = 4096;
/// Buffer of possibly uninitialised bytes, aligned to [`BUF_SIZE`]
#[repr(align(4096))]
struct AlignedBuf([MaybeUninit<u8>; BUF_SIZE]);
/// State for block memory copy
pub struct BlockCopier {
/// Source address
src: Address,
/// Destination address
dst: Address,
/// How many buffer sizes to copy?
n_buffers: usize,
/// …and what remains after?
rem: usize,
}
impl BlockCopier {
/// Construct a new one
#[inline]
pub fn new(src: Address, dst: Address, count: usize) -> Self {
Self {
src,
dst,
n_buffers: count / BUF_SIZE,
rem: count % BUF_SIZE,
}
}
/// Copy one block
///
/// # Safety
/// - Same as for [`Memory::load`] and [`Memory::store`]
pub unsafe fn poll(&mut self, memory: &mut impl Memory) -> Poll<Result<(), BlkCopyError>> {
// Safety: Assuming uninit of array of MaybeUninit is sound
let mut buf = AlignedBuf(MaybeUninit::uninit().assume_init());
// We have at least one buffer size to copy
if self.n_buffers != 0 {
if let Err(e) = act(
memory,
self.src,
self.dst,
buf.0.as_mut_ptr().cast(),
BUF_SIZE,
) {
return Poll::Ready(Err(e));
}
// Bump source and destination address
self.src += BUF_SIZE;
self.dst += BUF_SIZE;
self.n_buffers -= 1;
return if self.n_buffers + self.rem == 0 {
// If there is nothing left, we are done
Poll::Ready(Ok(()))
} else {
// Otherwise, advise the caller to run it again
Poll::Pending
};
}
if self.rem != 0 {
if let Err(e) = act(
memory,
self.src,
self.dst,
buf.0.as_mut_ptr().cast(),
self.rem,
) {
return Poll::Ready(Err(e));
}
}
Poll::Ready(Ok(()))
}
}
/// Load to buffer and store from buffer
#[inline]
unsafe fn act(
memory: &mut impl Memory,
src: Address,
dst: Address,
buf: *mut u8,
count: usize,
) -> Result<(), BlkCopyError> {
// Load to buffer
memory
.load(src, buf, count)
.map_err(|super::mem::LoadError(addr)| BlkCopyError {
access_reason: MemoryAccessReason::Load,
addr,
})?;
// Store from buffer
memory
.store(dst, buf, count)
.map_err(|super::mem::StoreError(addr)| BlkCopyError {
access_reason: MemoryAccessReason::Store,
addr,
})?;
Ok(())
}
/// Error that occurred when copying a block of memory
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct BlkCopyError {
/// Kind of access
access_reason: MemoryAccessReason,
/// VM Address
addr: Address,
}
impl From<BlkCopyError> for VmRunError {
fn from(value: BlkCopyError) -> Self {
match value.access_reason {
MemoryAccessReason::Load => Self::LoadAccessEx(value.addr),
MemoryAccessReason::Store => Self::StoreAccessEx(value.addr),
}
}
}
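The `Poll`-based interface above implies a small driving loop in the caller; a sketch of resuming the copier until it finishes, assuming a `memory` value implementing this crate's `Memory` trait:

```rust
use core::task::Poll;

/// Drive a BlockCopier to completion.
///
/// # Safety
/// Same preconditions as `Memory::load` and `Memory::store`.
unsafe fn copy_to_end(copier: &mut BlockCopier, memory: &mut impl Memory) -> Result<(), BlkCopyError> {
    loop {
        match copier.poll(memory) {
            Poll::Ready(result) => return result,
            // One buffer-sized chunk was copied; resume for the next one
            // (a real caller could interleave other work here).
            Poll::Pending => continue,
        }
    }
}
```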


@ -1,2 +0,0 @@
pub mod ops;
pub mod types;


@ -1,68 +0,0 @@
#[repr(u8)]
pub enum Operations {
NOP = 0,
ADD = 1,
SUB = 2,
MUL = 3,
DIV = 4,
MOD = 5,
AND = 6,
OR = 7,
XOR = 8,
NOT = 9,
// LOADs a memory address/constant into a register
LOAD = 15,
// STOREs a register/constant into a memory address
STORE = 16,
MapPage = 17,
UnmapPage = 18,
// SHIFT LEFT 16 A0
Shift = 20,
JUMP = 100,
JumpCond = 101,
RET = 103,
EnviromentCall = 255,
}
pub enum PageMapTypes {
// Have the host make a new VMPage
VMPage = 0,
// Ask the host to map a RealPage into memory
RealPage = 1,
}
pub enum MathOpSubTypes {
Unsigned = 0,
Signed = 1,
FloatingPoint = 2,
}
pub enum MathOpSides {
RegisterConstant = 0,
RegisterRegister = 1,
ConstantConstant = 2,
ConstantRegister = 3,
}
pub enum RWSubTypes {
AddrToReg = 0,
RegToAddr,
ConstToReg,
ConstToAddr,
}
pub enum JumpConditionals {
Equal = 0,
NotEqual = 1,
LessThan = 2,
LessThanOrEqualTo = 3,
GreaterThan = 4,
GreaterThanOrEqualTo = 5,
}


@ -1,9 +0,0 @@
pub const CONST_U8: u8 = 0x00;
pub const CONST_I8: i8 = 0x01;
pub const CONST_U64: u8 = 0x02;
pub const CONST_I64: u8 = 0x03;
pub const CONST_F64: u8 = 0x04;
pub const ADDRESS: u8 = 0x05;


@ -1,6 +0,0 @@
use alloc::vec::Vec;
pub type CallStack = Vec<FnCall>;
pub struct FnCall {
pub ret: usize,
}


@ -1,13 +0,0 @@
pub struct EngineConfig {
pub call_stack_depth: usize,
pub quantum: u32,
}
impl EngineConfig {
pub fn default() -> Self {
Self {
call_stack_depth: 32,
quantum: 0,
}
}
}


@ -1,3 +0,0 @@
use super::Engine;
pub type EnviromentCall = fn(&mut Engine) -> Result<&mut Engine, u64>;


@ -1,100 +0,0 @@
pub mod call_stack;
pub mod config;
pub mod enviroment_calls;
pub mod regs;
#[cfg(test)]
pub mod tests;
use {
self::call_stack::CallStack,
crate::{memory, HaltStatus, RuntimeErrors},
alloc::vec::Vec,
config::EngineConfig,
log::trace,
regs::Registers,
};
// pub const PAGE_SIZE: usize = 8192;
pub struct RealPage {
pub ptr: *mut u8,
}
#[derive(Debug, Clone, Copy)]
pub struct VMPage {
pub data: [u8; 8192],
}
impl VMPage {
pub fn new() -> Self {
Self {
data: [0; 4096 * 2],
}
}
}
pub enum Page {
VMPage(VMPage),
RealPage(RealPage),
}
impl Page {
pub fn data(&self) -> [u8; 4096 * 2] {
match self {
Page::VMPage(vmpage) => vmpage.data,
Page::RealPage(_) => {
unimplemented!("Memmapped hw page not yet supported")
}
}
}
}
pub fn empty_enviroment_call(engine: &mut Engine) -> Result<&mut Engine, u64> {
trace!("Registers {:?}", engine.registers);
Err(0)
}
pub struct Engine {
pub index: usize,
pub program: Vec<u8>,
pub registers: Registers,
pub config: EngineConfig,
/// BUG: This DOES NOT account for overflowing
pub last_timer_count: u32,
pub timer_callback: Option<fn() -> u32>,
pub memory: memory::Memory,
pub enviroment_call_table: [Option<EnviromentCall>; 256],
pub call_stack: CallStack,
}
use crate::engine::enviroment_calls::EnviromentCall;
impl Engine {
pub fn set_timer_callback(&mut self, func: fn() -> u32) {
self.timer_callback = Some(func);
}
pub fn set_register(&mut self, register: u8, value: u64) {}
}
impl Engine {
pub fn new(program: Vec<u8>) -> Self {
let mut mem = memory::Memory::new();
for (addr, byte) in program.clone().into_iter().enumerate() {
let _ = mem.set_addr8(addr as u64, byte);
}
trace!("{:?}", mem.read_addr8(0));
let ecall_table: [Option<EnviromentCall>; 256] = [None; 256];
Self {
index: 0,
program,
registers: Registers::new(),
config: EngineConfig::default(),
last_timer_count: 0,
timer_callback: None,
enviroment_call_table: ecall_table,
memory: mem,
call_stack: Vec::new(),
}
}
pub fn dump(&self) {}
pub fn run(&mut self) -> Result<HaltStatus, RuntimeErrors> {
Ok(HaltStatus::Halted)
}
}


@ -1,32 +0,0 @@
#[rustfmt::skip]
#[derive(Debug, Clone, Copy)]
pub struct Registers {
pub a0: u64, pub b0: u64, pub c0: u64, pub d0: u64, pub e0: u64, pub f0: u64,
pub a1: u64, pub b1: u64, pub c1: u64, pub d1: u64, pub e1: u64, pub f1: u64,
pub a2: u64, pub b2: u64, pub c2: u64, pub d2: u64, pub e2: u64, pub f2: u64,
pub a3: u64, pub b3: u64, pub c3: u64, pub d3: u64, pub e3: u64, pub f3: u64,
pub a4: u64, pub b4: u64, pub c4: u64, pub d4: u64, pub e4: u64, pub f4: u64,
pub a5: u64, pub b5: u64, pub c5: u64, pub d5: u64, pub e5: u64, pub f5: u64,
pub a6: u64, pub b6: u64, pub c6: u64, pub d6: u64, pub e6: u64, pub f6: u64,
pub a7: u64, pub b7: u64, pub c7: u64, pub d7: u64, pub e7: u64, pub f7: u64,
pub a8: u64, pub b8: u64, pub c8: u64, pub d8: u64, pub e8: u64, pub f8: u64,
pub a9: u64, pub b9: u64, pub c9: u64, pub d9: u64, pub e9: u64, pub f9: u64,
}
impl Registers {
#[rustfmt::skip]
pub fn new() -> Self{
Self {
a0: 0, b0: 0, c0: 0, d0: 0, e0: 0, f0: 0,
a1: 0, b1: 0, c1: 0, d1: 0, e1: 0, f1: 0,
a2: 0, b2: 0, c2: 0, d2: 0, e2: 0, f2: 0,
a3: 0, b3: 0, c3: 0, d3: 0, e3: 0, f3: 0,
a4: 0, b4: 0, c4: 0, d4: 0, e4: 0, f4: 0,
a5: 0, b5: 0, c5: 0, d5: 0, e5: 0, f5: 0,
a6: 0, b6: 0, c6: 0, d6: 0, e6: 0, f6: 0,
a7: 0, b7: 0, c7: 0, d7: 0, e7: 0, f7: 0,
a8: 0, b8: 0, c8: 0, d8: 0, e8: 0, f8: 0,
a9: 0, b9: 0, c9: 0, d9: 0, e9: 0, f9: 0,
}
}
}


@ -1,125 +0,0 @@
use {
super::Engine,
crate::{HaltStatus, RuntimeErrors},
alloc::vec,
RuntimeErrors::*,
};
#[test]
fn invalid_program() {
let prog = vec![1, 0];
let mut eng = Engine::new(prog);
let ret = eng.run();
assert_eq!(ret, Err(InvalidOpcodePair(1, 0)));
}
#[test]
fn empty_program() {
let prog = vec![];
let mut eng = Engine::new(prog);
let ret = eng.run();
assert_eq!(ret, Ok(HaltStatus::Halted));
}
#[test]
fn max_quantum_reached() {
let prog = vec![0, 0, 0, 0];
let mut eng = Engine::new(prog);
eng.set_timer_callback(|| {
return 1;
});
eng.config.quantum = 1;
let ret = eng.run();
assert_eq!(ret, Ok(HaltStatus::Running));
}
#[test]
fn jump_out_of_bounds() {
use crate::bytecode::ops::Operations::JUMP;
let prog = vec![JUMP as u8, 0, 0, 0, 0, 0, 0, 1, 0];
let mut eng = Engine::new(prog);
let ret = eng.run();
assert_eq!(ret, Err(InvalidJumpAddress(256)));
}
#[test]
fn invalid_system_call() {
let prog = vec![255, 0];
let mut eng = Engine::new(prog);
let ret = eng.run();
assert_eq!(ret, Err(InvalidSystemCall(0)));
}
#[test]
fn add_u8() {
use crate::bytecode::ops::{MathOpSides::ConstantConstant, Operations::ADD};
let prog = vec![ADD as u8, ConstantConstant as u8, 100, 98, 0xA0];
let mut eng = Engine::new(prog);
let _ = eng.run();
assert_eq!(eng.registers.a0, 2);
}
#[test]
fn sub_u8() {
use crate::bytecode::ops::Operations::SUB;
let prog = vec![SUB as u8];
let mut eng = Engine::new(prog);
let _ = eng.run();
assert_eq!(eng.registers.a0, 1);
}
#[test]
fn mul_u8() {
use crate::bytecode::ops::{MathOpSides::ConstantConstant, Operations::MUL};
let prog = vec![MUL as u8, ConstantConstant as u8, 1, 2, 0xA0];
let mut eng = Engine::new(prog);
let _ = eng.run();
assert_eq!(eng.registers.a0, 2);
}
#[test]
fn div_u8() {
use crate::bytecode::ops::Operations::DIV;
let prog = vec![DIV as u8];
let mut eng = Engine::new(prog);
let _ = eng.run();
assert_eq!(eng.registers.a0, 2);
}
#[test]
fn set_register() {
let prog = alloc::vec![];
let mut eng = Engine::new(prog);
eng.set_register(0xA0, 1);
assert_eq!(eng.registers.a0, 1);
}
#[test]
fn load_u8() {
use crate::bytecode::ops::{Operations::LOAD, RWSubTypes::AddrToReg};
let prog = vec![LOAD as u8, AddrToReg as u8, 0, 0, 0, 0, 0, 0, 1, 0, 0xA0];
let mut eng = Engine::new(prog);
let ret = eng.memory.set_addr8(256, 1);
assert_eq!(ret, Ok(()));
let _ = eng.run();
assert_eq!(eng.registers.a0, 1);
}
#[test]
fn set_memory_8() {
let prog = vec![];
let mut eng = Engine::new(prog);
let ret = eng.memory.set_addr8(256, 1);
assert_eq!(ret, Ok(()));
}
#[test]
fn set_memory_64() {
let prog = vec![];
let mut eng = Engine::new(prog);
let ret = eng.memory.set_addr64(256, 1);
assert_eq!(ret, Ok(()));
}


@@ -1,23 +1,108 @@
+//! HoleyBytes Virtual Machine
+//!
+//! # Alloc feature
+//! - Enabled by default
+//! - Provides mapping / unmapping, as well as [`Default`] and [`Drop`]
+//!   implementations for soft-paged memory implementation
+
+// # General safety notice:
+// - Validation has to assure there are 256 registers (r0 - r255)
+// - Instructions have to be valid as specified (values and sizes)
+// - Mapped pages should be at least 4 KiB
+
 #![no_std]
+#![cfg_attr(feature = "nightly", feature(fn_align))]
+#![warn(missing_docs)]
+
+use mem::{Memory, Address};
+
+#[cfg(feature = "alloc")]
 extern crate alloc;
-pub mod bytecode;
-pub mod engine;
-pub mod memory;
-
-#[derive(Debug, PartialEq)]
-pub enum RuntimeErrors {
-    InvalidOpcodePair(u8, u8),
-    RegisterTooSmall,
-    HostError(u64),
-    PageNotMapped(u64),
-    InvalidJumpAddress(u64),
-    InvalidSystemCall(u8),
-}
-
-// If you solve the halting problem feel free to remove this
-#[derive(PartialEq, Debug)]
-pub enum HaltStatus {
-    Halted,
-    Running,
-}
+
+pub mod mem;
+pub mod value;
+
+mod bmc;
+mod vmrun;
+mod utils;
+
+use {bmc::BlockCopier, value::Value};
+
+/// HoleyBytes Virtual Machine
+pub struct Vm<Mem, const TIMER_QUOTIENT: usize> {
+    /// Holds 256 registers
+    ///
+    /// Writing to register 0 is considered undefined behaviour
+    /// in terms of HoleyBytes program execution
+    pub registers: [Value; 256],
+    /// Memory implementation
+    pub memory: Mem,
+    /// Program counter
+    pub pc: Address,
+    /// Program timer
+    timer: usize,
+    /// Saved block copier
+    copier: Option<BlockCopier>,
+}
+
+impl<Mem, const TIMER_QUOTIENT: usize> Vm<Mem, TIMER_QUOTIENT>
+where
+    Mem: Memory,
+{
+    /// Create a new VM with program and trap handler
+    ///
+    /// # Safety
+    /// Program code has to be validated
+    pub unsafe fn new(memory: Mem, entry: Address) -> Self {
+        Self {
+            registers: [Value::from(0_u64); 256],
+            memory,
+            pc: entry,
+            timer: 0,
+            copier: None,
+        }
+    }
+}
+
+/// Virtual machine halt error
+#[derive(Copy, Clone, Debug, PartialEq, Eq)]
+#[repr(u8)]
+pub enum VmRunError {
+    /// Tried to execute invalid instruction
+    InvalidOpcode(u8),
+    /// Unhandled load access exception
+    LoadAccessEx(Address),
+    /// Unhandled instruction load access exception
+    ProgramFetchLoadEx(Address),
+    /// Unhandled store access exception
+    StoreAccessEx(Address),
+    /// Register out-of-bounds access
+    RegOutOfBounds,
+    /// Address out-of-bounds
+    AddrOutOfBounds,
+    /// Reached unreachable code
+    Unreachable,
+}
+
+/// Virtual machine halt ok
+#[derive(Copy, Clone, Debug, PartialEq, Eq)]
+pub enum VmRunOk {
+    /// Program has reached its end
+    End,
+    /// Program was interrupted by a timer
+    Timer,
+    /// Environment call
+    Ecall,
+}

View file

@ -1,31 +1,83 @@
-use hbvm::{
-    bytecode::ops::{Operations::*, RWSubTypes::*},
-    engine::Engine,
-    RuntimeErrors,
-};
+use hbvm::mem::Address;
+
+use {
+    hbbytecode::valider::validate,
+    hbvm::{
+        mem::{
+            softpaging::{paging::PageTable, HandlePageFault, PageSize, SoftPagedMem},
+            MemoryAccessReason,
+        },
+        Vm,
+    },
+    std::io::{stdin, Read},
+};
-fn main() -> Result<(), RuntimeErrors> {
-    // TODO: Grab program from cmdline
-    #[rustfmt::skip]
-    let prog: Vec<u8> = vec![
-        NOP as u8,
-        JUMP as u8, 0, 0, 0, 0, 0, 0, 0, 0,
-    ];
-    let mut eng = Engine::new(prog);
-    // eng.set_timer_callback(time);
-    eng.enviroment_call_table[10] = Some(print_fn);
-    eng.run()?;
-    eng.dump();
-    println!("{:#?}", eng.registers);
+fn main() -> Result<(), Box<dyn std::error::Error>> {
+    let mut prog = vec![];
+    stdin().read_to_end(&mut prog)?;
+
+    if let Err(e) = validate(&prog) {
+        eprintln!("Program validation error: {e:?}");
+        return Ok(());
+    } else {
+        unsafe {
+            let mut vm = Vm::<_, 0>::new(
+                SoftPagedMem::<_, true> {
+                    pf_handler: TestTrapHandler,
+                    program: &prog,
+                    root_pt: Box::into_raw(Default::default()),
+                    icache: Default::default(),
+                },
+                Address::new(4),
+            );
+            let data = {
+                let ptr = std::alloc::alloc_zeroed(std::alloc::Layout::from_size_align_unchecked(
+                    4096, 4096,
+                ));
+                if ptr.is_null() {
+                    panic!("Alloc error tbhl");
+                }
+                ptr
+            };
+            vm.memory
+                .map(
+                    data,
+                    Address::new(8192),
+                    hbvm::mem::softpaging::paging::Permission::Write,
+                    PageSize::Size4K,
+                )
+                .unwrap();
+            println!("Program interrupt: {:?}", vm.run());
+            println!("{:?}", vm.registers);
+            std::alloc::dealloc(
+                data,
+                std::alloc::Layout::from_size_align_unchecked(4096, 4096),
+            );
+            vm.memory.unmap(Address::new(8192)).unwrap();
+            let _ = Box::from_raw(vm.memory.root_pt);
+        }
+    }
     Ok(())
 }
 pub fn time() -> u32 {
     9
 }
-pub fn print_fn(engine: &mut Engine) -> Result<&mut Engine, u64> {
-    println!("hello");
-    Ok(engine)
-}
+
+#[derive(Default)]
+struct TestTrapHandler;
+
+impl HandlePageFault for TestTrapHandler {
+    fn page_fault(
+        &mut self,
+        _: MemoryAccessReason,
+        _: &mut PageTable,
+        _: Address,
+        _: PageSize,
+        _: *mut u8,
+    ) -> bool {
+        false
+    }
+}

110
hbvm/src/mem/addr.rs Normal file
View file

@ -0,0 +1,110 @@
//! Virtual(?) memory address
use core::{fmt::Debug, ops};
use crate::utils::impl_display;
/// Memory address
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub struct Address(u64);
impl Address {
/// A null address
pub const NULL: Self = Self(0);
/// Saturating integer addition. Computes self + rhs, saturating at the numeric bounds instead of overflowing.
#[inline]
pub fn saturating_add<T: AddressOp>(self, rhs: T) -> Self {
Self(self.0.saturating_add(rhs.cast_u64()))
}
/// Saturating integer subtraction. Computes self - rhs, saturating at the numeric bounds instead of overflowing.
#[inline]
pub fn saturating_sub<T: AddressOp>(self, rhs: T) -> Self {
Self(self.0.saturating_sub(rhs.cast_u64()))
}
/// Cast or if smaller, truncate to [`usize`]
pub fn truncate_usize(self) -> usize {
self.0 as _
}
/// Get inner value
#[inline(always)]
pub fn get(self) -> u64 {
self.0
}
/// Construct new address
#[inline(always)]
pub fn new(val: u64) -> Self {
Self(val)
}
/// Do something with inner value
#[inline(always)]
pub fn map(self, f: impl Fn(u64) -> u64) -> Self {
Self(f(self.0))
}
}
impl_display!(for Address =>
|Address(a)| "{a:0x}"
);
impl<T: AddressOp> ops::Add<T> for Address {
type Output = Self;
#[inline]
fn add(self, rhs: T) -> Self::Output {
Self(self.0.wrapping_add(rhs.cast_u64()))
}
}
impl<T: AddressOp> ops::Sub<T> for Address {
type Output = Self;
#[inline]
fn sub(self, rhs: T) -> Self::Output {
Self(self.0.wrapping_sub(rhs.cast_u64()))
}
}
impl<T: AddressOp> ops::AddAssign<T> for Address {
fn add_assign(&mut self, rhs: T) {
self.0 = self.0.wrapping_add(rhs.cast_u64())
}
}
impl<T: AddressOp> ops::SubAssign<T> for Address {
fn sub_assign(&mut self, rhs: T) {
self.0 = self.0.wrapping_sub(rhs.cast_u64())
}
}
impl From<Address> for u64 {
#[inline(always)]
fn from(value: Address) -> Self {
value.0
}
}
impl Debug for Address {
fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
write!(f, "[{:0x}]", self.0)
}
}
/// Types that address operations can be performed with
pub trait AddressOp {
/// Cast to u64, truncating or extending
fn cast_u64(self) -> u64;
}
macro_rules! impl_address_ops(($($ty:ty),* $(,)?) => {
$(impl AddressOp for $ty {
#[inline(always)]
fn cast_u64(self) -> u64 { self as _ }
})*
});
impl_address_ops!(u8, u16, u32, u64, usize);
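A quick sketch of how `Address` composes in host code (illustrative only; it assumes nothing beyond the `hbvm::mem::Address` re-export visible in `mem/mod.rs` below):

use hbvm::mem::Address;

fn address_demo() {
    let a = Address::new(0x1000);
    assert_eq!((a + 0x10_u64).get(), 0x1010);          // wrapping add via ops::Add
    assert_eq!(a.saturating_sub(0x2000_u64).get(), 0); // clamps instead of wrapping
    assert_eq!(a.map(|x| x & !0xFFF).get(), 0x1000);   // masking through `map`
    assert_eq!(a.truncate_usize(), 0x1000);            // cast / truncate to usize
}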

86
hbvm/src/mem/mod.rs Normal file
View file

@ -0,0 +1,86 @@
//! Memory implementations
pub mod softpaging;
mod addr;
pub use addr::Address;
use {crate::utils::impl_display, hbbytecode::ProgramVal};
/// Load-store memory access
pub trait Memory {
/// Load data from memory on address
///
/// # Safety
/// - Shall not overrun the buffer
unsafe fn load(
&mut self,
addr: Address,
target: *mut u8,
count: usize,
) -> Result<(), LoadError>;
/// Store data to memory on address
///
/// # Safety
/// - Shall not overrun the buffer
unsafe fn store(
&mut self,
addr: Address,
source: *const u8,
count: usize,
) -> Result<(), StoreError>;
/// Read from program memory to execute
///
/// # Safety
/// - Data read has to be valid
unsafe fn prog_read<T: ProgramVal>(&mut self, addr: Address) -> Option<T>;
/// Read from program memory to execute
///
/// # Safety
/// - You have to be really sure that these bytes are there, understand?
unsafe fn prog_read_unchecked<T: ProgramVal>(&mut self, addr: Address) -> T;
}
/// Unhandled load access trap
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct LoadError(pub Address);
impl_display!(for LoadError =>
|LoadError(a)| "Load access error at address {a}",
);
/// Unhandled store access trap
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct StoreError(pub Address);
impl_display!(for StoreError =>
|StoreError(a)| "Store access error at address {a}",
);
/// Reason to access memory
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum MemoryAccessReason {
/// Memory was accessed for load (read)
Load,
/// Memory was accessed for store (write)
Store,
}
impl_display!(for MemoryAccessReason => match {
Self::Load => const "Load";
Self::Store => const "Store";
});
impl From<LoadError> for crate::VmRunError {
fn from(value: LoadError) -> Self {
Self::LoadAccessEx(value.0)
}
}
impl From<StoreError> for crate::VmRunError {
fn from(value: StoreError) -> Self {
Self::StoreAccessEx(value.0)
}
}
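To make the contract concrete, here is a minimal sketch of a flat, bounds-checked `Memory` implementation backed by one buffer. It is illustrative only (not part of this changeset) and assumes `hbbytecode::ProgramVal` imposes no requirements beyond being plainly readable from bytes:

use hbvm::mem::{Address, LoadError, Memory, StoreError};
use hbbytecode::ProgramVal;

/// Toy flat memory: one contiguous buffer, no paging, no permissions.
struct FlatMemory {
    data: Vec<u8>,
}

impl Memory for FlatMemory {
    unsafe fn load(&mut self, addr: Address, target: *mut u8, count: usize) -> Result<(), LoadError> {
        let start = addr.truncate_usize();
        let end = start.checked_add(count).ok_or(LoadError(addr))?;
        let src = self.data.get(start..end).ok_or(LoadError(addr))?;
        core::ptr::copy_nonoverlapping(src.as_ptr(), target, count);
        Ok(())
    }

    unsafe fn store(&mut self, addr: Address, source: *const u8, count: usize) -> Result<(), StoreError> {
        let start = addr.truncate_usize();
        let end = start.checked_add(count).ok_or(StoreError(addr))?;
        let dst = self.data.get_mut(start..end).ok_or(StoreError(addr))?;
        core::ptr::copy_nonoverlapping(source, dst.as_mut_ptr(), count);
        Ok(())
    }

    unsafe fn prog_read<T: ProgramVal>(&mut self, addr: Address) -> Option<T> {
        let start = addr.truncate_usize();
        self.data
            .get(start..start.checked_add(core::mem::size_of::<T>())?)
            .map(|s| s.as_ptr().cast::<T>().read_unaligned())
    }

    unsafe fn prog_read_unchecked<T: ProgramVal>(&mut self, addr: Address) -> T {
        // Caller promises the bytes are there, as the trait documentation demands.
        self.data.as_ptr().add(addr.truncate_usize()).cast::<T>().read_unaligned()
    }
}

The point is only the contract: report the faulting address through `LoadError`/`StoreError` instead of touching out-of-range memory. The soft-paged implementation below does the same behind a page table.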

View file

@ -0,0 +1,109 @@
//! Program instruction cache
use crate::mem::Address;
use {
super::{lookup::AddrPageLookuper, paging::PageTable, PageSize},
core::{
mem::{size_of, MaybeUninit},
ptr::{copy_nonoverlapping, NonNull},
},
};
/// Instruction cache
#[derive(Clone, Debug)]
pub struct ICache {
/// Current page address base
base: Address,
/// Current page pointer
data: Option<NonNull<u8>>,
/// Current page size
size: PageSize,
/// Address mask
mask: u64,
}
impl Default for ICache {
fn default() -> Self {
Self {
base: Address::NULL,
data: Default::default(),
size: PageSize::Size4K,
mask: Default::default(),
}
}
}
impl ICache {
/// Fetch instruction from cache
///
/// # Safety
/// `T` should be valid to read from instruction memory
pub(super) unsafe fn fetch<T>(
&mut self,
addr: Address,
root_pt: *const PageTable,
) -> Option<T> {
let mut ret = MaybeUninit::<T>::uninit();
let pbase = self
.data
.or_else(|| self.fetch_page(self.base + self.size, root_pt))?;
// Get address base
let base = addr.map(|x| x & self.mask);
// Base not matching, fetch anew
if base != self.base {
self.fetch_page(base, root_pt)?;
};
let offset = addr.get() & !self.mask;
let requ_size = size_of::<T>();
// Page overflow
let rem = (offset as usize)
.saturating_add(requ_size)
.saturating_sub(self.size as _);
let first_copy = requ_size.saturating_sub(rem);
// Copy non-overflowing part
copy_nonoverlapping(pbase.as_ptr(), ret.as_mut_ptr().cast::<u8>(), first_copy);
// Copy overflow
if rem != 0 {
let pbase = self.fetch_page(self.base + self.size, root_pt)?;
// Unlikely, unsupported scenario
if rem > self.size as _ {
return None;
}
copy_nonoverlapping(
pbase.as_ptr(),
ret.as_mut_ptr().cast::<u8>().add(first_copy),
rem,
);
}
Some(ret.assume_init())
}
/// Fetch a page
unsafe fn fetch_page(&mut self, addr: Address, pt: *const PageTable) -> Option<NonNull<u8>> {
let res = AddrPageLookuper::new(addr, 0, pt).next()?.ok()?;
if !super::perm_check::executable(res.perm) {
return None;
}
(self.size, self.mask) = match res.size {
4096 => (PageSize::Size4K, !((1 << 8) - 1)),
2097152 => (PageSize::Size2M, !((1 << (8 * 2)) - 1)),
1073741824 => (PageSize::Size1G, !((1 << (8 * 3)) - 1)),
_ => return None,
};
self.data = Some(NonNull::new(res.ptr)?);
self.base = addr.map(|x| x & self.mask);
self.data
}
}
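The page-boundary split in `fetch` is the subtle part; reduced to plain integers it is just the following (a standalone sketch mirroring the `rem`/`first_copy` computation above):

/// Split a `size`-byte read at `offset` inside a page of `page_size` bytes
/// into the part copied from this page and the part spilling into the next one.
fn split_at_page_boundary(offset: usize, size: usize, page_size: usize) -> (usize, usize) {
    let rem = offset.saturating_add(size).saturating_sub(page_size);
    let first_copy = size.saturating_sub(rem);
    (first_copy, rem)
}

fn icache_split_demo() {
    // Entirely inside a 4 KiB page: nothing spills over.
    assert_eq!(split_at_page_boundary(100, 8, 4096), (8, 0));
    // Starting 3 bytes before the page end: 3 bytes now, 5 from the next page.
    assert_eq!(split_at_page_boundary(4093, 8, 4096), (3, 5));
}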

View file

@ -0,0 +1,126 @@
//! Address lookup
use crate::mem::addr::Address;
use super::{
addr_extract_index,
paging::{PageTable, Permission},
PageSize,
};
/// Good result from address split
pub struct AddrPageLookupOk {
/// Virtual address
pub vaddr: Address,
/// Pointer to the start of the region to perform the operation on
pub ptr: *mut u8,
/// Usable size up to the end of the page or the end of the requested range
pub size: usize,
/// Page permission
pub perm: Permission,
}
/// Erroneous address split result
pub struct AddrPageLookupError {
/// Address of failure
pub addr: Address,
/// Requested page size
pub size: PageSize,
}
/// Address splitter into pages
pub struct AddrPageLookuper {
/// Current address
addr: Address,
/// Size left
size: usize,
/// Page table
pagetable: *const PageTable,
}
impl AddrPageLookuper {
/// Create a new page lookuper
#[inline]
pub const fn new(addr: Address, size: usize, pagetable: *const PageTable) -> Self {
Self {
addr,
size,
pagetable,
}
}
/// Bump address by size X
pub fn bump(&mut self, page_size: PageSize) {
self.addr += page_size;
self.size = self.size.saturating_sub(page_size as _);
}
}
impl Iterator for AddrPageLookuper {
type Item = Result<AddrPageLookupOk, AddrPageLookupError>;
fn next(&mut self) -> Option<Self::Item> {
// The end, everything is fine
if self.size == 0 {
return None;
}
let (base, perm, size, offset) = 'a: {
let mut current_pt = self.pagetable;
// Walk the page table
for lvl in (0..5).rev() {
// Get an entry
unsafe {
let entry = (*current_pt)
.table
.get_unchecked(addr_extract_index(self.addr, lvl));
let ptr = entry.ptr();
match entry.permission() {
// No page → page fault
Permission::Empty => {
return Some(Err(AddrPageLookupError {
addr: self.addr,
size: PageSize::from_lvl(lvl)?,
}))
}
// Node → proceed walking
Permission::Node => current_pt = ptr as _,
// Leaf → return relevant data
perm => {
break 'a (
// Pointer in host memory
ptr as *mut u8,
perm,
PageSize::from_lvl(lvl)?,
// In-page offset
addr_extract_index(self.addr, lvl),
);
}
}
}
}
return None; // Reached the end (should not happen)
};
// Get available byte count in the selected page with offset
let avail = (size as usize).saturating_sub(offset).clamp(0, self.size);
self.bump(size);
Some(Ok(AddrPageLookupOk {
vaddr: self.addr,
ptr: unsafe { base.add(offset) }, // Return pointer to the start of region
size: avail,
perm,
}))
}
}

View file

@ -0,0 +1,166 @@
//! Automatic memory mapping
use crate::{mem::addr::Address, utils::impl_display};
use {
super::{
addr_extract_index,
paging::{PageTable, Permission, PtEntry, PtPointedData},
PageSize, SoftPagedMem,
},
alloc::boxed::Box,
};
impl<'p, A, const OUT_PROG_EXEC: bool> SoftPagedMem<'p, A, OUT_PROG_EXEC> {
/// Maps host's memory into VM's memory
///
/// # Safety
/// - Your faith in the gods of UB
/// - Addr-san claims it's fine but who knows if she isn't lying :ferrisSus:
/// - Alright, Miri-sama is also fine with this, who knows why
pub unsafe fn map(
&mut self,
host: *mut u8,
target: Address,
perm: Permission,
pagesize: PageSize,
) -> Result<(), MapError> {
let mut current_pt = self.root_pt;
// Decide on what level depth are we going
let lookup_depth = match pagesize {
PageSize::Size4K => 0,
PageSize::Size2M => 1,
PageSize::Size1G => 2,
};
// Walk pagetable levels
for lvl in (lookup_depth + 1..5).rev() {
let entry = (*current_pt)
.table
.get_unchecked_mut(addr_extract_index(target, lvl));
let ptr = entry.ptr();
match entry.permission() {
// Still not on target and already seeing empty entry?
// No worries! Let's create one (allocates).
Permission::Empty => {
// Increase children count
(*current_pt).childen += 1;
let table = Box::into_raw(Box::new(PtPointedData {
pt: PageTable::default(),
}));
core::ptr::write(entry, PtEntry::new(table, Permission::Node));
current_pt = table as _;
}
// Continue walking
Permission::Node => current_pt = ptr as _,
// There is some entry on place of node
_ => return Err(MapError::PageOnNode),
}
}
let node = (*current_pt)
.table
.get_unchecked_mut(addr_extract_index(target, lookup_depth));
// Check if node is not mapped
if node.permission() != Permission::Empty {
return Err(MapError::AlreadyMapped);
}
// Write entry
(*current_pt).childen += 1;
core::ptr::write(node, PtEntry::new(host.cast(), perm));
Ok(())
}
/// Unmaps pages from VM's memory
///
/// If this errors, it only means there is no entry to unmap and in most cases
/// it should just be ignored.
pub fn unmap(&mut self, addr: Address) -> Result<(), NothingToUnmap> {
let mut current_pt = self.root_pt;
let mut page_tables = [core::ptr::null_mut(); 5];
// Walk page table in reverse
for lvl in (0..5).rev() {
let entry = unsafe {
(*current_pt)
.table
.get_unchecked_mut(addr_extract_index(addr, lvl))
};
let ptr = entry.ptr();
match entry.permission() {
// Nothing is there, throw an error, not critical!
Permission::Empty => return Err(NothingToUnmap),
// Node → save to visited page tables and continue walking
Permission::Node => {
page_tables[lvl as usize] = entry;
current_pt = ptr as _
}
// Page entry → zero it out!
// Zero page entry is completely valid entry with
// empty permission - no UB here!
_ => unsafe {
core::ptr::write_bytes(entry, 0, 1);
break;
},
}
}
// Now walk in order visited page tables
for entry in page_tables.into_iter() {
// Level not visited, skip.
if entry.is_null() {
continue;
}
unsafe {
let children = &mut (*(*entry).ptr()).pt.childen;
*children -= 1; // Decrease children count
// If there are no children, deallocate.
if *children == 0 {
let _ = Box::from_raw((*entry).ptr() as *mut PageTable);
// Zero visited entry
core::ptr::write_bytes(entry, 0, 1);
} else {
break;
}
}
}
Ok(())
}
}
/// Error mapping
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum MapError {
/// Entry was already mapped
AlreadyMapped,
/// When walking the page table, a page entry was
/// encountered instead of a node.
PageOnNode,
}
impl_display!(for MapError => match {
Self::AlreadyMapped => "There is already a page mapped on specified address";
Self::PageOnNode => "There was a page mapped on the way instead of node";
});
/// There was no entry in page table to unmap
///
/// No worry, don't panic, nothing bad has happened,
/// but if you are 120% sure there should be something,
/// double-check your addresses.
#[derive(Clone, Copy, Debug)]
pub struct NothingToUnmap;
impl_display!(for NothingToUnmap => "There is no entry to unmap");
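A usage sketch for the pair above (host side, `alloc` feature; illustrative only, and assuming `page` is a live, page-aligned 4 KiB host allocation such as the one in the example `main.rs` earlier):

use hbvm::mem::{
    softpaging::{mapping::MapError, paging::Permission, PageSize, SoftPagedMem},
    Address,
};

/// Map a 4 KiB host page at VM address 8192, then tear the mapping down again.
unsafe fn map_unmap_demo(mem: &mut SoftPagedMem<'_, ()>, page: *mut u8) {
    mem.map(page, Address::new(8192), Permission::Write, PageSize::Size4K)
        .expect("fresh mapping should succeed");
    // Mapping the same address twice is reported instead of silently overwritten.
    assert_eq!(
        mem.map(page, Address::new(8192), Permission::Write, PageSize::Size4K),
        Err(MapError::AlreadyMapped)
    );
    // The first unmap removes the entry; the second finds nothing and errors,
    // which, as documented above, can usually be ignored.
    assert!(mem.unmap(Address::new(8192)).is_ok());
    assert!(mem.unmap(Address::new(8192)).is_err());
    // The host page itself still belongs to the caller and must be freed separately.
}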

View file

@ -0,0 +1,296 @@
//! Platform independent, software paged memory implementation
pub mod icache;
pub mod lookup;
pub mod paging;
#[cfg(feature = "alloc")]
pub mod mapping;
use {
super::{addr::Address, LoadError, Memory, MemoryAccessReason, StoreError},
core::mem::size_of,
icache::ICache,
lookup::{AddrPageLookupError, AddrPageLookupOk, AddrPageLookuper},
paging::{PageTable, Permission},
};
/// HoleyBytes software paged memory
///
/// - `OUT_PROG_EXEC`: set to `false` to forbid executing program code
/// that is not contained in the initially provided program, even if the pages
/// are executable
#[derive(Clone, Debug)]
pub struct SoftPagedMem<'p, PfH, const OUT_PROG_EXEC: bool = true> {
/// Root page table
pub root_pt: *mut PageTable,
/// Page fault handler
pub pf_handler: PfH,
/// Program memory segment
pub program: &'p [u8],
/// Program instruction cache
pub icache: ICache,
}
impl<'p, PfH: HandlePageFault, const OUT_PROG_EXEC: bool> Memory
for SoftPagedMem<'p, PfH, OUT_PROG_EXEC>
{
/// Load value from an address
///
/// # Safety
/// Applies same conditions as for [`core::ptr::copy_nonoverlapping`]
unsafe fn load(
&mut self,
addr: Address,
target: *mut u8,
count: usize,
) -> Result<(), LoadError> {
self.memory_access(
MemoryAccessReason::Load,
addr,
target,
count,
perm_check::readable,
|src, dst, count| core::ptr::copy_nonoverlapping(src, dst, count),
)
.map_err(LoadError)
}
/// Store value to an address
///
/// # Safety
/// Applies same conditions as for [`core::ptr::copy_nonoverlapping`]
unsafe fn store(
&mut self,
addr: Address,
source: *const u8,
count: usize,
) -> Result<(), StoreError> {
self.memory_access(
MemoryAccessReason::Store,
addr,
source.cast_mut(),
count,
perm_check::writable,
|dst, src, count| core::ptr::copy_nonoverlapping(src, dst, count),
)
.map_err(StoreError)
}
#[inline(always)]
unsafe fn prog_read<T>(&mut self, addr: Address) -> Option<T> {
if OUT_PROG_EXEC && addr.truncate_usize() > self.program.len() {
return self.icache.fetch::<T>(addr, self.root_pt);
}
let addr = addr.truncate_usize();
self.program
.get(addr..addr + size_of::<T>())
.map(|x| x.as_ptr().cast::<T>().read())
}
#[inline(always)]
unsafe fn prog_read_unchecked<T>(&mut self, addr: Address) -> T {
if OUT_PROG_EXEC && addr.truncate_usize() > self.program.len() {
return self
.icache
.fetch::<T>(addr, self.root_pt)
.unwrap_or_else(|| core::mem::zeroed());
}
self.program
.as_ptr()
.add(addr.truncate_usize())
.cast::<T>()
.read()
}
}
impl<'p, PfH: HandlePageFault, const OUT_PROG_EXEC: bool> SoftPagedMem<'p, PfH, OUT_PROG_EXEC> {
// Everyone behold, the holy function, the god of HBVM memory accesses!
/// Split address to pages, check their permissions and feed pointers with offset
/// to a specified function.
///
/// If page is not found, execute page fault trap handler.
#[allow(clippy::too_many_arguments)] // Silence peasant
fn memory_access(
&mut self,
reason: MemoryAccessReason,
src: Address,
mut dst: *mut u8,
len: usize,
permission_check: fn(Permission) -> bool,
action: fn(*mut u8, *mut u8, usize),
) -> Result<(), Address> {
// Memory load from program section
let (src, len) = if src.truncate_usize() < self.program.len() as _ {
// Allow only loads
if reason != MemoryAccessReason::Load {
return Err(src);
}
// Determine how much data to copy from here
let to_copy = len.clamp(0, self.program.len().saturating_sub(src.truncate_usize()));
// Perform action
action(
unsafe { self.program.as_ptr().add(src.truncate_usize()).cast_mut() },
dst,
to_copy,
);
// Return shifted from what we've already copied
(
src.saturating_add(to_copy as u64),
len.saturating_sub(to_copy),
)
} else {
(src, len) // Nothing weird!
};
// Nothing to copy? Don't bother doing anything, bail.
if len == 0 {
return Ok(());
}
// Create new splitter
let mut pspl = AddrPageLookuper::new(src, len, self.root_pt);
loop {
match pspl.next() {
// Page is found
Some(Ok(AddrPageLookupOk {
vaddr,
ptr,
size,
perm,
})) => {
if !permission_check(perm) {
return Err(vaddr);
}
// Perform specified memory action and bump destination pointer
action(ptr, dst, size);
dst = unsafe { dst.add(size) };
}
// No page found
Some(Err(AddrPageLookupError { addr, size })) => {
// Attempt to execute page fault handler
if self.pf_handler.page_fault(
reason,
unsafe { &mut *self.root_pt },
addr,
size,
dst,
) {
// Shift the splitter address
pspl.bump(size);
// Bump dst pointer
dst = unsafe { dst.add(size as _) };
} else {
return Err(addr); // Unhandleable, VM will yield.
}
}
// No remaining pages, we are done!
None => return Ok(()),
}
}
}
}
/// Extract index in page table on specified level
///
/// The level shall not be larger than 4, otherwise
/// the output of the function is unspecified (yes, it can also panic :)
pub fn addr_extract_index(addr: Address, lvl: u8) -> usize {
debug_assert!(lvl <= 4);
let addr = addr.get();
usize::try_from((addr >> (lvl * 8 + 12)) & ((1 << 8) - 1)).expect("?conradluget a better CPU")
}
/// Page size
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum PageSize {
/// 4 KiB page (on level 0)
Size4K = 4096,
/// 2 MiB page (on level 1)
Size2M = 1024 * 1024 * 2,
/// 1 GiB page (on level 2)
Size1G = 1024 * 1024 * 1024,
}
impl PageSize {
/// Convert page table level to size of page
const fn from_lvl(lvl: u8) -> Option<Self> {
match lvl {
0 => Some(PageSize::Size4K),
1 => Some(PageSize::Size2M),
2 => Some(PageSize::Size1G),
_ => None,
}
}
}
impl core::ops::Add<PageSize> for Address {
type Output = Self;
#[inline(always)]
fn add(self, rhs: PageSize) -> Self::Output {
self + (rhs as u64)
}
}
impl core::ops::AddAssign<PageSize> for Address {
#[inline(always)]
fn add_assign(&mut self, rhs: PageSize) {
*self = Self::new(self.get().wrapping_add(rhs as u64));
}
}
/// Permission checks
pub mod perm_check {
use super::paging::Permission;
/// Page is readable
#[inline(always)]
pub const fn readable(perm: Permission) -> bool {
matches!(
perm,
Permission::Readonly | Permission::Write | Permission::Exec
)
}
/// Page is writable
#[inline(always)]
pub const fn writable(perm: Permission) -> bool {
matches!(perm, Permission::Write)
}
/// Page is executable
#[inline(always)]
pub const fn executable(perm: Permission) -> bool {
matches!(perm, Permission::Exec)
}
}
/// Handle VM traps
pub trait HandlePageFault {
/// Handle page fault
///
/// Return true if handling was successful,
/// otherwise the program will be interrupted and will
/// yield an error.
fn page_fault(
&mut self,
reason: MemoryAccessReason,
pagetable: &mut PageTable,
vaddr: Address,
size: PageSize,
dataptr: *mut u8,
) -> bool
where
Self: Sized;
}
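Returning `true` from `page_fault` tells `memory_access` that the handler satisfied the access for that page-sized chunk (typically by filling or consuming the buffer behind `dataptr`); returning `false` makes `run` yield `LoadAccessEx`/`StoreAccessEx`. A deliberately boring sketch of a handler (illustrative, not part of this changeset) that denies everything but keeps statistics:

use hbvm::mem::{
    softpaging::{paging::PageTable, HandlePageFault, PageSize},
    Address, MemoryAccessReason,
};

/// Deny all faulting accesses, but count them for debugging.
#[derive(Default)]
struct CountingHandler {
    loads: usize,
    stores: usize,
}

impl HandlePageFault for CountingHandler {
    fn page_fault(
        &mut self,
        reason: MemoryAccessReason,
        _pt: &mut PageTable,
        _vaddr: Address,
        _size: PageSize,
        _dataptr: *mut u8,
    ) -> bool {
        match reason {
            MemoryAccessReason::Load => self.loads += 1,
            MemoryAccessReason::Store => self.stores += 1,
        }
        false // unhandled: the VM yields the corresponding access exception
    }
}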

View file

@ -0,0 +1,86 @@
//! Page table and associated structures implementation
use core::{fmt::Debug, mem::MaybeUninit};
/// Page entry permission
#[derive(Clone, Copy, Debug, Default, PartialEq, Eq)]
#[repr(u8)]
pub enum Permission {
/// No page present
#[default]
Empty,
/// Points to another pagetable
Node,
/// Page is read only
Readonly,
/// Page is readable and writable
Write,
/// Page is readable and executable
Exec,
}
/// Page table entry
#[derive(Clone, Copy, Default, PartialEq, Eq)]
pub struct PtEntry(u64);
impl PtEntry {
/// Create new
///
/// # Safety
/// - `ptr` has to point to valid data and shall not be deallocated
/// throughout the entry lifetime
#[inline]
pub unsafe fn new(ptr: *mut PtPointedData, permission: Permission) -> Self {
Self(ptr as u64 | permission as u64)
}
/// Get permission
#[inline]
pub fn permission(&self) -> Permission {
unsafe { core::mem::transmute(self.0 as u8 & 0b111) }
}
/// Get pointer to the data (leaf) or next page table (node)
#[inline]
pub fn ptr(&self) -> *mut PtPointedData {
(self.0 & !((1 << 12) - 1)) as _
}
}
impl Debug for PtEntry {
fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
f.debug_struct("PtEntry")
.field("ptr", &self.ptr())
.field("permission", &self.permission())
.finish()
}
}
/// Page table
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
#[repr(align(4096))]
pub struct PageTable {
/// How many entries are in use
pub childen: u8,
/// Entries
pub table: [PtEntry; 256],
}
impl Default for PageTable {
fn default() -> Self {
// SAFETY: It's fine, zeroed page table entry is valid (= empty)
Self {
childen: 0,
table: unsafe { MaybeUninit::zeroed().assume_init() },
}
}
}
/// Data that a page table entry can possibly point to
#[derive(Clone, Copy)]
#[repr(C, align(4096))]
pub union PtPointedData {
/// Node - next page table
pub pt: PageTable,
/// Leaf
pub page: u8,
}
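Because `PtPointedData` is 4 KiB-aligned, the low 12 bits of its address are free, and `PtEntry` keeps the permission in the low 3 of them; a small round-trip sketch (host side, illustrative only):

use hbvm::mem::softpaging::paging::{PageTable, Permission, PtEntry, PtPointedData};

unsafe fn pt_entry_demo() {
    // A boxed, 4096-aligned union gives a pointer whose low 12 bits are zero.
    let data = Box::into_raw(Box::new(PtPointedData { pt: PageTable::default() }));
    let entry = PtEntry::new(data, Permission::Write);
    assert_eq!(entry.permission(), Permission::Write); // recovered from the low bits
    assert_eq!(entry.ptr(), data);                      // recovered from the high bits
    drop(Box::from_raw(data));                          // give the allocation back
}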

View file

@ -1,70 +0,0 @@
use crate::engine::VMPage;
use {
crate::{engine::Page, RuntimeErrors},
alloc::vec::Vec,
hashbrown::HashMap,
log::trace,
};
pub struct Memory {
inner: HashMap<u64, Page>,
}
impl Memory {
pub fn new() -> Self {
Self {
inner: HashMap::new(),
}
//
}
pub fn map_vec(&mut self, address: u64, vec: Vec<u8>) {
panic!("Mapping vectors into pages is not supported yet");
}
}
impl Memory {
pub fn read_addr8(&mut self, address: u64) -> Result<u8, RuntimeErrors> {
let (page, offset) = addr_to_page(address);
trace!("page {} offset {}", page, offset);
match self.inner.get(&page) {
Some(page) => {
let val = page.data()[offset as usize];
trace!("Value {}", val);
Ok(val)
}
None => {
trace!("page not mapped");
Err(RuntimeErrors::PageNotMapped(page))
}
}
}
pub fn read_addr64(&mut self, address: u64) -> u64 {
unimplemented!()
}
pub fn set_addr8(&mut self, address: u64, value: u8) -> Result<(), RuntimeErrors> {
let (page, offset) = addr_to_page(address);
let ret: Option<(&u64, &mut Page)> = self.inner.get_key_value_mut(&page);
match ret {
Some((_, page)) => {
page.data()[offset as usize] = value;
}
None => {
let mut pg = VMPage::new();
pg.data[offset as usize] = value;
self.inner.insert(page, Page::VMPage(pg));
trace!("Mapped page {}", page);
}
}
Ok(())
}
pub fn set_addr64(&mut self, address: u64, value: u64) -> Result<(), RuntimeErrors> {
unimplemented!()
}
}
fn addr_to_page(addr: u64) -> (u64, u64) {
(addr / 8192, addr % 8192)
}

53
hbvm/src/utils.rs Normal file
View file

@ -0,0 +1,53 @@
macro_rules! impl_display {
(for $ty:ty => $(|$selfty:pat_param|)? $fmt:literal $(, $($param:expr),+)? $(,)?) => {
impl ::core::fmt::Display for $ty {
fn fmt(&self, f: &mut ::core::fmt::Formatter<'_>) -> ::core::fmt::Result {
$(let $selfty = self;)?
write!(f, $fmt, $($param),*)
}
}
};
(for $ty:ty => $str:literal) => {
impl ::core::fmt::Display for $ty {
fn fmt(&self, f: &mut ::core::fmt::Formatter<'_>) -> ::core::fmt::Result {
f.write_str($str)
}
}
};
(for $ty:ty => match {$(
$bind:pat => $($const:ident)? $fmt:literal $(,$($params:tt)*)?;
)*}) => {
impl ::core::fmt::Display for $ty {
fn fmt(&self, f: &mut ::core::fmt::Formatter<'_>) -> ::core::fmt::Result {
match self {
$(
$bind => $crate::utils::internal::impl_display_match_fragment!($($const,)? f, $fmt $(, $($params)*)?)
),*
}
}
}
}
}
#[doc(hidden)]
pub(crate) mod internal {
macro_rules! impl_display_match_fragment {
(const, $f:expr, $lit:literal) => {
$f.write_str($lit)
};
($f:expr, $fmt:literal $(, $($params:tt)*)?) => {
write!($f, $fmt, $($($params)*)?)
};
}
pub(crate) use impl_display_match_fragment;
}
macro_rules! static_assert_eq(($l:expr, $r:expr $(,)?) => {
const _: [(); ($l != $r) as usize] = [];
});
pub(crate) use {impl_display, static_assert_eq};
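For orientation, this is how the forms are meant to be invoked from inside the crate (the macros are `pub(crate)`, so this is a crate-internal sketch; `DemoError` is made up for illustration):

use crate::utils::impl_display;

#[derive(Debug)]
enum DemoError {
    Full,
    Missing(u64),
}

// Match form: `const` arms write a fixed string, other arms format normally.
impl_display!(for DemoError => match {
    Self::Full => const "demo: storage is full";
    Self::Missing(n) => "demo: item {n} is missing";
});

// The pattern form is used the same way as for `LoadError` earlier:
//   impl_display!(for Wrapper => |Wrapper(x)| "wrapped {x}");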

80
hbvm/src/value.rs Normal file
View file

@ -0,0 +1,80 @@
//! HoleyBytes register value definition
/// Define [`Value`] union
///
/// # Safety
/// Union variants have to be sound to byte-reinterpret
/// between each other. Otherwise the behaviour is undefined.
macro_rules! value_def {
($($ty:ident),* $(,)?) => {
/// HBVM register value
#[derive(Copy, Clone)]
#[repr(packed)]
pub union Value {
$(
#[doc = concat!(stringify!($ty), " type")]
pub $ty: $ty
),*
}
$(
impl From<$ty> for Value {
#[inline]
fn from(value: $ty) -> Self {
Self { $ty: value }
}
}
crate::utils::static_assert_eq!(
core::mem::size_of::<$ty>(),
core::mem::size_of::<Value>(),
);
impl private::Sealed for $ty {}
unsafe impl ValueVariant for $ty {}
)*
};
}
impl Value {
/// Byte reinterpret value to target variant
#[inline]
pub fn cast<V: ValueVariant>(self) -> V {
/// Evil.
///
/// Transmute cannot be performed with generic type
/// as size is unknown, so union is used.
///
/// # Safety
/// If [`ValueVariant`] implemented correctly, it's fine :)
///
/// :ferrisClueless:
union Transmute<Variant: ValueVariant> {
/// Self
src: Value,
/// Target variant
variant: Variant,
}
unsafe { Transmute { src: self }.variant }
}
}
/// # Safety
/// - N/A, not to be implemented manually
pub unsafe trait ValueVariant: private::Sealed + Copy + Into<Value> {}
mod private {
pub trait Sealed {}
}
value_def!(u64, i64, f64);
crate::utils::static_assert_eq!(core::mem::size_of::<Value>(), 8);
impl core::fmt::Debug for Value {
fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
// Print formatted as hexadecimal, unsigned integer
write!(f, "{:x}", self.cast::<u64>())
}
}
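`cast` is a bit reinterpretation, not a numeric conversion; a short sketch:

use hbvm::value::Value;

fn value_demo() {
    let v = Value::from(1.5_f64);
    assert_eq!(v.cast::<u64>(), 1.5_f64.to_bits()); // same bits, different view
    assert_eq!(Value::from(u64::MAX).cast::<i64>(), -1);
}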

409
hbvm/src/vmrun.rs Normal file
View file

@ -0,0 +1,409 @@
//! Welcome to the land of The Great Dispatch Loop
//!
//! Have fun
use crate::mem::Address;
use {
super::{
bmc::BlockCopier,
mem::Memory,
value::{Value, ValueVariant},
Vm, VmRunError, VmRunOk,
},
core::{cmp::Ordering, mem::size_of, ops},
hbbytecode::{
ParamBB, ParamBBB, ParamBBBB, ParamBBD, ParamBBDH, ParamBBW, ParamBD, ProgramVal,
},
};
impl<Mem, const TIMER_QUOTIENT: usize> Vm<Mem, TIMER_QUOTIENT>
where
Mem: Memory,
{
/// Execute program
///
/// Program can return [`VmRunError`] if a trap handling failed
#[cfg_attr(feature = "nightly", repr(align(4096)))]
pub fn run(&mut self) -> Result<VmRunOk, VmRunError> {
use hbbytecode::opcode::*;
loop {
// Big match
//
// Contribution guide:
// - Zero register shall never be overwritten. Its value has to always be 0.
// - Prefer `Self::read_reg` and `Self::write_reg` functions
// - Extract parameters using `param!` macro
// - Prioritise speed over code size
// - Memory is cheap, CPUs not that much
// - Do not heap allocate at any cost
// - Yes, user-provided trap handler may allocate,
// but that is not our »fault«.
// - Unsafe is kinda must, but be sure you have validated everything
// - Your contributions have to pass sanitizers and Miri
// - Strictly follow the spec
// - The spec does not specify how you perform actions, in what order,
// just that the observable effects have to be performed in order and
// correctly.
// - Yes, we assume you run 64 bit CPU. Else ?conradluget a better CPU
// sorry 8 bit fans, HBVM won't run on your Speccy :(
unsafe {
match self
.memory
.prog_read::<u8>(self.pc as _)
.ok_or(VmRunError::ProgramFetchLoadEx(self.pc as _))?
{
UN => {
self.decode::<()>();
return Err(VmRunError::Unreachable);
}
TX => {
self.decode::<()>();
return Ok(VmRunOk::End);
}
NOP => self.decode::<()>(),
ADD => self.binary_op(u64::wrapping_add),
SUB => self.binary_op(u64::wrapping_sub),
MUL => self.binary_op(u64::wrapping_mul),
AND => self.binary_op::<u64>(ops::BitAnd::bitand),
OR => self.binary_op::<u64>(ops::BitOr::bitor),
XOR => self.binary_op::<u64>(ops::BitXor::bitxor),
SL => self.binary_op(|l, r| u64::wrapping_shl(l, r as u32)),
SR => self.binary_op(|l, r| u64::wrapping_shr(l, r as u32)),
SRS => self.binary_op(|l, r| i64::wrapping_shr(l, r as u32)),
CMP => {
// Compare a0 <=> a1
// < → 0
// = → 1
// > → 2
let ParamBBB(tg, a0, a1) = self.decode();
self.write_reg(
tg,
self.read_reg(a0)
.cast::<i64>()
.cmp(&self.read_reg(a1).cast::<i64>())
as i64
+ 1,
);
}
CMPU => {
// Unsigned comparison
let ParamBBB(tg, a0, a1) = self.decode();
self.write_reg(
tg,
self.read_reg(a0)
.cast::<u64>()
.cmp(&self.read_reg(a1).cast::<u64>())
as i64
+ 1,
);
}
NOT => {
// Logical negation
let ParamBB(tg, a0) = self.decode();
self.write_reg(tg, !self.read_reg(a0).cast::<u64>());
}
NEG => {
// Bitwise negation
let ParamBB(tg, a0) = self.decode();
self.write_reg(
tg,
match self.read_reg(a0).cast::<u64>() {
0 => 1_u64,
_ => 0,
},
);
}
DIR => {
// Fused Division-Remainder
let ParamBBBB(dt, rt, a0, a1) = self.decode();
let a0 = self.read_reg(a0).cast::<u64>();
let a1 = self.read_reg(a1).cast::<u64>();
self.write_reg(dt, a0.checked_div(a1).unwrap_or(u64::MAX));
self.write_reg(rt, a0.checked_rem(a1).unwrap_or(u64::MAX));
}
ADDI => self.binary_op_imm(u64::wrapping_add),
MULI => self.binary_op_imm(u64::wrapping_sub),
ANDI => self.binary_op_imm::<u64>(ops::BitAnd::bitand),
ORI => self.binary_op_imm::<u64>(ops::BitOr::bitor),
XORI => self.binary_op_imm::<u64>(ops::BitXor::bitxor),
SLI => self.binary_op_ims(u64::wrapping_shl),
SRI => self.binary_op_ims(u64::wrapping_shr),
SRSI => self.binary_op_ims(i64::wrapping_shr),
CMPI => {
let ParamBBD(tg, a0, imm) = self.decode();
self.write_reg(
tg,
self.read_reg(a0)
.cast::<i64>()
.cmp(&Value::from(imm).cast::<i64>())
as i64
+ 1,
);
}
CMPUI => {
let ParamBBD(tg, a0, imm) = self.decode();
self.write_reg(tg, self.read_reg(a0).cast::<u64>().cmp(&imm) as i64 + 1);
}
CP => {
let ParamBB(tg, a0) = self.decode();
self.write_reg(tg, self.read_reg(a0));
}
SWA => {
// Swap registers
let ParamBB(r0, r1) = self.decode();
match (r0, r1) {
(0, 0) => (),
(dst, 0) | (0, dst) => self.write_reg(dst, 0_u64),
(r0, r1) => {
core::ptr::swap(
self.registers.get_unchecked_mut(usize::from(r0)),
self.registers.get_unchecked_mut(usize::from(r1)),
);
}
}
}
LI => {
let ParamBD(tg, imm) = self.decode();
self.write_reg(tg, imm);
}
LD => {
// Load. If loading more than register size, continue on adjacent registers
let ParamBBDH(dst, base, off, count) = self.decode();
let n: u8 = match dst {
0 => 1,
_ => 0,
};
self.memory.load(
self.ldst_addr_uber(dst, base, off, count, n)?,
self.registers
.as_mut_ptr()
.add(usize::from(dst) + usize::from(n))
.cast(),
usize::from(count).saturating_sub(n.into()),
)?;
}
ST => {
// Store. Same rules apply as to LD
let ParamBBDH(dst, base, off, count) = self.decode();
self.memory.store(
self.ldst_addr_uber(dst, base, off, count, 0)?,
self.registers.as_ptr().add(usize::from(dst)).cast(),
count.into(),
)?;
}
BMC => {
// Block memory copy
match if let Some(copier) = &mut self.copier {
// There is some copier, poll.
copier.poll(&mut self.memory)
} else {
// There is none, make one!
let ParamBBD(src, dst, count) = self.decode();
// So we are still on BMC on next cycle
self.pc -= size_of::<ParamBBD>() + 1;
self.copier = Some(BlockCopier::new(
Address::new(self.read_reg(src).cast()),
Address::new(self.read_reg(dst).cast()),
count as _,
));
self.copier
.as_mut()
.unwrap_unchecked() // SAFETY: We just assigned there
.poll(&mut self.memory)
} {
// We are done, shift program counter
core::task::Poll::Ready(Ok(())) => {
self.copier = None;
self.pc += size_of::<ParamBBD>() + 1;
}
// Error, shift program counter (for consistency)
// and yield error
core::task::Poll::Ready(Err(e)) => {
self.pc += size_of::<ParamBBD>() + 1;
return Err(e.into());
}
// Not done yet, proceed to next cycle
core::task::Poll::Pending => (),
}
}
BRC => {
// Block register copy
let ParamBBB(src, dst, count) = self.decode();
if src.checked_add(count).is_none() || dst.checked_add(count).is_none() {
return Err(VmRunError::RegOutOfBounds);
}
core::ptr::copy(
self.registers.get_unchecked(usize::from(src)),
self.registers.get_unchecked_mut(usize::from(dst)),
usize::from(count),
);
}
JAL => {
// Jump and link. Save PC after this instruction to
// specified register and jump to reg + offset.
let ParamBBD(save, reg, offset) = self.decode();
self.write_reg(save, self.pc.get());
self.pc =
Address::new(self.read_reg(reg).cast::<u64>().saturating_add(offset));
}
// Conditional jumps, jump only to immediates
JEQ => self.cond_jmp::<u64>(Ordering::Equal),
JNE => {
let ParamBBD(a0, a1, jt) = self.decode();
if self.read_reg(a0).cast::<u64>() != self.read_reg(a1).cast::<u64>() {
self.pc = Address::new(jt);
}
}
JLT => self.cond_jmp::<i64>(Ordering::Less),
JGT => self.cond_jmp::<i64>(Ordering::Greater),
JLTU => self.cond_jmp::<u64>(Ordering::Less),
JGTU => self.cond_jmp::<u64>(Ordering::Greater),
ECALL => {
self.decode::<()>();
// So we don't get timer interrupt after ECALL
if TIMER_QUOTIENT != 0 {
self.timer = self.timer.wrapping_add(1);
}
return Ok(VmRunOk::Ecall);
}
ADDF => self.binary_op::<f64>(ops::Add::add),
SUBF => self.binary_op::<f64>(ops::Sub::sub),
MULF => self.binary_op::<f64>(ops::Mul::mul),
DIRF => {
let ParamBBBB(dt, rt, a0, a1) = self.decode();
let a0 = self.read_reg(a0).cast::<f64>();
let a1 = self.read_reg(a1).cast::<f64>();
self.write_reg(dt, a0 / a1);
self.write_reg(rt, a0 % a1);
}
FMAF => {
let ParamBBBB(dt, a0, a1, a2) = self.decode();
self.write_reg(
dt,
self.read_reg(a0).cast::<f64>() * self.read_reg(a1).cast::<f64>()
+ self.read_reg(a2).cast::<f64>(),
);
}
NEGF => {
let ParamBB(dt, a0) = self.decode();
self.write_reg(dt, -self.read_reg(a0).cast::<f64>());
}
ITF => {
let ParamBB(dt, a0) = self.decode();
self.write_reg(dt, self.read_reg(a0).cast::<i64>() as f64);
}
FTI => {
let ParamBB(dt, a0) = self.decode();
self.write_reg(dt, self.read_reg(a0).cast::<f64>() as i64);
}
ADDFI => self.binary_op_imm::<f64>(ops::Add::add),
MULFI => self.binary_op_imm::<f64>(ops::Mul::mul),
op => return Err(VmRunError::InvalidOpcode(op)),
}
}
if TIMER_QUOTIENT != 0 {
self.timer = self.timer.wrapping_add(1);
if self.timer % TIMER_QUOTIENT == 0 {
return Ok(VmRunOk::Timer);
}
}
}
}
/// Decode instruction operands
#[inline(always)]
unsafe fn decode<T: ProgramVal>(&mut self) -> T {
let pc1 = self.pc + 1_u64;
let data = self.memory.prog_read_unchecked::<T>(pc1 as _);
self.pc += 1 + size_of::<T>();
data
}
/// Perform binary operating over two registers
#[inline(always)]
unsafe fn binary_op<T: ValueVariant>(&mut self, op: impl Fn(T, T) -> T) {
let ParamBBB(tg, a0, a1) = self.decode();
self.write_reg(
tg,
op(self.read_reg(a0).cast::<T>(), self.read_reg(a1).cast::<T>()),
);
}
/// Perform binary operation over register and immediate
#[inline(always)]
unsafe fn binary_op_imm<T: ValueVariant>(&mut self, op: impl Fn(T, T) -> T) {
let ParamBBD(tg, reg, imm) = self.decode();
self.write_reg(
tg,
op(self.read_reg(reg).cast::<T>(), Value::from(imm).cast::<T>()),
);
}
/// Perform binary operation over register and shift immediate
#[inline(always)]
unsafe fn binary_op_ims<T: ValueVariant>(&mut self, op: impl Fn(T, u32) -> T) {
let ParamBBW(tg, reg, imm) = self.decode();
self.write_reg(tg, op(self.read_reg(reg).cast::<T>(), imm));
}
/// Jump at `#2` if ordering on `#0 <=> #1` is equal to expected
#[inline(always)]
unsafe fn cond_jmp<T: ValueVariant + Ord>(&mut self, expected: Ordering) {
let ParamBBD(a0, a1, ja) = self.decode();
if self
.read_reg(a0)
.cast::<T>()
.cmp(&self.read_reg(a1).cast::<T>())
== expected
{
self.pc = Address::new(ja);
}
}
/// Read register
#[inline(always)]
unsafe fn read_reg(&self, n: u8) -> Value {
*self.registers.get_unchecked(n as usize)
}
/// Write a register.
/// Writing to register 0 is no-op.
#[inline(always)]
unsafe fn write_reg(&mut self, n: u8, value: impl Into<Value>) {
if n != 0 {
*self.registers.get_unchecked_mut(n as usize) = value.into();
}
}
/// Load / Store Address check-computation überfunction
#[inline(always)]
unsafe fn ldst_addr_uber(
&self,
dst: u8,
base: u8,
offset: u64,
size: u16,
adder: u8,
) -> Result<Address, VmRunError> {
let reg = dst.checked_add(adder).ok_or(VmRunError::RegOutOfBounds)?;
if usize::from(reg) * 8 + usize::from(size) > 2048 {
Err(VmRunError::RegOutOfBounds)
} else {
self.read_reg(base)
.cast::<u64>()
.checked_add(offset)
.and_then(|x| x.checked_add(adder.into()))
.ok_or(VmRunError::AddrOutOfBounds)
.map(Address::new)
}
}
}
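Because `run` returns on every environment call (and, with a non-zero `TIMER_QUOTIENT`, on timer ticks), the host is expected to drive it in a loop. A hedged sketch of such a driver; the register-based syscall convention is purely an assumption of this example:

use hbvm::{mem::Memory, Vm, VmRunError, VmRunOk};

/// Drive a prepared VM to completion, resuming after ecalls and timer ticks.
fn drive<M: Memory>(vm: &mut Vm<M, 1024>) -> Result<(), VmRunError> {
    loop {
        match vm.run()? {
            // TX was executed; the program is done.
            VmRunOk::End => return Ok(()),
            // ECALL: dispatch on whatever calling convention the host defines;
            // reading a syscall number from r1 here is only an assumption.
            VmRunOk::Ecall => {
                let _syscall = vm.registers[1].cast::<u64>();
            }
            // TIMER_QUOTIENT instructions elapsed: a natural preemption point.
            VmRunOk::Timer => {}
        }
    }
}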

View file

@ -1,3 +1,4 @@
 hex_literal_case = "Upper"
 imports_granularity = "One"
-struct_field_align_threshold = 5
+struct_field_align_threshold = 8
+enum_discrim_align_threshold = 8

317
spec.md Normal file
View file

@ -0,0 +1,317 @@
# HoleyBytes ISA Specification
# Bytecode format
- Holey Bytes program should start with the following magic: `[0xAB, 0x1E, 0x0B]`
- All numbers are encoded little-endian
- There are 256 registers; they are represented by a byte
- Immediate values are 64 bit
- The program is required by this spec to be terminated with 12 zero bytes
### Instruction encoding
- Instruction parameters are packed (no alignment)
- [opcode, …parameters…]
### Instruction parameter types
- B = Byte
- D = Doubleword (64 bits)
- H = Halfword (16 bits)
| Name | Size |
|:----:|:--------|
| BBBB | 32 bits |
| BBB | 24 bits |
| BBDH | 96 bits |
| BBD | 80 bits |
| BBW | 48 bits |
| BB | 16 bits |
| BD | 72 bits |
| D | 64 bits |
| N | 0 bits |
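For illustration, packing follows directly from the tables above; a sketch of hand-encoding `addi r1, r1, 5` (opcode 17, type BBD) in Rust. The real `hbasm`/`hbbytecode` tooling should be preferred:

```rust
// addi r1, r1, 5: opcode byte, two register bytes, 64-bit little-endian immediate
fn encode_addi(dst: u8, src: u8, imm: u64) -> Vec<u8> {
    let mut out = vec![17, dst, src];
    out.extend_from_slice(&imm.to_le_bytes());
    out
}

fn example_program() -> Vec<u8> {
    let mut prog = vec![0xAB, 0x1E, 0x0B];  // magic
    prog.extend(encode_addi(1, 1, 5));      // 11 bytes: opcode + 80 bits of parameters
    prog.push(1);                           // TX (terminate execution)
    prog.extend([0u8; 12]);                 // required terminating zero bytes
    prog
}
```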
# Instructions
- `#n`: register in parameter *n*
- `imm #n`: immediate value in parameter *n*
- `P ← V`: Set register P to value V
- `[x]`: Address x
## Program execution control
- N type
| Opcode | Name | Action |
|:------:|:----:|:-----------------------------:|
| 0 | UN | Trigger unreachable code trap |
| 1 | TX | Terminate execution |
| 2 | NOP | Do nothing |
## Integer binary ops.
- BBB type
- `#0 ← #1 <op> #2`
| Opcode | Name | Action |
|:------:|:----:|:-----------------------:|
| 3 | ADD | Wrapping addition |
| 4 | SUB | Wrapping subtraction |
| 5 | MUL | Wrapping multiplication |
| 6 | AND | Bitand |
| 7 | OR | Bitor |
| 8 | XOR | Bitxor |
| 9 | SL | Unsigned left bitshift |
| 10 | SR | Unsigned right bitshift |
| 11 | SRS | Signed right bitshift |
### Comparison
| Opcode | Name | Action |
|:------:|:----:|:-------------------:|
| 12 | CMP | Signed comparison |
| 13 | CMPU | Unsigned comparison |
#### Comparison table
| #1 *op* #2 | Result |
|:----------:|:------:|
| < | 0 |
| = | 1 |
| > | 2 |
### Division-remainder
- Type BBBB
- If `#3` is zero, the resulting value is all-ones
- `#0 ← #2 ÷ #3`
- `#1 ← #2 % #3`
| Opcode | Name | Action |
|:------:|:----:|:-------------------------------:|
| 14 | DIR | Divide and remainder combined |
### Negations
- Type BB
- `#0 ← <op> #1`
| Opcode | Name | Action |
|:------:|:----:|:----------------:|
| 15 | NEG | Bit negation |
| 16 | NOT | Logical negation |
## Integer immediate binary ops.
- Type BBD
- `#0 ← #1 <op> imm #2`
| Opcode | Name | Action |
|:------:|:----:|:--------------------:|
| 17 | ADDI | Wrapping addition |
| 18 | MULI | Wrapping subtraction |
| 19 | ANDI | Bitand |
| 20 | ORI | Bitor |
| 21 | XORI | Bitxor |
### Bitshifts
- Type BBW
| Opcode | Name | Action |
|:------:|:----:|:-----------------------:|
| 22 | SLI | Unsigned left bitshift |
| 23 | SRI | Unsigned right bitshift |
| 24 | SRSI | Signed right bitshift |
### Comparison
- Comparison works the same as for the register (BBB) variant
| Opcode | Name | Action |
|:------:|:-----:|:-------------------:|
| 25 | CMPI | Signed comparison |
| 26 | CMPUI | Unsigned comparison |
## Register value set / copy
### Copy
- Type BB
- `#0 ← #1`
| Opcode | Name | Action |
|:------:|:----:|:------:|
| 27 | CP | Copy |
### Swap
- Type BB
- Swap #0 and #1
- Zero register rules:
- Both: no-op
- One: Copy zero to the non-zero register
| Opcode | Name | Action |
|:------:|:----:|:------:|
| 28 | SWA | Swap |
### Load immediate
- Type BD
- `#0 ← imm #1`
| Opcode | Name | Action |
|:------:|:----:|:--------------:|
| 29 | LI | Load immediate |
## Memory operations
- Type BBDH
- If the loaded/stored value exceeds one register's size, the access continues into the following registers
### Load / Store
| Opcode | Name | Action |
|:------:|:----:|:---------------------------------------:|
| 30 | LD | `#0 ← [#1 + imm #3], copy imm #4 bytes` |
| 31 | ST | `[#1 + imm #3] ← #0, copy imm #4 bytes` |
## Block copy
- Block copy source and target can overlap
### Memory copy
- Type BBD
| Opcode | Name | Action |
|:------:|:----:|:--------------------------------:|
| 32 | BMC | `[#1] ← [#0], copy imm #2 bytes` |
### Register copy
- Type BBB
- Copy a block of registers to another location (again, overflowing into following registers)
| Opcode | Name | Action |
|:------:|:----:|:--------------------------------:|
| 33 | BRC | `#1 ← #0, copy imm #2 registers` |
## Control flow
### Unconditional jump
- Type D
| Opcode | Name | Action |
|:------:|:----:|:-------------------------------:|
| 34 | JMP | Unconditional, non-linking jump |
### Unconditional linking jump
- Type BBD
| Opcode | Name | Action |
|:------:|:----:|:--------------------------------------------------:|
| 35 | JAL | Save PC past JAL to `#0` and jump at `#1 + imm #2` |
### Conditional jumps
- Type BBD
- Jump at `imm #2` if `#0 <op> #1`
| Opcode | Name | Comparison |
|:------:|:----:|:------------:|
| 36 | JEQ | = |
| 37 | JNE | ≠ |
| 38 | JLT | < (signed) |
| 39 | JGT | > (signed) |
| 40 | JLTU | < (unsigned) |
| 41 | JGTU | > (unsigned) |
### Environment call
- Type N
| Opcode | Name | Action |
|:------:|:-----:|:-------------------------------------:|
| 42 | ECALL | Cause a trap to the host environment |
## Floating point operations
- Type BBB
- `#0 ← #1 <op> #2`
| Opcode | Name | Action |
|:------:|:----:|:--------------:|
| 43 | ADDF | Addition |
| 44 | SUBF | Subtraction |
| 45 | MULF | Multiplication |
### Division-remainder
- Type BBBB
| Opcode | Name | Action |
|:------:|:----:|:-------------------------:|
| 46 | DIRF | Same as for integer `DIR` |
### Fused Multiply-Add
- Type BBBB
| Opcode | Name | Action |
|:------:|:----:|:---------------------:|
| 47 | FMAF | `#0 ← (#1 * #2) + #3` |
### Negation
- Type BB
| Opcode | Name | Action |
|:------:|:----:|:----------:|
| 48 | NEGF | `#0 ← -#1` |
### Conversion
- Type BB
- Signed
- `#0 ← #1 as _`
| Opcode | Name | Action |
|:------:|:----:|:------------:|
| 49 | ITF | Int to Float |
| 50 | FTI | Float to Int |
## Floating point immediate operations
- Type BBD
- `#0 ← #1 <op> imm #2`
| Opcode | Name | Action |
|:------:|:-----:|:--------------:|
| 51 | ADDFI | Addition |
| 52 | MULFI | Multiplication |
# Registers
- There are 255 registers + one zero register (with index 0)
- Reading from zero register yields zero
- Writing to zero register is a no-op
# Memory
- Addresses are 64 bit
- Program should be in the same address space as all other data
- Memory implementation is arbitrary
- Address `0x0` may or may not be valid. Expect compilers to
consider it invalid!
- In case of accessing an invalid address:
- The program shall trap (LoadAccessEx, StoreAccessEx) with the accessed address as a parameter
- The value of the affected register when trapped is undefined
## Recommendations
- If paging is used:
- Leave first page invalid
- Pages should be at least 4 KiB
# Program execution
- The way the program is executed is implementation defined
- The execution strategy is arbitrary, as long as all effects are observable
as if the program had been executed literally, in order.
# Program validation
- An invalid program should cause a runtime error:
- The form of the error is arbitrary; it can be a trap or an interpreter-specified error
- It shall not be handleable from within the program
- Executing invalid opcode should trap
- The program can be validated either before execution or during execution
# Traps
Program should at least implement these traps:
- Environment call
- Invalid instruction exception
- Load address exception
- Store address exception
- Unreachable instruction
and the executing environment should be able to get information about them,
such as the opcode of the invalid instruction or the address of the attempted load/store.
Details about these are left as an implementation detail.
# Assembly
The HoleyBytes assembly format is not formally defined; what follows is just a loose description
of `hbasm` syntax.
- Opcode names correspond to specified opcode names, lowercase (`nop`)
- Parameters are separated by comma (`addi r0, r0, 1`)
- Instructions are separated by either line feed or semicolon
- Registers are represented by `r` followed by the number (`r10`)
- Labels are defined by the label name followed by a colon (`loop:`)
- Labels are referenced simply by their name (`print`)
- Immediates are entered plainly. Negative numbers supported.