Don’t overdo it! You’ll need to write a parser …
Source Code
Tips when writing the regexes for the tokens
- Begin by explicitly setting the same priority for every regex
- When you add more regexes at that same priority, the conflicting ones become easy to identify
- Then, for each conflicting regex, first try to resolve the conflict by changing the regex itself; otherwise, assign it a different priority
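In logos, this amounts to a `priority` argument on the token attributes. A minimal sketch, assuming the `logos` crate and illustrative token names (`Ident`, `Mov`): the keyword `mov` also matches the identifier regex, so the conflict is resolved by giving it a higher priority.

```rust
use logos::Logos;

// Hypothetical token set for an assembly-like language.
#[derive(Logos, Debug, PartialEq)]
enum Token {
    // Matches any identifier, including "mov"; at equal priority the two
    // patterns below would conflict on the input "mov".
    #[regex(r"[a-zA-Z_][a-zA-Z0-9_]*", priority = 2)]
    Ident,

    // The keyword wins the conflict via a higher explicit priority.
    #[token("mov", priority = 3)]
    Mov,
}
```

This declarative fragment only shows where the priorities live; it needs the `logos` crate in `Cargo.toml` to compile.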
Tips when optimizing the regexes
- Create a big and representative `test.asm` file
- When you’re satisfied with the token stream generated by your lexer, store the output in a file `test.token_stream`
- Then, while optimizing the regexes, run `cargo run > temp.token_stream` and `diff test.token_stream temp.token_stream` to see the differences between the two versions
- This can be further automated into an integration test
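The steps above can be sketched as a small shell session. The real commands are shown as comments (they assume your binary prints the token stream for `test.asm`); the comparison step is simulated with fixed file contents so it can run standalone:

```shell
# Store the reference output once you are happy with it:
#   cargo run > test.token_stream
# After each regex change, regenerate and compare:
#   cargo run > temp.token_stream
#   diff test.token_stream temp.token_stream

# Simulated here with identical fixed contents:
printf 'Ident("mov")\nNumber(42)\n' > test.token_stream
printf 'Ident("mov")\nNumber(42)\n' > temp.token_stream

# diff exits 0 (and prints nothing) when the streams match.
diff test.token_stream temp.token_stream && echo "token streams match"
```

An empty diff means the optimized regexes still produce the same token stream; any output pinpoints exactly which tokens changed.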
todo: update the logos book (swap the callbacks and extras sections for better coherence)