jamtis changes #26
jeffro256 commented Dec 3, 2023
- nominal address tag protection for LWs (extra address key)
- flexible view tags
- churning and pocketchange protection for LWs (auxiliary enote records)
Force-pushed from 55cad5e to 3cf1863
For unmodified …, I get: … On this branch, I get: … This means that for the different test modes …
Latest commit adds a unit test case which performs a successful Janus attack with knowledge of one address private key. Will push a fix soon...

Currently making modifications to the "Implementing Seraphis" document to reflect the changes in this PR. Will release a PR to the Seraphis repo when done.
* nominal address tag protection for LWs (extra address key) * flexible view tags * churning and pocketchange protection for LWs (auxiliary enote records) PR implementing changes is here: UkoeHB/monero#26
Jamtis changes are documented here: UkoeHB/Seraphis#6
Force-pushed from d493a6d to 279be2b
Force-pushed from af37a83 to 5ecea4f
Rebased
Force-pushed from f73feab to 1761394
Needs to be updated to reflect the notational changes made in UkoeHB/Seraphis#6.
Force-pushed from 1761394 to 2c96c0e
Did a simple rebase again to fix merge conflicts with the derived view balance key PR. Will tackle updating notation today.
How does the notation look at first glance?
13/72 files done.
One thing you need to do is check that all the KDF transcripts are <= 128 bytes (i.e. one blake2b block); if a transcript exceeds 128 bytes, the cost to hash it increases significantly. You can test this by editing SpKDFTranscript so the constructor prints the domain separator and the .size() method prints its size; then, when you run the seraphis unit test suite, all the sizes will print.
If you find any that crept over the line, the relevant domain separator needs to be shaved down.
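As a hedged illustration of why the 128-byte line matters (the real SpKDFTranscript has its own layout; the helper below is hypothetical): blake2b compresses input in 128-byte blocks, so a transcript that fits in one block needs only a single compression call.

```cpp
#include <cstddef>
#include <string>

// One blake2b compression block is 128 bytes.
constexpr std::size_t BLAKE2B_BLOCK_SIZE = 128;

// Hypothetical check: does a domain separator plus `data_length` bytes of
// keys/inputs fit in a single blake2b block?
bool fits_in_one_block(const std::string &domain_separator, std::size_t data_length)
{
    return domain_separator.size() + data_length <= BLAKE2B_BLOCK_SIZE;
}
```

A long domain separator eats directly into the budget left for key material, which is why an over-budget transcript is fixed by shaving the separator down.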
The idea behind there being two try_find_sp_enotes_in_tx functions is that the first one does the meat of the work while performing as few allocations and copies as possible; all it really needs to allocate is the output vector of bools. It would ideally be used in production code that scans on behalf of users but doesn't need to send out chunk data yet, and just caches some bools persistently (maybe as unsigned ints for compactness).
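A minimal sketch of the "cache bools as unsigned ints for compactness" idea (the helpers below are hypothetical, not the actual API): pack one bit per output into 64-bit words instead of persisting a vector of bools.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Pack per-output scan flags (e.g. from try_find_sp_enotes_in_tx) into
// 64-bit words, one bit per output.
std::vector<std::uint64_t> pack_flags(const std::vector<bool> &flags)
{
    std::vector<std::uint64_t> packed((flags.size() + 63) / 64, 0);
    for (std::size_t i = 0; i < flags.size(); ++i)
        if (flags[i])
            packed[i / 64] |= (std::uint64_t{1} << (i % 64));
    return packed;
}

// Read back the flag for output index i.
bool get_flag(const std::vector<std::uint64_t> &packed, std::size_t i)
{
    return (packed[i / 64] >> (i % 64)) & 1;
}
```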
In regards to the KDF transcripts being smaller than 128 bytes, I applied this diff: … I then ran …
Force-pushed from 66f8605 to 98b443b
Rebased
* SpTxCoinbaseV1: remove block_reward field Not storing/serializing `block_reward` saves us a few bytes on coinbase transactions, and makes it so that you can't initialize a coinbase transaction that has a block reward not matching its output sum.
Force-pushed from 98b443b to c987366
Rebased again
--------- Co-authored-by: UkoeHB <37489173+UkoeHB@users.noreply.github.com>
Wouldn't it be better to use …?
Okay, I reviewed the new changes and compared them to the new Seraphis draft paper, and it is looking good to me. Very nice work!
@DangerousFreedom1984 It might be better to use that notation in order to match the implementation paper. It also might be a little easier to misread when skimming. I don't have a super strong opinion either way.
* direct & compact tx serialization: txs are [de]serialized directly from their classes, and sizes of containers are not serialized if they can be implied.
Implement async wallet scanner.
Adds a new functional test for direct wallet2 -> live RPC daemon
interactions. This sets up a framework to test pointing the
Seraphis wallet lib to a live daemon.
Tests sending and scanning:
- a normal transfer
- a sweep single (0 change)
- to a single subaddress
- to 3 subaddresses (i.e. scanning using additional pub keys)
* scan machine: option to force reorg avoidance increment first pass
- when pointing to a daemon that does not support returning empty
blocks when the client requests too high of a height, we have to
be careful in our scanner to always request blocks below the chain
tip, in every request.
- by forcing the reorg avoidance increment on first pass, we make
sure clients will always include the reorg avoidance increment
when requesting blocks from the daemon, so the client can expect
block requests to *always* return an ok height.
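A rough sketch of what forcing the increment on the first pass could look like (the function name and exact behavior are assumptions, not the actual scan machine code): subtracting the reorg avoidance increment keeps the requested start height safely below the chain tip.

```cpp
#include <cstdint>

// Hypothetical: compute the first-pass start height so requests always
// include the reorg avoidance increment (clamped at genesis).
std::uint64_t first_pass_start_height(std::uint64_t wallet_height,
    std::uint64_t reorg_avoidance_increment)
{
    return wallet_height > reorg_avoidance_increment
        ? wallet_height - reorg_avoidance_increment
        : 0;
}
```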
* core tests: check conversion tool on all legacy enote version types
Still TODO:
- check complete scanning on all enote types
- hit every branch condition for all enote versions
* conn pool mock: epee http client connection pool
- Enables concurrent network requests using the epee http client.
- Still TODO for production:
1) close_connections
2) require the pool respect max_connections
* enote finding context: IN LegacyUnscannedChunk, OUT ChunkData
- finds owned enotes by legacy view scanning a chunk of blocks
* async: function to remove minimum element from token queue
- Useful when we want to remove elements of the token queue in an
order that is different than insertion order.
* async scanner: scan via RPC, fetching & scanning parallel chunks
*How it works*
Assume the user's wallet must start scanning blocks from height
5000.
1. The scanner begins by launching 10 RPC requests in parallel to
fetch chunks of blocks as follows:
```
request 0: { start_height: 5000, max_block_count: 20 }
request 1: { start_height: 5020, max_block_count: 20 }
...
request 9: { start_height: 5180, max_block_count: 20 }
```
2. As soon as any single request completes, the wallet immediately
parses that chunk.
- This is all in parallel. For example, as soon as request 7
responds, the wallet immediately begins parsing that chunk in
parallel to any other chunks it's already parsing.
3. If a chunk does not include a total of max_block_count blocks,
and the chunk is not the tip of the chain, this means there was a
"gap" in the chunk request. The scanner launches another parallel
RPC request to fill in the gap.
- This gap can occur because the server will **not** return a
chunk of blocks greater than 100mb (or 20k txs) via the
`/getblocks.bin` RPC endpoint
([`FIND_BLOCKCHAIN_SUPPLEMENT_MAX_SIZE`](https://github.com/monero-project/monero/blob/053ba2cf07649cea8134f8a188685ab7a5365e5c/src/cryptonote_core/blockchain.cpp#L65))
- The gap is from `(req.start_height + res.blocks.size())` to
`(req.start_height + req.max_block_count)`.
4. As soon as the scanner finishes parsing the chunk, it
immediately submits another parallel RPC request.
5. In parallel, the scanner identifies a user's received (and
spent) enotes in each chunk.
- For example, as soon as request 7 responds and the wallet
parses it, the wallet scans that chunk in parallel to any other
chunks it's already scanning.
6. Once a single chunk is fully scanned locally, the scanner
launches a parallel task to fetch and scan the next chunk.
7. Once the scanner reaches the tip of the chain (the terminal
chunk), the scanner terminates.
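The gap computation in step 3 can be sketched as follows (types and names are hypothetical simplifications of the actual scanner): if the daemon returned fewer blocks than requested and we are not at the chain tip, a follow-up request is built from the gap formula above.

```cpp
#include <cstdint>
#include <optional>

// Hypothetical stand-in for the /getblocks.bin request parameters.
struct ChunkRequest
{
    std::uint64_t start_height;
    std::uint64_t max_block_count;
};

// If the response left a gap, return the request that fills it; otherwise
// return nullopt. The gap spans
// [start_height + blocks_returned, start_height + max_block_count).
std::optional<ChunkRequest> gap_request(const ChunkRequest &req,
    std::uint64_t blocks_returned, bool at_chain_tip)
{
    if (at_chain_tip || blocks_returned >= req.max_block_count)
        return std::nullopt;  // no gap to fill
    return ChunkRequest{req.start_height + blocks_returned,
        req.max_block_count - blocks_returned};
}
```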
*Some technical highlights*
- The wallet scanner is backwards compatible with existing daemons
(though it would need to point to an updated daemon to realize the
perf speed-up).
- On error cases such as the daemon going offline, the same wallet
errors that wallet2 uses (that the wallet API expects) are
propagated up to the higher-level Seraphis lib.
- The implementation uses an http client connection pool (reusing
the epee http client) to support parallel network requests
([related](seraphis-migration/wallet3#58)).
- A developer using the scanner can "bring their own blocks/network
implementation" to the scanner by providing a callback function of
the following type as a param to the async scanner constructor:
`std::function<bool(const cryptonote::COMMAND_RPC_GET_BLOCKS_FAST::request, cryptonote::COMMAND_RPC_GET_BLOCKS_FAST::response)>`
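A hedged sketch of the "bring your own blocks/network implementation" idea, using simplified stand-in types rather than the real cryptonote RPC structs (the real callback takes COMMAND_RPC_GET_BLOCKS_FAST request/response types):

```cpp
#include <cstdint>
#include <functional>

// Hypothetical stand-ins for the cryptonote RPC request/response types,
// just to illustrate the callback shape the scanner could accept.
struct GetBlocksRequest  { std::uint64_t start_height = 0; };
struct GetBlocksResponse { std::uint64_t current_height = 0; };

using GetBlocksFunc =
    std::function<bool(const GetBlocksRequest&, GetBlocksResponse&)>;

// A caller could back the scanner with any transport, e.g. an in-memory
// stub for tests instead of a live daemon:
GetBlocksFunc make_stub(std::uint64_t chain_height)
{
    return [chain_height](const GetBlocksRequest &req, GetBlocksResponse &res)
    {
        res.current_height = chain_height;
        return req.start_height <= chain_height;  // "success" if in range
    };
}
```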
---------
Co-authored-by: jeffro256 <jeffro256@tutanota.com>
--------- Co-authored-by: SNeedlewoods <sneedlewoods_1@protonmail.com>
This PR removes "universal"-style indexing for legacy CLSAG rings and replaces it with a reference set scheme that uses (amount, index in amount) indexing pairs to reference on-chain enotes. This is the same method that Cryptonote txs use, and is how the current Monero Core LMDB database is referenced. Doing things this way means that the database will not have to be re-indexed, saving at a very minimum 1.6 GB (100M on-chain enotes * 16 bytes for extra table keys) of storage space, as well as an expensive database migration involving moving all existing enote data to a new table. We change the MockLedgerContext to support this indexing scheme. In practice, serialized txs under this method shouldn't take up much more space than pre-PR if compressed cleverly, and assuming most ring members will be RingCT enotes. We also add LegacyEnoteOriginContext for contextualized enote records so we can better keep track of scanned legacy enotes under the legacy indexing scheme. Co-authored-by: SNeedlewoods <sneedlewoods_1@protonmail.com>
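A toy illustration of the (amount, index in amount) reference scheme described above (types are hypothetical, not the actual ledger code): each on-chain enote is addressed by its amount and its position within the list of enotes sharing that amount, the same way the existing LMDB output table is keyed.

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// An enote reference in the Cryptonote style: (amount, index in amount).
struct LegacyEnoteReference
{
    std::uint64_t amount;
    std::uint64_t index_in_amount;
};

// Toy ledger index keyed like the LMDB output table: amount -> list of
// enotes with that amount (RingCT enotes all sit under amount 0).
using LedgerIndex = std::map<std::uint64_t, std::vector<std::string>>;

const std::string &lookup(const LedgerIndex &ledger, const LegacyEnoteReference &ref)
{
    return ledger.at(ref.amount).at(ref.index_in_amount);
}
```

Because lookups reuse the existing keying, no re-indexing or table migration is needed, which is the storage saving the PR description quantifies.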
Force-pushed from f35a8ed to 9003a71
* nominal address tag protection for LWs (extra address key) * flexible view tags * churning and pocketchange protection for LWs (auxiliary enote records)
Force-pushed from 9003a71 to 537ba29
@jeffro256 Sorry the rebase broke this again. Sadly it looks like the work here may end up never being merged (largely my fault for not finishing review earlier this year).