Q: It was mentioned previously that a litigation checkpoint occurs every 24 hours (and 12 hours eventually). If a node is offline for 2 minutes at hour 23:59 after the previous check, does it risk getting slashed? When does the countdown start? Do we have 12-24 hours to respond to a second check after an initial failed check to give the user a chance to respond?
Short answer - no, you won't get slashed so easily. Your node would have to be offline for a significant amount of time (hours), long enough that it fails to submit proofs on chain during the proof phase for an asset (which itself lasts hours). The node will attempt to submit all proofs while it's online.
There's no "set checkpoint", rather a period during which nodes can attempt to submit proofs multiple times. Slashing will be introduced gradually and will apply to nodes that commit (on chain) to storing an asset and then do not send a proof during the expected proof period (lasting days), as part of the commit/proof mechanism. The period will start with a long setting (days) and will be gradually lowered to prevent any potential issues.
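The commit/proof logic described above can be sketched as follows. This is a simplified illustration, not the actual node implementation: the function name, the phase length, and the representation of submissions as timestamps are all assumptions for the example.

```python
# Hypothetical sketch of the commit/proof window logic: a node is only
# at risk if it submits NO proof during the whole proof phase, not
# because of a brief offline moment.

PROOF_PHASE_HOURS = 24  # assumed length; the real setting starts at days

def at_risk_of_slashing(proof_submission_hours: list) -> bool:
    # The node attempts submission repeatedly while online; a single
    # successful submission inside the phase is enough to be safe.
    return not any(0 <= h <= PROOF_PHASE_HOURS for h in proof_submission_hours)

print(at_risk_of_slashing([]))      # True  - no proof submitted in the phase
print(at_risk_of_slashing([23.9]))  # False - one submission is enough
```

A two-minute outage therefore does not matter: only missing the entire (multi-hour or multi-day) proof window would put a node at risk.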
Slashing will not hit mainnet immediately with the V6.0.0 release (on day one); it will be introduced over the course of the next releases, so there will be some time to adapt.
There will be instructions available for node "retiring", and it will be possible to do part of the process through Houston. Retiring a node will be a process that lasts for some time though, as your node needs to complete its already committed activities and disengage from any new service provisioning (i.e. not take on any new service requests). We'll provide detailed instructions in the docs.
No, lambda will no longer exist in V6. Each node is now responsible for setting its own compensation ask (in TRAC per kb-epoch). A good way to gauge a reasonable ask is to observe how the rest of the nodes on the network have set theirs. One approach would be, for example, to set your ask to the average of other nodes' asks (which would correspond to the average market price for the service).
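The averaging approach above can be sketched in a few lines. The numbers here are hypothetical; in practice the asks would be read from the network rather than hard-coded:

```python
# Hypothetical asks (in TRAC per kb-epoch) observed from other nodes.
# In practice these values would be read from the network, not hard-coded.
observed_asks = [0.8, 1.0, 1.2, 0.9, 1.1]

# Set our own ask to the average of the observed asks,
# i.e. roughly the average market price for the service.
my_ask = sum(observed_asks) / len(observed_asks)
print(my_ask)  # 1.0
```

Other strategies (median, or undercutting the average slightly to win more service requests) are equally possible; the ask is entirely up to the node runner.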
Yes, highly available setup instructions will also be available. Expect more updates on the topic in the official docs and on Discord.
Yes, nodes in V6 will implement a pruning mechanism similar to the one previously available (removing expired asset information and data that is no longer likely to earn rewards for that specific node).
Nodes are incentivised to host the assertion replicas that are in the closest proximity (in address space) to them. Each node operator will be able to configure how their node acts as this distance "grows", essentially deciding how "opportunistic" it wants to be (which influences its chances of receiving rewards).
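To illustrate the idea of "proximity in address space", here is a minimal sketch. It assumes a Kademlia-style XOR distance over SHA-256 hashes, which is a common pattern in p2p networks; the actual DKG hashing and distance scheme may differ, and the IDs are made up for the example:

```python
import hashlib

def address(value: str) -> int:
    # Hypothetical: derive a position in address space from a SHA-256 hash.
    # The actual DKG uses its own hashing scheme.
    return int.from_bytes(hashlib.sha256(value.encode()).digest(), "big")

def distance(node_id: str, assertion_id: str) -> int:
    # XOR distance, as used in Kademlia-style address spaces (an assumption here).
    return address(node_id) ^ address(assertion_id)

# A node is incentivised to host assertions whose distance to it is smallest.
assertion = "assertion-123"
nodes = ["node-a", "node-b", "node-c"]
closest = min(nodes, key=lambda n: distance(n, assertion))
```

The "opportunism" setting then amounts to how large a distance a node is still willing to serve, trading a wider coverage against lower per-asset reward chances.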
The % of the reward shared with delegators will be determined by the node runner (via settings in Houston).
Staking for node runners will be available with the V6.0.0 release. The delegation mechanism will be added in the coming releases. Follow Discord/Twitter for more info on the upcoming roadmap.
Q: 500x improved capacity/throughput is great, but if we are planning to run the world's supply chain among other things, won't we hit a bottleneck very quickly? How far away are further enhancements, and are we confident that down the track the ODN has the capacity to hold most of the world's knowledge?
We’re quite confident that the increased scalability will be able to support growth, due to the system design. Scalability is a wide topic, observed across several layers. The DKG, with its multichain capabilities, enables an extension of capacity through additional chains (the blockchains are still the main bottleneck, by the way, and the OT Parachain adds quite a bit of throughput capacity). On the networking side, DKG V6 is able to withstand really high throughputs through the newly introduced features (such as the sharding table neighborhood implementation), which have extensively surpassed the existing p2p network tools in capacity/speed (e.g. libp2p). An important aspect is also the decoupling of OT node responsibilities to dkg-clients as well as triple stores, which will now be easily expandable via any standardized (RDF) implementation (such as Amazon Neptune and others, which nodes needing storage capacity can take advantage of). Finally, expanding scalability usually also means reaching limits in real-life usage and solving problems iteratively on the go, which is what we have been doing since the inception of OriginTrail in 2013. The metrics we've observed on the V6 testnet have been really promising, and we can't wait to see the production system running on mainnet soon.
Absolutely. Quantum computing is mostly relevant to hashing algorithms and cryptography in the case of the DKG. In the V6 implementation we have introduced the ability to support multiple hashing algorithms, which makes future evolution and changes much easier. Quantum computing is a wide topic, but we are following the latest research, just as we do in AI.
Due to the value V6 DKG features and assets will bring to the table, we expect a high number of previously engaged publishers to republish and unbundle datasets to benefit from the new features.
Several RFCs are in the works and pending release after the V6 mainnet launch (including the one for collators). More info will be available in the updated roadmap, along with other exciting updates (e.g. delegation).
You are right, this wasn't explained very precisely in the RFC - both ERC-721 and ERC-1155 will be supported. There will be different asset types the DKG will support (including support for adding new types), which will extend existing and future NFT standards. We definitely don't want to lock ourselves into one interface (that would run contrary to both neutrality and usability principles).
Regarding the NFT standard discussions - we've been active in several of those, particularly through the Ontochain project, with the goal of getting a common ontology that can be used across all semantic systems (both Web3 and Web2). More to come in future RFCs
Details of the process are available in OT-RFC-14, where 3 criteria are described. Assets will be stored for X epochs (chosen by the publisher). Rewards will be collected and spread equally among the best-scored R0 nodes (mentioned in RFC-14) every epoch. The "best scored" list is determined by the smart contract, strictly according to the proof submissions of nodes, based on a formula involving address-space distance and stake, with distance being the stronger factor. The formula is essentially equivalent to log(stake / distance^2), meaning that a smaller distance is more "powerful" than a high stake. More info will be available in the docs.
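The "distance beats stake" property of that formula can be seen with a quick numerical check. The stake and distance values below are made up purely for illustration; only the formula log(stake / distance^2) comes from the answer above:

```python
import math

def score(stake: float, distance: float) -> float:
    # Score formula from the answer above: log(stake / distance^2).
    # Distance enters squared, so it dominates the ranking.
    return math.log(stake / distance ** 2)

# A node twice as close with half the stake still outscores
# a farther node with more stake (hypothetical numbers).
near_small_stake = score(stake=50_000, distance=1.0)   # log(50000) ~ 10.82
far_large_stake = score(stake=100_000, distance=2.0)   # log(25000) ~ 10.13
print(near_small_stake > far_large_stake)  # True
```

Doubling the distance costs a node a factor of 4 in the log argument, while stake would have to quadruple to compensate, which is why proximity is the stronger factor.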
The OTP Ledger app is on the way and has just been released by the Zondax team! You can check it out here (it's open source).
It is currently in the testing & review process, which is in the hands of the Ledger team. As soon as more information is available from them we will share it.
You also have the possibility to use your Ledger with the $TRAC token, as it is an ERC-20 token. Basically, the new app is only needed to support $OTP.
Absolutely, there is no specific preference for MetaMask. Especially given the recent privacy concerns, this will become a higher priority.
Since TRAC is now a native Parachain asset, you are able to see your TRAC balance through the polkadot.js portal: after importing your account into the interface, navigate to Network -> Assets -> Balances.
The answer is yes. You will be able to send both TRAC and OTP via Ledger. However, the first version of the app focuses on OTP support, as TRAC support is already available via Ledger (by connecting your Ledger wallet through apps like MetaMask).
It is not possible to send TRAC to an EVM address on OriginTrail Parachain that doesn't support it: the address needs to be mapped to a Substrate wallet holding the existential amount of 1 OTP. That means TRAC can only be distributed to mapped wallet addresses. If you haven't mapped your wallet, your TRAC distribution will be on hold, as it is not possible to execute.
There's no system deadline for mapping. With regard to the teleportation process, distributions will be active throughout the batches and possibly longer. We expect Ledger support to be available much sooner than the last batch of teleporting occurs.
We're doing final preps for the mainnet release - expect details on dates next week!
The distribution is executed on a weekly basis (for completed teleport batches) and it usually takes place on Thursdays or Fridays.