November 2022

Discord AMA 2022-11-25

Q: It was mentioned previously that a litigation checkpoint occurs every 24 hours (and 12 hours eventually). If a node is offline for 2 minutes at hour 23:59 after the previous check, does it risk getting slashed? When does the countdown start? Do we have 12-24 hours to respond to a second check after an initial failed check to give the user a chance to respond?

Short answer - no, you won’t get slashed so easily. Your node would have to be offline for a significant amount of time (hours) for it to miss submitting proofs on chain during the proof phase for an asset (which itself lasts hours). The node will attempt to submit all proofs while it is online.

There's no "set checkpoint" , rather a period during which nodes can attempt to submit proofs multiple times. Slashing will be introduced gradually and will apply to nodes that commit to storing (on chain) and then do not send a proof (in the proof phase) in the expected proof period (lasting days) as a part of the commit/proof mechanism. The period will start with a long setting (days) and will be gradually lowered to prevent any potential issues.

Q: Does slashing go live the moment we hit mainnet or do we get some time to adjust ourselves?

Slashing will not hit mainnet immediately with the V6.0.0 release (on day one) and will be introduced over the course of the next releases, so there will be some time to adapt.

Q: How do you retire a node? Is that function included in Houston?

There will be instructions available for node "retiring", and it will be possible to do part of the process through Houston. Retiring a node will be a process that lasts for some time though, as your node needs to complete its already committed activities and disengage from any new service provisioning (i.e. not take on any new service requests). We'll provide detailed instructions in the docs.

Q: Is TRAC per kb-epoch the new "lambda"? Can we get more specifics on how that number is set, or is it just manually determined by each full node runner? Any suggested number by the team or suggested number generated by the system?

No, lambda will no longer exist in V6. Now each node is responsible for setting its own compensation ask (in TRAC per kb-epoch). A good way to understand what a reasonable ask is will be to observe how the rest of the nodes on the network have set theirs. One approach would be, for example, to set your ask to the average of the asks of other nodes (which would correspond to the average market price for the service).
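As a rough illustration of that averaging approach (not an official recommendation), the calculation could look like the sketch below; note that fetchNetworkAsks is a hypothetical placeholder, since how peer asks are actually exposed is not specified here:

```typescript
// Hypothetical sketch: fetchNetworkAsks stands in for whatever mechanism exposes
// other nodes' asks - it is not a real DKG API call.
async function fetchNetworkAsks(): Promise<number[]> {
  // Replace with a real source of peer asks (TRAC per kb-epoch).
  return [0.9, 1.0, 1.1, 1.2];
}

async function suggestedAsk(): Promise<number> {
  const asks = await fetchNetworkAsks();
  // Simple average of the observed asks, i.e. a rough "market price" for the service.
  return asks.reduce((sum, ask) => sum + ask, 0) / asks.length;
}

suggestedAsk().then((ask) =>
  console.log(`Suggested ask: ${ask.toFixed(2)} TRAC per kb-epoch`)
);
```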

Q: Can you have 2 instances of the same node running at the same time in case 1 goes down? (high availability node)

Yes, instructions for a highly available setup will also be available. Expect more updates on the topic in the official docs and Discord.

Yes, nodes in V6 will implement a pruning mechanism similar to the one previously available (removing expired asset information and data which is no longer likely to be used for receiving rewards by that specific node).

Q: Are node runners able to disable opportunistic storage of assertion replicas and store only the assets they’ve won?

Nodes are incentivised to host the assertion replicas that are in the closest proximity (in address space) to them. Each node operator will be able to set how their node acts as this distance grows, essentially deciding how "opportunistic" it wants to be (which influences the chances of receiving rewards).
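As a minimal sketch of that knob (the distance representation and the cutoff setting are assumptions here, not actual OriginTrail node configuration), the decision could conceptually look like this:

```typescript
// Hypothetical sketch: both the distance value and the operator-chosen cutoff
// are illustrative, not the node's real settings.
function shouldStoreReplica(
  distanceToAssertion: bigint, // address-space distance from this node to the assertion
  maxDistance: bigint          // operator cutoff: a larger value = a more "opportunistic" node
): boolean {
  // Only take on replicas that fall within the distance the operator is willing to cover.
  return distanceToAssertion <= maxDistance;
}

console.log(shouldStoreReplica(1_000n, 5_000n)); // true  - close enough, store it
console.log(shouldStoreReplica(9_000n, 5_000n)); // false - too far, skip
```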

Q: What is the split of reward between delegator and node runner?

The percentage of the reward shared with delegators will be determined by the node runner (via settings in Houston).

Q: Is there an updated or rough timeline of when various features of V6 - such as delegated staking/other staking - will be available?

Staking for node runners will be available with the V6.0.0 release. The delegation mechanism will be added in the coming releases. Follow Discord/Twitter for more info on the upcoming roadmap.

Q: 500x improved capacity/throughput is great, but if we are planning to run the world's supply chain among other things, won't we hit a bottleneck very quickly? How far away are further enhancements, and are we confident that down the track the ODN has the capacity to have most of the world's knowledge on it?

We’re quite confident the increased scalability will be able to support growth, due to the system design. Scalability is a wide topic and is observed through several layers. The DKG, with its multichain capabilities, enables an extension of capacity through additional chains (the blockchains are still the main bottleneck btw, and the OT Parachain adds quite a bit of throughput capacity). On the networking side, DKG V6 is able to withstand really high throughput through the new features introduced (such as the sharding table neighborhood implementation), which have extensively surpassed the existing p2p network tools in capacity/speed (e.g. libp2p). An important aspect is also the decoupling of OT node responsibilities to dkg-clients as well as triple stores, which will now be easily expandable via any standardized (RDF) implementation (such as Amazon Neptune and others), which nodes needing extra storage capacity can take advantage of. Finally, expanding scalability usually also means reaching limits in real-life usage and solving problems iteratively on the go, which is what we have been doing since the inception of OriginTrail in 2013. The metrics we've observed on the V6 testnet have been really promising, and we can't wait to see the production system running on mainnet soon.

Q: Does the team talk about quantum computing and how the OriginTrail technology will fit in when the time arises?

Absolutely. In the case of the DKG, quantum computing is mostly relevant for hashing algorithms and cryptography. In the V6 implementation we have introduced the ability to support multiple hashing algorithms, which makes future evolution and changes much easier. Quantum computing is a wide topic, but we are following the latest research, just as we do in the field of AI.
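To illustrate why supporting multiple hashing algorithms eases future changes, here is a minimal, purely illustrative sketch of an algorithm registry keyed by ID; the registry shape and the SHA-256 default are assumptions, not the DKG's actual interface:

```typescript
import { createHash } from "crypto";

type HashFn = (data: Buffer) => Buffer;

// Purely illustrative registry: the IDs and the SHA-256 choice are assumptions.
const hashRegistry: Record<number, HashFn> = {
  0: (data) => createHash("sha256").update(data).digest(),
  // A future (e.g. post-quantum-safe) algorithm could be registered under a new ID
  // without touching any of the calling code.
};

function hashWith(algorithmId: number, data: Buffer): Buffer {
  const fn = hashRegistry[algorithmId];
  if (!fn) throw new Error(`Unsupported hashing algorithm: ${algorithmId}`);
  return fn(data);
}

console.log(hashWith(0, Buffer.from("hello")).toString("hex"));
```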

Q: How many publishers are expected to unbundle previously bundled V5 jobs upon V6 launch?

Due to the value that V6 DKG features and assets bring to the table, we expect a high number of previously engaged publishers to republish and unbundle datasets to take advantage of those features.

Q: Will we get new RFCs before mainnet?

Several RFCs are in the works and pending release after the V6 launch on mainnet (including the one for collators). More info will be available in the updated roadmap, along with other exciting new updates (e.g. delegation).

Q: Why did the team elect for the NFTs produced by the DKG to be ERC-721 and not ERC-1155? I believe I read in the RFC those NFTs would use the ERC-721 standard? I could be wrong, but I'm just speaking off the top of my head here.

You are right, this wasn't precisely explained in the RFC - both ERC-721 and ERC-1155 will be supported. There will be different asset types the DKG will support (including support for adding new types), which will extend existing and future NFT standards. We definitely don't want to lock ourselves into one interface (that would run contrary to both the neutrality and usability principles).

Q: I remember discussion around creating a new standard for NFTs. Any discussions around that taking place?

Regarding the NFT standard discussions - we've been active in several of those, particularly through the Ontochain project, with the goal of getting a common ontology that can be used across all semantic systems (both Web3 and Web2). More to come in future RFCs.

Q: Can we have an updated detailed flow chart of how job winners are picked?

Details of the process are available in OT-RFC-14, where 3 criteria are described. Assets will be stored for X amount of epochs (chosen by the publisher). Rewards will be collected and spread equally between the best-scored R0 nodes (mentioned in OT-RFC-14) every epoch. The “best scored” list is determined by the smart contract, strictly according to the proof submissions of nodes, based on a fractional formula involving address-space distance and stake, with distance being the stronger factor. The formula is essentially equivalent to log(stake / distance^2), meaning that a smaller distance is more “powerful” than a high stake. More info will be available in the docs.
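As a minimal numeric sketch of why distance dominates (the stake and distance values below are made up, and the exact units/normalisation used on chain are not specified here):

```typescript
// Illustrative only: OT-RFC-14 describes the score as roughly log(stake / distance^2);
// the concrete values and normalisation below are assumptions for the example.
interface Candidate {
  stake: number;    // hypothetical TRAC staked
  distance: number; // hypothetical normalised address-space distance to the asset
}

const score = ({ stake, distance }: Candidate): number =>
  Math.log(stake / distance ** 2);

const closeSmallStake: Candidate = { stake: 50_000, distance: 0.1 };
const farLargeStake: Candidate = { stake: 500_000, distance: 0.5 };

console.log(score(closeSmallStake).toFixed(2)); // ~15.42
console.log(score(farLargeStake).toFixed(2));   // ~14.51 -> the closer node wins despite 10x less stake
```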

Q: ETA for Ledger support for OTP, will we get it at launch?

The OTP Ledger app is on the way and has just been released by the Zondax team! You can check it out here (it's open source).

https://github.com/Zondax/ledger-origintrail

It is currently in the testing & review process, which is in the hands of the Ledger team. As soon as more information is available from them we will share it.

You also have the possibility to use your Ledger with the $TRAC token, as it is an ERC-20 token. Basically, a new app is needed to support $OTP.

Q: With the recent talk about privacy concerns and MetaMask (Infura) collecting IP addresses, does the team plan to support other types of clients on future platforms? Directly injected web3, WalletConnect, BlockWallet and Talisman are a few good ones I have come across.

Absolutely, there is no specific preference for MetaMask. Especially given the recent privacy concerns, this will become a higher priority.

Q: Is there any way to see our teleported TRAC tokens in our Substrate wallet? Currently, we can only see our teleported TRAC through our EVM wallet after importing the contract address.

Since TRAC is now a native Parachain asset, you are able to see your TRAC balance through the polkadot.js portal in the Assets section, after importing your account into the interface and navigating to Network -> Assets -> Balances.

Q: Once Ledger is supported, does it mean OTP and teleported TRAC will both be accessible from either EVM (Metamask) and/or substrate (dot js) ledger addresses?

The answer is yes. You will be able to send both TRAC and OTP via Ledger. However, the first version of the app focuses on OTP support, as TRAC support is already available via Ledger (by connecting your Ledger wallet via apps like MetaMask).

Q: Can teleported TRAC be sent to an address without at least 1 OTP in it? Will that teleported TRAC be lost, or will it be retrievable once the user sends at least 1 OTP to that wallet?

It is not possible to send TRAC to an EVM address on the OriginTrail Parachain that doesn’t support it, meaning the address needs to be mapped to a Substrate wallet that holds the existential amount of 1 OTP. That means TRAC can only be distributed to mapped wallet addresses. If you haven’t mapped your wallet, the TRAC distribution will be put on hold, as it is not possible to execute it.

Q: Is there a deadline for the mapping? Many community members are waiting for the Ledger integration first before mapping their wallets.

There's no system deadline for mapping. With regard to the teleportation process, distributions will be active throughout the batches and possibly longer. We expect Ledger support to be available well before the last batch of teleporting occurs.

Q: Mainnet still expected early December? Or closer to Xmas?

We're doing final preps for the mainnet release - expect details on dates next week!

Q: Is there a specific time frame when teleported TRAC gets distributed after mapping? Is distribution time bound?

The distribution is executed on a weekly basis (for completed teleport batches) and it usually takes place on Thursdays or Fridays.
