Package net.i2p.router.networkdb.kademlia

Interface Summary
DataStore  
KBucket A group, without inherent ordering, of keys a certain distance away from a local key, using XOR as the distance metric (see the distance sketch after this list)
SelectionCollector Visit kbuckets, gathering matches
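
A minimal sketch of the XOR distance metric mentioned in the KBucket entry above (and used throughout the classes below). This is an illustration only, not code from the package; the class and method names are hypothetical:

    import java.math.BigInteger;

    // Hypothetical illustration of XOR distance over 32-byte SHA-256 hashes.
    public class XorDistanceSketch {

        // XOR the two hashes byte by byte and read the result as an unsigned value.
        static BigInteger distance(byte[] a, byte[] b) {
            byte[] d = new byte[a.length];
            for (int i = 0; i < d.length; i++)
                d[i] = (byte) (a[i] ^ b[i]);
            return new BigInteger(1, d); // signum 1: treat the bytes as an unsigned magnitude
        }
    }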
 

Class Summary
ExpireLeasesJob Periodically search through all leases to find expired ones, failing those keys and firing up a new search for each (in case we want it later, might as well preemptively fetch it)
ExpireRoutersJob Go through the routing table and pick routers that are out of date, but don't expire routers we're actively connected to.
ExploreJob Search for a particular key iteratively until we either find a value, we run out of peers, or the bucket the key belongs in has sufficient values in it.
ExploreKeySelectorJob Go through the kbuckets and generate random keys for routers in buckets not yet full, attempting to keep a pool of keys we can explore with (at least one per bucket)
FloodfillDatabaseLookupMessageHandler Build a HandleDatabaseLookupMessageJob whenever a DatabaseLookupMessage arrives
FloodfillDatabaseStoreMessageHandler Create a HandleDatabaseStoreMessageJob whenever a DatabaseStoreMessage arrives
FloodfillMonitorJob Simple job to monitor the floodfill pool.
FloodfillNetworkDatabaseFacade  
FloodfillPeerSelector This is where we implement semi-Kademlia with the floodfills, by selecting floodfills closest to a given key for searches and stores.
FloodfillStoreJob This extends StoreJob to fire off a FloodfillVerifyStoreJob after success.
FloodfillVerifyStoreJob Send a netDb lookup to a random floodfill peer; if the key is found, great, but if they reply that they don't know it, queue up another store of the key to a random floodfill peer (via FloodfillStoreJob)
FloodOnlyLookupMatchJob  
FloodOnlyLookupSelector  
FloodOnlyLookupTimeoutJob  
FloodOnlySearchJob Try sending a search to some floodfill peers, failing completely if we don't get a match from one of those peers, with no fallback to the kademlia search. Exception (a semi-exception, since we still fail completely without fallback): if we don't know any floodfill peers, we ask a couple of peers at random, who will hopefully reply with some floodfill keys.
FloodSearchJob Try sending a search to some floodfill peers, but if we don't get a successful match within half the allowed lookup time, give up and start querying through the normal (kademlia) channels.
FloodThrottler Count how often we have recently flooded a key. This offers basic DoS protection but is not a complete solution (see the throttle-counter sketch after this list).
HandleFloodfillDatabaseLookupMessageJob Handle a lookup for a key received from a remote peer.
HandleFloodfillDatabaseStoreMessageJob Receive DatabaseStoreMessage data and store it in the local net db
HarvesterJob Simple job to try to keep our peer references up to date by aggressively requerying them every few minutes.
HashDistance Moved from PeerSelector
KademliaNetworkDatabaseFacade Kademlia based version of the network database
KBucketImpl  
KBucketSet In-memory storage of buckets sorted by the XOR metric from the local router's identity, with bucket N containing routers BASE^N through BASE^(N+1) away, up through a distance of 2^256 (since we use SHA256; see the bucket-index sketch after this list).
LocalHash Pull the caching used only by KBucketImpl out of Hash and put it here.
LookupThrottler Count how often we have recently received a lookup request with the reply specified to go to a peer/TunnelId pair.
MessageWrapper Method and class for garlic encrypting outbound netdb traffic, including management of the ElGamal/AES tags
MessageWrapper.WrappedMessage Wrapper so that we can keep track of the key and tags for later notification to the SKM
PeerSelector Mostly unused, see overrides in FloodfillPeerSelector
PersistentDataStore Write out keys to disk when we get them and periodically read ones we don't know about into memory, with newly read routers also added to the routing table.
RepublishLeaseSetJob Run periodically for each locally created leaseSet to cause it to be republished if the client is still connected.
SearchJob Search for a particular key iteratively until we either find a value or we run out of peers. Note that this is rarely if ever used directly, and is primarily used by the ExploreJob extension.
SearchMessageSelector Check to see if the message is a reply from the peer regarding the current search
SearchReplyJob  
SearchState Data related to a particular search
SearchUpdateReplyFoundJob Called after a match to a db search is found
SingleLookupJob Ask the peer who sent us the DSRM for the RouterInfos.
SingleSearchJob Ask a single peer for a single key.
StartExplorersJob Fire off search jobs for random keys from the explore pool, up to MAX_PER_RUN at a time.
StoreJob  
StoreMessageSelector Check to see if the message is a reply from the peer regarding the current store
StoreState  
TransientDataStore  
XORComparator Help sort Hashes in relation to a base key using the XOR metric (a sketch of this ordering follows)
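
A minimal sketch of the ordering XORComparator describes: hashes are compared by their XOR with a fixed base key, most significant byte first, so the smallest element is the one nearest the base key. Selecting the floodfills closest to a key (as in FloodfillPeerSelector) is then a sort, or partial sort, with such a comparator. Illustration only; the class name and the toy main() are hypothetical:

    import java.util.Arrays;
    import java.util.Comparator;

    // Hypothetical comparator ordering equal-length hashes by XOR distance to a base key.
    class XorComparatorSketch implements Comparator<byte[]> {
        private final byte[] base;

        XorComparatorSketch(byte[] base) {
            this.base = base;
        }

        public int compare(byte[] lhs, byte[] rhs) {
            // Compare the XORed values byte by byte, most significant byte first.
            for (int i = 0; i < base.length; i++) {
                int l = (lhs[i] ^ base[i]) & 0xff;
                int r = (rhs[i] ^ base[i]) & 0xff;
                if (l != r)
                    return l - r;
            }
            return 0;
        }

        public static void main(String[] args) {
            // Toy usage with 2-byte "hashes" just to show the ordering.
            byte[] base = { 0x10, 0x00 };
            byte[][] peers = { { 0x7f, 0x00 }, { 0x11, 0x00 }, { 0x10, 0x01 } };
            Arrays.sort(peers, new XorComparatorSketch(base));
            // peers[0] is now the hash nearest the base key by XOR distance.
        }
    }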
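A minimal bucket-index sketch for the placement described in the KBucketSet entry above, assuming BASE = 2, so that bucket N holds peers whose XOR distance from the local key lies in [2^N, 2^(N+1)). Illustration only; the class and method names are hypothetical:

    import java.math.BigInteger;

    // Hypothetical helper choosing a bucket index, assuming BASE = 2.
    class BucketIndexSketch {

        static int bucketIndex(byte[] localKey, byte[] peerKey) {
            byte[] xor = new byte[localKey.length];
            for (int i = 0; i < xor.length; i++)
                xor[i] = (byte) (localKey[i] ^ peerKey[i]);
            BigInteger d = new BigInteger(1, xor);          // XOR distance as an unsigned value
            return d.signum() == 0 ? 0 : d.bitLength() - 1; // distance 0 (our own key) maps to bucket 0
        }
    }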
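A minimal throttle-counter sketch for the "count how often we have recently seen this key" pattern behind FloodThrottler and LookupThrottler, assuming a simple counter map that a periodic job clears; the names, the threshold, and the clearing strategy are hypothetical:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;

    // Hypothetical throttle counter: isThrottled(key) turns true once a key has been
    // seen more than LIMIT times since the last clear().
    class ThrottleCounterSketch {
        private static final int LIMIT = 4; // hypothetical threshold
        private final ConcurrentHashMap<String, AtomicInteger> counts = new ConcurrentHashMap<>();

        boolean isThrottled(String key) {
            AtomicInteger c = counts.computeIfAbsent(key, k -> new AtomicInteger());
            return c.incrementAndGet() > LIMIT;
        }

        // Call from a periodic job to start a new counting window.
        void clear() {
            counts.clear();
        }
    }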