Why do "Known" peers get a lot less after restart ?

I2P router issues
AntibodyMama
Posts: 29
Joined: 18 Jun 2024 20:45

Why do "Known" peers get a lot less after restart ?

Post by AntibodyMama »

In the left-hand sidebar there is a "Known" peers count (screenshot: Screenshot_20241012_121751.png).

Yesterday the known peers were nearly 3000+; after I restarted my Linux machine, they are now, as you can see above, around 700.

The known peers, as I understand it, are other computers using I2P.
Shouldn't those peers be stored in a database, so that the next time I restart they don't need to be rediscovered again?
Shouldn't the count still be 3000+, as I left it before the restart, and count up from there?
lgillis
Posts: 165
Joined: 20 Oct 2018 12:52

Re: Why do "Known" peers get a lot less after restart ?

Post by lgillis »

The known connections are stored in a database. After a restart, some addresses of the previously known nodes have changed, some nodes have left the network in the meantime, and others have been added. In addition, the number of actively used nodes is not the same as the sum of all nodes known during the previous online session.
Luther H. Gillis · Private Investigator · Discreet & Confidential
AntibodyMama
Posts: 29
Joined: 18 Jun 2024 20:45

Re: Why do "Known" peers get a lot less after restart ?

Post by AntibodyMama »

lgillis wrote: 12 Oct 2024 11:49 The known connections are stored in a database. After a restart, some addresses of the previously known nodes have changed, some nodes have left the network in the meantime, and others have been added. In addition, the number of actively used nodes is not the same as the sum of all nodes known during the previous online session.
Thanks much
zzz
Posts: 184
Joined: 31 Mar 2018 13:15

Re: Why do "Known" peers get a lot less after restart ?

Post by zzz »

Yeah, we take the opportunity at restart to remove old router infos, and double-check that the others are still around during the first hour of uptime. You'll see the number go down once at startup and a second time at one hour of uptime.
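For anyone curious, the two-stage trim zzz describes can be sketched roughly like this. This is a simplified illustration, not the actual Java I2P code; the class and field names are invented:

```java
import java.util.*;
import java.util.concurrent.TimeUnit;

// Sketch of the two-stage trim: stage 1 at startup drops RouterInfos
// older than a cutoff; stage 2 (around one hour of uptime) drops peers
// that failed re-verification. Names are illustrative, not from Java I2P.
public class KnownPeersTrim {
    static class RouterInfo {
        final String hash;
        final long publishedMs;   // when the RouterInfo was published
        final boolean verified;   // did the peer respond during the first hour?
        RouterInfo(String hash, long publishedMs, boolean verified) {
            this.hash = hash;
            this.publishedMs = publishedMs;
            this.verified = verified;
        }
    }

    // Stage 1: keep only RouterInfos newer than maxAgeMs.
    static List<RouterInfo> trimOld(List<RouterInfo> known, long nowMs, long maxAgeMs) {
        List<RouterInfo> kept = new ArrayList<>();
        for (RouterInfo ri : known)
            if (nowMs - ri.publishedMs <= maxAgeMs)
                kept.add(ri);
        return kept;
    }

    // Stage 2: keep only peers that answered a liveness check.
    static List<RouterInfo> trimUnverified(List<RouterInfo> known) {
        List<RouterInfo> kept = new ArrayList<>();
        for (RouterInfo ri : known)
            if (ri.verified)
                kept.add(ri);
        return kept;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        long day = TimeUnit.DAYS.toMillis(1);
        List<RouterInfo> known = Arrays.asList(
            new RouterInfo("a", now - day / 2, true),   // fresh and reachable
            new RouterInfo("b", now - 3 * day, true),   // stale: dropped at startup
            new RouterInfo("c", now - day / 4, false)); // fresh but unreachable
        List<RouterInfo> afterStartup = trimOld(known, now, day);
        List<RouterInfo> afterOneHour = trimUnverified(afterStartup);
        System.out.println(afterStartup.size()); // 2
        System.out.println(afterOneHour.size()); // 1
    }
}
```

So a drop from 3000 to 700 at startup doesn't mean data was lost, only that stale entries were pruned.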
AntibodyMama
Posts: 29
Joined: 18 Jun 2024 20:45

Re: Why do "Known" peers get a lot less after restart ?

Post by AntibodyMama »

Thanks much for the info.

When I start I2P, it asks me to reseed and the known peers are very low, approximately 58.

Yesterday I had 5000+ known peers.

How can I prevent the deletion of all those working peers, so that I don't occasionally have to reseed?

I just want I2P to check whether the peers from yesterday are online; it's impossible that all 5000+ of them are offline, at least a couple hundred will still be online.
AntibodyMama
Posts: 29
Joined: 18 Jun 2024 20:45

Re: Why do "Known" peers get a lot less after restart ?

Post by AntibodyMama »

(Update)
AntibodyMama wrote: 06 Nov 2024 16:09 When I start I2P, it asks me to reseed and the known peers are very low, approximately 58.
Today I started I2P with 3138 known peers. I think the request for reseeding happens when the router cannot connect to the network normally; yesterday I got a Symmetric NAT error because the port was not open.
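That would fit with the router only asking for a reseed when the number of usable known peers falls below some small threshold, so with ~3000 peers on disk the check passes and nothing is requested. A rough sketch of that kind of decision follows; the class name and the threshold value are my own illustration, not the real Java I2P logic:

```java
// Illustrative reseed decision: ask the reseed servers only when the
// local netDb has too few RouterInfos to bootstrap from on its own.
public class ReseedCheck {
    // Hypothetical threshold; the real router uses its own value.
    static final int MIN_KNOWN_PEERS = 100;

    static boolean needsReseed(int knownPeers) {
        return knownPeers < MIN_KNOWN_PEERS;
    }

    public static void main(String[] args) {
        System.out.println(needsReseed(58));   // true: below the threshold, ask for a reseed
        System.out.println(needsReseed(3138)); // false: plenty of peers, no reseed needed
    }
}
```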
delta4chat
Posts: 1
Joined: 04 Jan 2025 13:31

Re: Why do "Known" peers get a lot less after restart ?

Post by delta4chat »

I also ran into this problem. It's caused by current I2P routers being too aggressive about deleting the RouterInfo *.dat files in the local netDb directory, so I applied this hacky patch to my own running router (it makes deletion of the .dat files an opt-in option, so they are no longer deleted from disk by default).


diff --git a/router/java/src/net/i2p/router/networkdb/kademlia/PersistentDataStore.java b/router/java/src/net/i2p/router/networkdb/kademlia/PersistentDataStore.java
index 33c937bca..844cd5bfd 100644
--- a/router/java/src/net/i2p/router/networkdb/kademlia/PersistentDataStore.java
+++ b/router/java/src/net/i2p/router/networkdb/kademlia/PersistentDataStore.java
@@ -58,10 +58,12 @@ public class PersistentDataStore extends TransientDataStore {
     private final ReadJob _readJob;
     private volatile boolean _initialized;
     private final boolean _flat;
+    private final boolean _removeDat;
     private final int _networkID;

     private final static int READ_DELAY = 2*60*1000;
     private static final String PROP_FLAT = "router.networkDatabase.flat";
+    private static final String PROP_REMOVE_DAT = "router.networkDatabase.removeDat";
     static final String DIR_PREFIX = "r";
     private static final String B64 = Base64.ALPHABET_I2P;
     private static final int MAX_ROUTERS_INIT = SystemVersion.isSlow() ? 2000 : 8000;
@@ -73,6 +75,7 @@ public class PersistentDataStore extends TransientDataStore {
         super(ctx);
         _networkID = ctx.router().getNetworkID();
         _flat = ctx.getBooleanProperty(PROP_FLAT);
+        _removeDat = ctx.getBooleanProperty(PROP_REMOVE_DAT);
         _dbDir = getDbDir(dbDir);
         _facade = facade;
         _readJob = new ReadJob();
@@ -143,7 +146,7 @@ public class PersistentDataStore extends TransientDataStore {
     @Override
     public DatabaseEntry remove(Hash key, boolean persist) {
-        if (persist) {
+        if (_removeDat && persist) {
             _writer.remove(key);
         }
         return super.remove(key);
     }
However, while it helped somewhat, it wasn't very effective. That's because the router's "DB Read Job" has a limit on the number of RIs it will load from disk, at most 4,000 .dat files (or 1,000 if your device is detected as slow), and it deletes the remaining files it considers redundant once that limit is exceeded. So I patched out this behavior as well.


diff --git a/router/java/src/net/i2p/router/networkdb/kademlia/PersistentDataStore.java b/router/java/src/net/i2p/router/networkdb/kademlia/PersistentDataStore.java
index 35484dc5b..33c937bca 100644
--- a/router/java/src/net/i2p/router/networkdb/kademlia/PersistentDataStore.java
+++ b/router/java/src/net/i2p/router/networkdb/kademlia/PersistentDataStore.java
@@ -59,9 +59,8 @@ public class PersistentDataStore extends TransientDataStore {
     private volatile boolean _initialized;
     private final boolean _flat;
     private final int _networkID;

     private final static int READ_DELAY = 2*60*1000;
     private static final String PROP_FLAT = "router.networkDatabase.flat";
     static final String DIR_PREFIX = "r";
     private static final String B64 = Base64.ALPHABET_I2P;
-    private static final int MAX_ROUTERS_INIT = SystemVersion.isSlow() ? 1000 : 4000;

@@ -459,11 +488,6 @@
                 Collections.shuffle(toRead, _context.random());
                 int i = 0;
                 for (File file : toRead) {
-                    // Take the first 4000 good ones, delete the rest
-                    if (i >= MAX_ROUTERS_INIT && !_initialized) {
-                        file.delete();
-                        continue;
-                    }
                     Hash key = getRouterInfoHash(file.getName());
                     if (key != null) {
                         ReadRouterJob rrj = new ReadRouterJob(file, key);
However, there may be other code like this that I haven't found, given the sheer amount of Java code in I2P. Whenever I find code that can cause a significant reduction in the number of known nodes across a reboot or in other cases, I patch it out.

Finally, I recommend that the I2P developers not be so aggressive about removing local netDb peers (this is bad for floodfill routers and causes a lot of unnecessary reseed requests), and instead remove them only after several failed reachability tests. This avoids unnecessary reseed requests and reduces the load on the reseed servers, so that they only need to handle new routers and other essential reseed requests, rather than requests caused by mistaken deletions.
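The "remove only after several failed tests" policy suggested above could look something like the following. This is just a sketch with invented names and a made-up tolerance, not a real patch against Java I2P:

```java
import java.util.*;

// Sketch: evict a peer from the local netDb only after it fails
// several consecutive reachability probes, instead of on the first miss.
public class LazyEviction {
    static final int MAX_FAILURES = 3; // hypothetical tolerance

    private final Map<String, Integer> failures = new HashMap<>();
    private final Set<String> known = new HashSet<>();

    void add(String peer) { known.add(peer); }

    // Record one probe result; returns true if the peer was evicted.
    boolean recordProbe(String peer, boolean reachable) {
        if (reachable) {
            failures.remove(peer);    // any success resets the counter
            return false;
        }
        int f = failures.merge(peer, 1, Integer::sum);
        if (f >= MAX_FAILURES) {
            known.remove(peer);       // only now delete the RouterInfo
            failures.remove(peer);
            return true;
        }
        return false;
    }

    int knownCount() { return known.size(); }

    public static void main(String[] args) {
        LazyEviction db = new LazyEviction();
        db.add("peerA");
        System.out.println(db.recordProbe("peerA", false)); // false (1 failure)
        System.out.println(db.recordProbe("peerA", false)); // false (2 failures)
        System.out.println(db.recordProbe("peerA", false)); // true  (evicted)
        System.out.println(db.knownCount());                // 0
    }
}
```

With a policy like this, a single missed probe (or a restart) would never wipe out thousands of otherwise-working peers.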