Interesting question; we don’t normally optimise for this use case.
The dominant factor here will probably be RocksDB, since each database creates its own RocksDB instance, and RocksDB is relatively memory-hungry per instance in our experience. If you want to run many databases, you should reduce the per-database cache size to something like 50 MB, and budget roughly 50 MB times the number of databases in total, so about 10 GB for 200 small databases.
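As a back-of-the-envelope check on that sizing, here is a small sketch. The 50 MB-per-database figure is the rule of thumb from above, and `recommended_total_cache_bytes` is just a hypothetical helper name, not an API from any library:

```python
def recommended_total_cache_bytes(num_dbs: int, per_db_mb: int = 50) -> int:
    """Total block-cache budget, assuming ~per_db_mb MB per small database."""
    return num_dbs * per_db_mb * 1024 * 1024

# 200 small databases at 50 MB each comes to roughly 10 GB in total.
total = recommended_total_cache_bytes(200)
print(total / 1024**3)  # ~9.77 GiB, i.e. roughly 10 GB
```

You would then use a figure like this as the overall cache budget when configuring the server, rather than letting each database claim a full default-sized cache.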
I think RocksDB should also be smart enough to evict data for databases that haven’t been used in a while, so you may not need more than that amount of cache for a long while.