Your cluster should keep at least 20% of its storage space free, or more than 20 GB free; otherwise basic write operations such as adding documents and creating indexes can start to fail. You can check this by running GET _cat/allocation?v in the Kibana Dev Tools.
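The free-space rule above can be sketched as a small helper. This is a minimal illustration of the threshold logic only; parsing the actual `_cat/allocation` output is left out, and the function name and gigabyte inputs are assumptions for the example:

```python
def has_safe_headroom(total_gb: float, used_gb: float) -> bool:
    """Return True if free space exceeds 20% of total capacity or 20 GB.

    Hypothetical helper encoding the rule of thumb above; feed it the
    disk.total and disk.used figures reported by GET _cat/allocation?v.
    """
    free_gb = total_gb - used_gb
    # Either condition is enough: a percentage floor for small disks,
    # an absolute floor for very large ones.
    return free_gb > 0.2 * total_gb or free_gb > 20.0
```

For example, a 100 GB node with 85 GB used has only 15 GB (15%) free, so both conditions fail and writes are at risk.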
A single shard should ideally be between 10 GB and 50 GB.
Large shards can make it difficult to recover from failure, but because each shard uses some amount of CPU and memory, having too many small shards can also cause performance issues and out-of-memory errors. In other words, shards should be small enough that the underlying instance can handle them, but not so small that they place needless strain on the hardware. You can check the indices and their shard counts by running GET _cat/indices?v in the Kibana Dev Tools.
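Given the 10–50 GB band above, a primary-shard count for a new or re-indexed index can be estimated as a quick sketch. The function name and the 30 GB midpoint target are assumptions for illustration, not an official formula:

```python
import math

def recommended_primary_shards(index_size_gb: float,
                               target_shard_gb: float = 30.0) -> int:
    """Suggest a primary-shard count keeping shards near the 10-50 GB band.

    Hypothetical helper: divides the expected index size by an assumed
    per-shard target (30 GB, the middle of the band) and rounds up.
    """
    return max(1, math.ceil(index_size_gb / target_shard_gb))
```

For a 300 GB index this suggests 10 primaries (~30 GB each); a 5 GB index stays at a single shard rather than being split needlessly.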
When JVM memory pressure is high, garbage collection runs more frequently, and it is a CPU-intensive process. The resulting high CPU utilisation can in turn cause search rejections while the cluster is under strain.
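One way to watch for this is the nodes stats API, which reports heap usage per node; a minimal Dev Tools request (the filter_path trims the response to just the heap percentage) might look like:

```
GET _nodes/stats/jvm?filter_path=nodes.*.jvm.mem.heap_used_percent
```

Sustained heap usage near the top of the range is a sign that garbage collection, and with it CPU load, will climb.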
Shard count can be reduced by deleting or closing indices, or by re-indexing into larger indices; refer link and link.
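As one hedged example of reducing shard count, the shrink index API can re-pack an existing index into fewer primaries. The index names below are hypothetical; the source index must first be made read-only, its shards must be co-located on one node, and the new shard count must be a factor of the old one:

```
PUT /my-index/_settings
{
  "index.blocks.write": true
}

POST /my-index/_shrink/my-shrunk-index
{
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 1
  }
}
```

Re-indexing with the _reindex API is the alternative when the target shard count is not a factor of the original.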
Finally, we can also delete old or unnecessary indices to free up space and improve performance, and then update the sharding strategy.