For the past week or so I've been setting up a Sitecore 9.3 XP Scaled environment on Azure PaaS. We're using Redis as the private and shared session state provider, and I've been seeing this error in Application Insights for the Content Delivery role:

SetAndReleaseItemExclusive => StackExchange.Redis.RedisServerException: Key has MOVED from Endpoint 10.100.10.100:15000 and hashslot 9446 but CommandFlags.NoRedirect was specified - redirect not followed for EVAL. IOCP: (Busy=0,Free=1000,Min=2,Max=1000), WORKER: (Busy=5,Free=32762,Min=50,Max=32767), Local-CPU: n/a
   at StackExchange.Redis.ConnectionMultiplexer.ExecuteSyncImpl[T](Message message, ResultProcessor`1 processor, ServerEndPoint server)
   at StackExchange.Redis.RedisBase.ExecuteSync[T](Message message, ResultProcessor`1 processor, ServerEndPoint server)
   at StackExchange.Redis.RedisDatabase.ScriptEvaluate(String script, RedisKey[] keys, RedisValue[] values, CommandFlags flags)
   at Sitecore.SessionProvider.Redis.StackExchangeClientConnection.<>c__DisplayClass17_0.<Eval>b__0()
   at Sitecore.SessionProvider.Redis.StackExchangeClientConnection.OperationExecutor(Func`1 redisOperation)
   at Sitecore.SessionProvider.Redis.StackExchangeClientConnection.RetryLogic(Func`1 redisOperation)
   at Sitecore.SessionProvider.Redis.StackExchangeClientConnection.Eval(String script, String[] keyArgs, Object[] valueArgs)
   at Sitecore.SessionProvider.Redis.RedisConnectionWrapper.Set(String sessionId, ISessionStateItemCollection data, Int32 sessionTimeout)
   at Sitecore.SessionProvider.Redis.RedisSessionStateProvider.SetAndReleaseItemExclusive(HttpContext context, String id, SessionStateStoreData item, Object lockId, Boolean newItem)

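A MOVED response only ever comes from a Redis deployment running in cluster mode, so the exception itself points at the cache configuration rather than the Sitecore code. If you want to confirm what the client is actually talking to, a quick console sketch with StackExchange.Redis (the same client library the provider uses, as the stack trace shows) will report the server type per endpoint. The connection string below is a placeholder for the one in your ConnectionStrings.config:

// Minimal diagnostic sketch: connect with the session cache's connection string
// (placeholder values below) and report whether each endpoint is in cluster mode.
using System;
using StackExchange.Redis;

class RedisClusterCheck
{
    static void Main()
    {
        using (var muxer = ConnectionMultiplexer.Connect(
            "your-cache.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False"))
        {
            foreach (var endpoint in muxer.GetEndPoints())
            {
                var server = muxer.GetServer(endpoint);
                // ServerType.Cluster means the cache is sharded, which Sitecore's
                // Redis session provider does not support.
                Console.WriteLine($"{endpoint}: {server.ServerType}");
            }
        }
    }
}

Against a sharded cache this should report Cluster, which lines up with the MOVED redirection in the error message.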
It turns out that the Azure Cache for Redis instance had been configured with a shard count of 2 on the Cluster size blade. Sitecore's documentation for configuring private and shared session state with Redis clearly states:

Note
Sitecore does not support Redis Cluster.

The fix was simply to reduce the shard count to 1; the errors stopped once the Redis instance had finished scaling down, which took a few minutes, as noted on the Cluster size blade.
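To catch this kind of configuration drift before it reaches the Content Delivery role again, the same ServerType check can be wrapped into a small start-up guard. This is only a sketch under a couple of assumptions: the connection string name ("redis.sessions") and where you call it from (e.g. Application_Start) will need adjusting to your own setup.

using System;
using System.Configuration;
using System.Linq;
using StackExchange.Redis;

public static class RedisClusterGuard
{
    // Call once at start-up to fail fast if the session cache has been
    // switched back to cluster mode.
    public static void AssertNotClustered(string connectionStringName = "redis.sessions")
    {
        // Assumed connection string name; use whatever your ConnectionStrings.config defines.
        var connectionString = ConfigurationManager
            .ConnectionStrings[connectionStringName]?.ConnectionString;
        if (string.IsNullOrEmpty(connectionString))
        {
            return; // nothing to check
        }

        using (var muxer = ConnectionMultiplexer.Connect(connectionString))
        {
            var clustered = muxer.GetEndPoints()
                .Select(endpoint => muxer.GetServer(endpoint))
                .Any(server => server.ServerType == ServerType.Cluster);

            if (clustered)
            {
                throw new InvalidOperationException(
                    $"The Redis cache behind '{connectionStringName}' is running in cluster mode, " +
                    "which the Sitecore session provider does not support.");
            }
        }
    }
}

Failing fast here is a lot easier to diagnose than intermittent SetAndReleaseItemExclusive errors turning up in Application Insights.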