A signed URL contains the timestamp of when the signing happened. Because of this, every signed URL is different, so by default they cannot be cached. But in some cases the same file is downloaded over and over; for example, if a page shows the avatar of the current user, the image is downloaded on every page reload.
Since the root of the problem is that the dates differ, the solution is to round the signing time. This ensures that URLs signed close to each other for the same file are identical, which enables the browser and proxies to cache them.
For example, let's round the signature times down to the last 5-minute mark. Every URL signed between 12:00 and 12:04 will have an effective signature date of 12:00, every URL signed between 12:05 and 12:09 will be rounded to 12:05, and so on. This scheme allows caching signed URLs for at most 5 minutes by making sure the backend generates the same URL between two 5-minute marks.
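The rounding step above can be sketched in a few lines of Python (a minimal illustration; the function name is ours, not part of any AWS SDK):

```python
from datetime import datetime

def round_to_5min(dt: datetime) -> datetime:
    """Round a timestamp down to the previous 5-minute mark."""
    return dt.replace(minute=dt.minute - dt.minute % 5, second=0, microsecond=0)

round_to_5min(datetime(2024, 1, 1, 12, 4, 30))  # -> 2024-01-01 12:00:00
round_to_5min(datetime(2024, 1, 1, 12, 7, 10))  # -> 2024-01-01 12:05:00
```

Any two signing requests that land in the same 5-minute window now see the same timestamp, so they produce the same signature.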
Why does this work?
The signature is influenced only by the input data; the algorithm itself is entirely offline, so you could even compute it on paper. This means you can change the inputs, and as long as you end up with a "believable" URL, S3 will accept it.
What are the input parameters? The bucket, the object key, expiration, a few AWS-dependent parameters, the Access Key ID, the Secret Access Key, and the current date:
https://bucket.s3-eu-west-1.amazonaws.com/ebook.pdf
  ?X-Amz-Algorithm=AWS4-HMAC-SHA256
  &X-Amz-Credential=...
  &X-Amz-Date=201...
  &X-Amz-Expires=900
  &X-Amz-Signature=...
  &X-Amz-SignedHeaders=host
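Putting the pieces together, here is a sketch of Signature Version 4 query-string presigning with a rounded date, using only the Python standard library. It is an illustration of the signing flow, not production code: in practice you would use the AWS SDK, and the bucket, key, and credentials below are placeholders.

```python
import hashlib
import hmac
from datetime import datetime
from urllib.parse import quote

def round_to_5min(dt: datetime) -> datetime:
    """Round a timestamp down to the previous 5-minute mark."""
    return dt.replace(minute=dt.minute - dt.minute % 5, second=0, microsecond=0)

def presign_get(bucket, key, region, access_key, secret_key, signed_at, expires=900):
    """Build a SigV4 presigned GET URL; signed_at is rounded so URLs
    generated within the same 5-minute window are byte-identical."""
    t = round_to_5min(signed_at)
    amz_date = t.strftime("%Y%m%dT%H%M%SZ")
    datestamp = t.strftime("%Y%m%d")
    host = f"{bucket}.s3-{region}.amazonaws.com"
    credential = f"{access_key}/{datestamp}/{region}/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": credential,
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    # Query parameters must be URL-encoded and sorted for the canonical request.
    canonical_query = "&".join(
        f"{quote(k, safe='')}={quote(v, safe='')}" for k, v in sorted(params.items())
    )
    canonical_request = "\n".join([
        "GET",
        "/" + quote(key),
        canonical_query,
        f"host:{host}\n",   # canonical headers, each terminated by \n
        "host",             # signed headers
        "UNSIGNED-PAYLOAD", # presigned URLs don't sign the body
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256",
        amz_date,
        f"{datestamp}/{region}/s3/aws4_request",
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])
    # Derive the signing key with the HMAC chain over date, region, service.
    def sign(key_bytes, msg):
        return hmac.new(key_bytes, msg.encode(), hashlib.sha256).digest()
    k = sign(("AWS4" + secret_key).encode(), datestamp)
    for part in (region, "s3", "aws4_request"):
        k = sign(k, part)
    signature = hmac.new(k, string_to_sign.encode(), hashlib.sha256).hexdigest()
    return f"https://{host}/{quote(key)}?{canonical_query}&X-Amz-Signature={signature}"
```

Because the only varying input is the rounded date, calling `presign_get` at 12:01 and at 12:04 yields the exact same URL, while a call at 12:06 yields a new one.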