sqs-extended-client-go is an extension to the Amazon SQS client that enables sending and receiving messages up to 2 GB by storing large payloads in Amazon S3. It is very similar to the SQS Extended Client for Java, but its API has been adjusted to be more Gopher friendly.

The Extended Client also ships with some extra functionality for handling SQS events in Lambda. None of this impacts the underlying Amazon SQS client: everything that is possible with the Amazon SQS client is possible with the Extended Client.
```
go get -u github.com/co-go/sqs-extended-client-go/v2
```

```go
import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/sqs"

	sqsextendedclient "github.com/co-go/sqs-extended-client-go/v2"
)

const queueURL = "https://sqs.amazonaws.com/12345/testing-queue"

func main() {
	ctx := context.Background()

	// initialize AWS Config
	awsCfg, _ := config.LoadDefaultConfig(
		ctx,
		config.WithRegion("us-east-1"),
	)

	// create a new sqsextendedclient with some options
	sqsec, _ := sqsextendedclient.New(
		sqs.NewFromConfig(awsCfg),
		s3.NewFromConfig(awsCfg),
		// use "testing-bucket" for large messages
		sqsextendedclient.WithS3BucketName("testing-bucket"),
		// set the threshold to 1 KB
		sqsextendedclient.WithMessageSizeThreshold(1024),
	)

	// send a message to the queue
	sqsec.SendMessage(ctx, &sqs.SendMessageInput{
		MessageBody: aws.String("really interesting message!"),
		QueueUrl:    aws.String(queueURL),
	})

	// retrieve messages from the specified queue
	resp, _ := sqsec.ReceiveMessage(ctx, &sqs.ReceiveMessageInput{
		QueueUrl: aws.String(queueURL),
	})

	for _, m := range resp.Messages {
		// do some processing on each message...

		// delete message after processing. can also be
		// done more efficiently with 'DeleteMessageBatch'
		sqsec.DeleteMessage(ctx, &sqs.DeleteMessageInput{
			QueueUrl:      aws.String(queueURL),
			ReceiptHandle: m.ReceiptHandle,
		})
	}
}
```

When using an SQS queue as an event source for a Lambda function, the Lambda will be invoked on the configured interval with a batch of messages. Some of those messages may need to be fetched from S3 if they exceeded the queue's size limit and were sent with this (or another) SQS Extended Client. This is the use case for RetrieveLambdaEvent. Much like ReceiveMessage, it parses any extended messages in the event, retrieves their payloads from S3, and returns a new event with the full payloads. If none of the records match the extended format, no action is taken!
```go
import (
	"context"
	"os"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/sqs"

	sqsextendedclient "github.com/co-go/sqs-extended-client-go/v2"
)

type Environment struct {
	queueURL string
	sqsec    *sqsextendedclient.Client
}

func (e *Environment) HandleRequest(
	ctx context.Context,
	evt events.SQSEvent,
) error {
	parsedEvt, _ := e.sqsec.RetrieveLambdaEvent(ctx, &evt)

	for _, record := range parsedEvt.Records {
		// do some processing

		// delete message after processing. can also be done
		// more efficiently with 'DeleteMessageBatch'. see
		// note below about processing extended events.
		e.sqsec.DeleteMessage(ctx, &sqs.DeleteMessageInput{
			QueueUrl:      &e.queueURL,
			ReceiptHandle: &record.ReceiptHandle,
		})
	}

	return nil
}

func main() {
	// initialize AWS Config
	awsCfg, _ := config.LoadDefaultConfig(
		context.Background(),
		config.WithRegion("us-east-1"),
	)

	// create a new sqsextendedclient
	sqsec, _ := sqsextendedclient.New(
		sqs.NewFromConfig(awsCfg),
		s3.NewFromConfig(awsCfg),
	)

	// struct to share initialized client across invocations
	e := Environment{
		queueURL: os.Getenv("QUEUE_URL"),
		sqsec:    sqsec,
	}

	lambda.Start(e.HandleRequest)
}
```

**Note**
When processing SQS events in a Lambda function, if the invocation doesn’t return an error (indicating success), AWS will delete the SQS messages from the queue to prevent re-processing. This is a good thing! However, due to the special way extended messages are deleted, if AWS deletes an extended message that has a linked payload in S3, AWS will NOT delete the S3 payload.
There are several ways to solve this (S3 lifecycle policies, etc.), but the recommended approach to ensure the entire message is always cleaned up after processing is to explicitly call DeleteMessage (or DeleteMessageBatch).
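The examples above delete messages one at a time, but SQS batch APIs (including DeleteMessageBatch) accept at most 10 entries per request, so receipt handles must be chunked before batch deletion. A minimal sketch of that chunking, using a hypothetical helper that is not part of this library:

```go
package main

import "fmt"

// chunkReceiptHandles splits receipt handles into batches of at most
// `size` entries. Each batch would then be turned into entries for a
// single DeleteMessageBatch call. (Hypothetical helper, not part of
// the library's API.)
func chunkReceiptHandles(handles []string, size int) [][]string {
	var batches [][]string
	for len(handles) > 0 {
		n := size
		if len(handles) < n {
			n = len(handles)
		}
		batches = append(batches, handles[:n])
		handles = handles[n:]
	}
	return batches
}

func main() {
	handles := make([]string, 23)
	for i := range handles {
		handles[i] = fmt.Sprintf("handle-%d", i)
	}

	// 23 handles split into batches of 10, 10, and 3
	for _, b := range chunkReceiptHandles(handles, 10) {
		fmt.Println(len(b))
	}
}
```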
By default, when extended messages are deleted from SQS, their corresponding S3 payloads are also deleted. You can disable this behavior and leave object cleanup to S3 lifecycle policies or an external job using WithSkipDeleteS3Payloads(true):
```go
sqsec, _ := sqsextendedclient.New(
	sqs.NewFromConfig(awsCfg),
	s3.NewFromConfig(awsCfg),
	sqsextendedclient.WithSkipDeleteS3Payloads(true),
)
```

This flag applies to both DeleteMessage and DeleteMessageBatch. Batch deletions only remove S3 objects for entries that SQS reports as successfully deleted.
If an extended message's S3 payload cannot be found when calling ReceiveMessage or RetrieveLambdaEvent, the error is returned to the caller by default. Sometimes this is expected (for example, the payload was removed by a lifecycle policy). You can opt in to swallowing these errors and discarding such messages with the WithDiscardOrphanedExtendedMessages flag:
```go
sqsec, _ := sqsextendedclient.New(
	sqs.NewFromConfig(awsCfg),
	s3.NewFromConfig(awsCfg),
	sqsextendedclient.WithDiscardOrphanedExtendedMessages(true),
)
```

When enabled, if reading from S3 returns a NoSuchKey error, the client will best-effort delete the message from SQS and omit it from the returned set. Other errors are unaffected and are still returned to the caller. When disabled (the default), all errors, including NoSuchKey, are returned to the caller.
