There are two main use cases for messaging in most applications: lightweight eventing and messaging between components, and processing large streams of data. If AWS is your main platform, it makes sense to consider the AWS-native services to handle the messaging. For simple peer-to-peer messaging, where the exact order of messages is not mandated and a single application has to be notified, SQS is a good solution. It is easy to set up and manage, and it can be used for simple event notification or in a request/response paradigm. Where a number of applications have to be notified of an event, SNS is a good solution, as it supports the concept of topics. SNS can deliver notifications to a wide variety of notification mechanisms (email, SMS, Lambda, REST) and can interface with SQS queues for notifications to individual applications. If your application has a use case that needs a heavy amount of streaming, then Kinesis is an easy-to-use platform for processing large streams of data, and Kinesis Analytics can be used to perform Complex Event Processing and find interesting events within the streaming data. For what most applications need, Kinesis is preferable to Apache Kafka: it is easy to set up and administer, and you will avoid many of the operational complexities that usually plague a Kafka deployment.
SQS (Simple Queue Service)
SQS provides the most basic way to perform interprocess communication – a simple queue. With SQS, you simply send a piece of data (a Message) into the tail end of the queue. One or more consumers will read messages from the front of the queue. If there are multiple consumers reading from the same queue, then any one of these consumers can receive the message that is at the front of the queue, and a given message is delivered to only one of them. Note that standard queues guarantee at-least-once delivery, so a message can occasionally be delivered more than once; exactly-once processing requires a FIFO queue.
SQS comes with two different kinds of queues. The Standard queue imposes no ordering of messages. The FIFO queue, which is only available in certain AWS regions, guarantees the ordering of the messages within the queue.
There are two different ways that an SQS consumer can poll for messages. The first way is short polling: the consumer looks at the queue, reads any messages that are there, and returns immediately, even if the queue is empty. The other way is long polling: the consumer waits up to a configurable number of seconds (at most 20) for messages to appear in the queue.
Since AWS charges you for each SQS request, it may be more economical for your consumer to do long polling, since there will be fewer requests if messages are put into the queue on an infrequent basis.
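The difference between the two polling styles can be sketched with boto3, the Python AWS SDK. The queue URL is a placeholder, and real calls require AWS credentials; the helper that builds the request parameters captures the essential difference (WaitTimeSeconds).

```python
def receive_params(queue_url, long_poll=True, wait_seconds=20):
    """Build receive_message arguments for long or short polling.

    WaitTimeSeconds=0 is a short poll (returns immediately, even when the
    queue is empty); any value up to the 20-second maximum is a long poll.
    """
    return {
        "QueueUrl": queue_url,
        "MaxNumberOfMessages": 10,
        "WaitTimeSeconds": wait_seconds if long_poll else 0,
    }

def drain_queue(queue_url):
    """Read and delete messages until a long poll comes back empty."""
    import boto3  # deferred: only needed when actually polling
    sqs = boto3.client("sqs")
    while True:
        resp = sqs.receive_message(**receive_params(queue_url))
        messages = resp.get("Messages", [])
        if not messages:
            break  # the long poll timed out with nothing to read
        for msg in messages:
            print(msg["Body"])
            # Messages must be deleted explicitly, or they reappear in the
            # queue after the visibility timeout expires.
            sqs.delete_message(QueueUrl=queue_url,
                               ReceiptHandle=msg["ReceiptHandle"])
```

Because the long-polling consumer makes one request per 20-second window instead of hammering the queue, it is usually the cheaper option for low-traffic queues.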
A Message contains a payload, which can be any amount of data up to 256 KB. A message can also carry custom attributes, which are name/value pairs. Each queue has a message retention period, which acts as a Time-to-Live (TTL): if a message has sat in the queue without being consumed past this period, it is automatically deleted from the queue.
If you want to have messages up to 2 GB in size, AWS provides a Java-based library (the Amazon SQS Extended Client Library) that uses S3 as the message storage.
You can have CloudWatch monitor certain metrics of a queue (such as the number of visible messages) and automatically scale out your consumers by adding additional instances:
aws autoscaling put-scaling-policy --policy-name my-sqs-scaleout-policy --auto-scaling-group-name my-asg --scaling-adjustment 1 --adjustment-type ChangeInCapacity
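The same scale-out policy can be created with boto3 and wired to a CloudWatch alarm on queue depth, which is what actually triggers it. The group, policy, and queue names ("my-asg", "my-queue") are placeholders carried over from the CLI example; the threshold is an arbitrary illustration.

```python
def queue_depth_alarm(queue_name, policy_arn, threshold=100):
    """Build put_metric_alarm kwargs that fire the scaling policy when the
    queue backs up beyond `threshold` visible messages."""
    return {
        "AlarmName": f"{queue_name}-backlog",
        "Namespace": "AWS/SQS",
        "MetricName": "ApproximateNumberOfMessagesVisible",
        "Dimensions": [{"Name": "QueueName", "Value": queue_name}],
        "Statistic": "Sum",
        "Period": 300,
        "EvaluationPeriods": 1,
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [policy_arn],
    }

def create_scale_out(asg_name="my-asg", queue_name="my-queue"):
    """Create the policy, then an alarm that invokes it (needs AWS creds)."""
    import boto3  # deferred: only needed when actually provisioning
    policy = boto3.client("autoscaling").put_scaling_policy(
        AutoScalingGroupName=asg_name,
        PolicyName="my-sqs-scaleout-policy",
        ScalingAdjustment=1,
        AdjustmentType="ChangeInCapacity",
    )
    boto3.client("cloudwatch").put_metric_alarm(
        **queue_depth_alarm(queue_name, policy["PolicyARN"]))
```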
A JMS-style programming model is also available for SQS. Amazon distributes the Amazon SQS Java Messaging Library, which lets SQS act as a JMS messaging provider. However, it is only available if you are programming in Java, which leaves the C# and NodeJS developers out in the cold.
SNS (Simple Notification Service)
SNS is a way to send a message to a topic and have it routed to a number of notification mechanisms. The message can be routed simultaneously to one or more destinations, including:
- HTTP REST endpoints
- Email and SMS
- AWS Lambda functions
- SQS queues
Unlike SQS, SNS does not have a dead-letter queue where it routes undeliverable messages. SNS is basically a fire-and-forget mechanism.
SNS messages are pushed to the destinations. The destination consumer does not have to worry about polling for messages.
In the SBS world, the RUN Event Dispatcher (RED) has some of the same functionality as SNS.
You will notice that one of the delivery mechanisms that SNS supports is to push a message into an SQS queue. On the other side of the queue could be an application that can read the message and take some action.
You can even push the single message to multiple SQS queues in order to execute some tasks in parallel.
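This fan-out pattern is a short sketch in boto3. The topic ARN and queue names are invented for illustration; the ARN helper just reflects the fixed format SQS queue ARNs follow, which the subscription call needs.

```python
def queue_arn(region, account_id, queue_name):
    """SQS queue ARNs follow a fixed format: arn:aws:sqs:region:account:name."""
    return f"arn:aws:sqs:{region}:{account_id}:{queue_name}"

def fan_out(topic_arn, queue_arns, message):
    """Subscribe each queue to the topic, then publish once to reach all."""
    import boto3  # deferred: only needed when actually publishing
    sns = boto3.client("sns")
    for arn in queue_arns:
        sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=arn)
    # A single publish is now delivered to every subscribed queue, so the
    # consumers behind those queues can work on the event in parallel.
    sns.publish(TopicArn=topic_arn, Message=message)
```

Note that in practice each queue also needs an access policy that permits the SNS topic to send messages to it.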
Kinesis
|IMPORTANT NOTE – Kinesis Streams is not available in the AWS Free Tier|
Kinesis is the preferred hosted streaming platform for AWS. It differs from SQS and SNS in that Kinesis feels comfortable ingesting continuous streams of data, such as a stream of real-time stock quotes or a stream of signals from millions of IoT devices.
A Kinesis stream is subdivided into shards. Each shard can process a stream of data in isolation from the other shards, which provides a degree of load-balancing. Each piece of data can contain a “partition key”, which directs that piece of data to be processed by a specific shard.
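Under the hood, Kinesis takes the MD5 hash of the partition key as a 128-bit integer and routes the record to the shard whose hash-key range contains it. A small model of that routing (the shard IDs and ranges below are illustrative):

```python
import hashlib

def key_hash(partition_key: str) -> int:
    """128-bit MD5 hash of the partition key, as Kinesis computes it."""
    return int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)

def shard_for(partition_key, shard_ranges):
    """shard_ranges: (shard_id, start, end) tuples covering 0 .. 2**128 - 1."""
    h = key_hash(partition_key)
    for shard_id, start, end in shard_ranges:
        if start <= h <= end:
            return shard_id
    raise ValueError("hash-key ranges do not cover this key")

# Two shards splitting the hash space evenly: the same partition key always
# hashes to the same shard, so per-key ordering is preserved within it.
HALF = 2 ** 127
RANGES = [("shard-0", 0, HALF - 1), ("shard-1", HALF, 2 ** 128 - 1)]
```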
Each Kinesis consumer has a shard iterator which is used to read data from the stream. In this sense, Kinesis is similar to Kafka. Since data is persisted in the stream, a consumer can retrieve data from the beginning of the stream. This supports the concept of “late joiners”, in which a new subscriber can retrieve all of the events that they might have missed. A side benefit of this is that it is easy to replay data for various testing scenarios.
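A sketch of a late joiner reading a shard from the beginning of the retained data with boto3. The stream name is a placeholder; the TRIM_HORIZON iterator type starts at the oldest retained record, while LATEST would only tail new data.

```python
def iterator_type(replay: bool) -> str:
    """TRIM_HORIZON replays history for late joiners; LATEST tails the stream."""
    return "TRIM_HORIZON" if replay else "LATEST"

def read_shard(stream_name, shard_id, replay=True, limit=100):
    """Walk a shard via its iterator, stopping at the tip (needs AWS creds)."""
    import boto3  # deferred: only needed when actually reading
    kinesis = boto3.client("kinesis")
    iterator = kinesis.get_shard_iterator(
        StreamName=stream_name,
        ShardId=shard_id,
        ShardIteratorType=iterator_type(replay),
    )["ShardIterator"]
    records = []
    while iterator and len(records) < limit:
        resp = kinesis.get_records(ShardIterator=iterator, Limit=limit)
        if not resp["Records"]:
            break  # caught up with the tip of the stream
        records.extend(resp["Records"])
        iterator = resp.get("NextShardIterator")
    return records
```

Replaying a test scenario is then just a matter of calling the same function again with `replay=True`.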
Consumers run on EC2 instances. You can auto-scale consumers by hooking Kinesis up to CloudWatch and adding additional EC2 instances dynamically when needed.
AWS has a service that works with Kinesis that allows you to perform queries on the data in the stream as that data passes through the stream. This service is called Kinesis Analytics, and it gives Kinesis the kind of Complex Event Processing (CEP) capabilities that systems like Streambase and Esper have.
Kinesis Analytics uses a dialect of SQL to perform processing. You can use this capability to detect certain conditions and generate events, or you can use it to enrich or transform the data.
The streaming SQL code below detects a condition where the change in a stock price is over 1%. If this condition is detected, an event is generated and put into another stream. A consumer on the other side of this new event stream can send a message to a user or trigger some sort of algorithmic trade.
CREATE OR REPLACE STREAM "INTERESTING_STOCK_EVENT_STREAM"
    (ticker_symbol VARCHAR(4), sector VARCHAR(12), change DOUBLE, price DOUBLE);

CREATE OR REPLACE PUMP "STREAM_PUMP" AS
    INSERT INTO "INTERESTING_STOCK_EVENT_STREAM"
    SELECT STREAM ticker_symbol, sector, change, price
    FROM "SOURCE_SQL_STREAM_001" -- the default name of the in-application input stream
    WHERE (ABS(change / (price - change)) * 100) > 1;
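To make the WHERE clause concrete, here is the same predicate restated in Python: `change` is the absolute price move, so `price - change` is the previous price, and the expression is the percent change relative to it.

```python
def is_interesting(price: float, change: float) -> bool:
    """True when the price moved more than 1% from its previous value,
    mirroring the streaming SQL predicate above."""
    previous_price = price - change
    percent_change = abs(change / previous_price) * 100
    return percent_change > 1

# A $2.00 move on a stock now at $100 is about a 2.04% change: interesting.
# A $0.50 move on the same stock is about 0.50%: filtered out.
```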
An alternative to the AWS-hosted messaging systems mentioned above is to provision your own EC2 servers and install and run Apache Kafka on those servers.
I will not talk about the technology around Kafka here, as this has been discussed elsewhere. But I will talk about the differences between using Kafka on AWS and using one of the native AWS platforms.
Several articles point to Kafka being more performant than Kinesis for very high-throughput use cases. But if your application does not have the amount of streaming data that would compel you to use Kafka, then Kinesis is a simpler platform to use.
Some Pros for Kinesis
- Managed service
  - Removes operational headaches and costs
  - Tuning Kafka can be a challenge, and Kafka engineers are difficult to find
- Costs can be lower than Kafka for a similar environment
  - With Kafka, you need hardware for the broker instances, for ZooKeeper, for replication, and for storage of retained messages
- Fits seamlessly into the rest of the AWS stack
  - Consolidated monitoring via AWS CloudWatch
- Elasticity – we can bring up Kinesis capacity when we need it
  - Scales out transparently at times of heavy usage
- Kinesis Analytics add-on
- We avoid the fragile integration between ZooKeeper and Kafka
Some Pros for Kafka
- No vendor lock-in
- Wide support for C# clients
  - Most Kinesis APIs are Java-based
- We do not have to pay for Kafka itself, only for the EC2 servers that host it
- KSQL gives Kafka some of the same capabilities as Kinesis Analytics
- Since Kafka is open source and an Apache project, we have visibility into Kafka (bug fixes, roadmap)
- Supports wildcard subscriptions
- Integrates with other Apache projects such as Spark, Storm, and Samza
As a happy medium between performance and a fully managed service, we can consider a completely managed Kafka service that runs on AWS. This service is offered by Confluent, a company specializing in Kafka that was founded by the original Kafka developers at LinkedIn.
| | SQS | SNS | Kinesis | Kafka |
|---|---|---|---|---|
| Native AWS service | Y | Y | Y | N |
| Chargeback model | Per-request, plus data egress | Pushes and deliveries; different pricing for different delivery methods | Shards per hour | N/A (you pay for the EC2 servers) |
| Push vs. pull | Pull | Push | Pull | Pull |
| Max message size | 256 KB | 256 KB | 1 MB | Configurable; defaults to 1 MB |
| Max message throughput | Unlimited for standard queues; 300 TPS for FIFO | — | 1,000 PUT records/sec per shard; 1 MB/sec input and 2 MB/sec output per shard | Limited by hardware |
| Message delivered to multiple consumers? | N | Y | Y | Y |
| Message order preserved? | Only in FIFO queues | N | Y (within a shard) | Y (within a partition) |
| Durable messages? | Y (messages are stored on multiple servers) | N (fire-and-forget) | Y | Y (replicated) |
| Replay of messages? | N | N | Y | Y |
| Data retention | 60 seconds to 14 days (if not deleted) | N/A | 1-7 days | Configurable |
| Max queue depth | 120,000 in-flight messages; 20,000 for FIFO queues | N/A | N/A | Limited by disk space |
| Scaling | Transparently auto-scale through CloudWatch | Transparently auto-scale through CloudWatch | You can increase the number of shards, but you need to pre-provision them | Add brokers and partitions manually |
| Monitoring through CloudWatch? | Y | Y | Y | N (JMX/third-party tools) |
| Language support for APIs | C#, C++, Java, NodeJS, Python, Ruby, PHP, Go | C#, C++, Java, NodeJS, Python, Ruby, PHP, Go | C#, C++, Java, NodeJS, Python, Ruby, PHP, Go | C/C++, Python, Go, Erlang, .NET, Clojure, Ruby, NodeJS, Perl, PHP, Rust, Java, Scala, Swift |