Writing a Redis Module on the Mac

You can extend the functionality of your Redis 4.x installation by writing custom modules in C using the Redis Module SDK. Since Redis 4.x is only available on Unix-based systems, you need to write your Redis modules on a Unix-like system such as macOS, using a compiler like gcc. (Redis for Windows is only supported up until Redis 3.2.) Your Redis module must be a Unix shared library. This shared library can be loaded into Redis when Redis is first started, or it can be loaded dynamically into an already-running instance of Redis.
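For example, to load a compiled module into an already-running Redis 4 server, you can use the MODULE LOAD command from redis-cli (the path below is an assumption):

127.0.0.1:6379> MODULE LOAD /path/to/module.so
OK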

I have attempted to document the process of writing a Redis module using gcc and using Visual Studio Code as my development environment. The example shown below comes right out of the Redis Module SDK.

Note that the Redis Module SDK is still under development. For example, it does not yet have an API that supports SET-based functions.

Prerequisites

Make sure that the GNU gcc compiler is installed on the Mac. Open up a terminal and just enter the command gcc with no arguments. If the compiler is installed, it will complain that there are no input files; if it is not, macOS will offer to install the command-line developer tools.

gcc

Open Microsoft’s Visual Studio Code. It’s helpful to install the official Microsoft C/C++ extension. 

Download the Source

Clone the Git repo for the Redis Module SDK. The main Github site is https://github.com/RedisLabs/RedisModulesSDK. In a Terminal window, navigate to the directory where you want the Git repo to be downloaded. Then enter the command

git clone https://github.com/RedisLabs/RedisModulesSDK.git

Modify the Source

After the source code is downloaded, edit the file rmutil/sds.h and change line 82 to

#define SDS_HDR_VAR(T,s) struct sdshdr##T *sh = (struct sdshdr##T*)((s)-(sizeof(struct sdshdr##T)));

(Change the “void*” to “struct sdshdr##T*” in order to silence the Mac’s gcc compiler)

Build the Source and the Example Module

In the Terminal, go to the root directory of the Redis Module SDK, and just enter the command

make

This will build the static library (librmutil.a) that you need to link your custom modules with. It also builds the example that comes with the Redis Module SDK, producing the shared library (module.so) that is the custom module that you will load into Redis.
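If you later compile and link a module of your own outside of the SDK's makefile, the commands look roughly like this sketch (the paths and file names are assumptions; on macOS, the unresolved RedisModule_* symbols are left to be bound when Redis loads the module, hence the -undefined dynamic_lookup flag):

gcc -I/path/to/RedisModulesSDK -c -o mymodule.o mymodule.c
gcc -bundle -undefined dynamic_lookup -o mymodule.so mymodule.o /path/to/RedisModulesSDK/rmutil/librmutil.a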

Using Visual Studio Code

Run Visual Studio Code. Open the main directory that the Module SDK is in. We need to create JSON-based configuration files that tell Visual Studio Code how to build the application and how to run/debug the application. These configuration files go into the .vscode subdirectory under your project.

The tasks.json file will tell Visual Studio Code how to run the make command.

To run the example, you need to launch the command

/usr/local/bin/redis-4.0.6/bin/redis-server --loadmodule ./module.so

launch.json

{
 "version": "0.2.0",
 "configurations": [
     {
         "name": "(lldb) Launch",
         "type": "cppdbg",
         "request": "launch",
         "program": "/usr/local/bin/redis-4.0.6/bin/redis-server",
         "args": ["--loadmodule", "./module.so"],
         "stopAtEntry": false,
         "cwd": "${workspaceFolder}",
         "environment": [],
         "externalConsole": true,
         "MIMode": "lldb"
     }
 ]
}

tasks.json

{
 "version": "0.1.0",
 "command": "make",
 "isShellCommand": true,
 "tasks": [
     {
         "taskName": "Makefile",

         // Make this the default build command.
         "isBuildCommand": true,

         // Always show the output window.
         "showOutput": "always",

         // Pass the "all" target to make.
         "args": ["all"],

         // Use the standard gcc/clang compilation problem matcher.
         "problemMatcher": {
             "owner": "cpp",
             "fileLocation": ["relative", "${workspaceRoot}"],
             "pattern": {
                 "regexp": "^(.*):(\\d+):(\\d+):\\s+(warning|error):\\s+(.*)$",
                 "file": 1,
                 "line": 2,
                 "column": 3,
                 "severity": 4,
                 "message": 5
             }
         }
     }
 ]
}

Running the Module

In Visual Studio Code, run the debugger. This will launch a copy of Redis with your new module loaded. You can put breakpoints into your module’s code and watch Redis execute the module.

While the debugger is running a copy of Redis, open up a Terminal and run the redis-cli program. In redis-cli, enter the commands:

127.0.0.1:9979> EXAMPLE.HGETSET foo bar baz
(nil)
127.0.0.1:9979> EXAMPLE.HGETSET foo bar vaz
"baz"
127.0.0.1:9979> EXAMPLE.PARSE SUM 5 2
(integer) 7
127.0.0.1:9979> EXAMPLE.PARSE PROD 5 2
(integer) 10
127.0.0.1:9979> EXAMPLE.TEST
PASS

Creating a Slackbot on AWS using Golang – Part 3 – AWS Lambda Functions

Marc Adler

CTO as a Service

In the previous article of the series, we created a Quote Alerter for Slack and AWS. The Quote Alerter will notify a user on Slack if the price of a stock went above or below a certain target price. The Golang-based code runs on AWS and uses a Postgres database on RDS in order to store all of the alert subscriptions and the list of current stock prices.

In this article, we will migrate the quote-checking logic to an AWS Lambda function.

(There is an article on the CTO as a Service blog that discusses using Lambda with Visual Studio Code and C#/.NET. That article has some good intro material on Lambda functions on AWS, so you are encouraged to glance over it if you have any basic questions about Lambda.)

Why migrate the Slack Stock Bot to Lambda functions? Mainly for illustrative purposes for this series of articles. In reality, there might be some relatively time-consuming business logic that you might want to take out of the main code path of an application and run asynchronously with a Lambda function. In the domain of equities and quotes, you might want to have a separate serverless function that will compute some Greeks and either store those values in a database, or enrich our Slack notification messages with those Greek values (like the delta and gamma).

In the migration path that we are going to undertake, we will just start off with a simple Go-based “Hello World” lambda, and then slowly drag in the parts of the Slack Stock Bot that we need in order to implement the alert mechanism.

The source code to this project can be found here:

https://github.com/magmasystems/SlackStockSlashCommand

https://github.com/magmasystems/SlackStockSlashCommand-Lambda

Overview of the Migration

  1. Create a new Lambda Function using the AWS Lambda dashboard
  2. Create a new CloudWatch trigger that will cause the new Lambda Function to run
  3. Create a very simple Go-based Lambda using the Go/AWS SDK, and test it out
  4. Change the existing Slack Stock Bot code so that we can import packages from it easily
  5. Change the code of the new Lambda Function so that it replaces the ticker-based price breach checking
  6. Deploy the new Lambda function
  7. Test the function by manually firing the CloudWatch event

Creating a New Lambda Function on AWS

The first step in the process is to create a new Lambda Function by using the AWS Lambda dashboard.

After clicking on the Create Function button, you will be presented with a form that you need to fill out with the information about your new function.

We call our new function priceBreachChecker. We make sure that the function uses the Go runtime, and we will use an existing execution role that we have set up. The execution role determines which AWS services the lambda function can access.

After creating the lambda function, we need to specify what kind of events will trigger the execution of the function.

Creating the CloudWatch Trigger

If you recall, the current Slack Stock Bot code creates an application-based ticker that will check for price breaches at certain intervals. This uses the Ticker type from Go's time package. At every tick, the code calls the quote service to retrieve the current prices for all of the stock symbols that have alerts on them. It then runs some SQL that asks the Postgres database which current prices have breached the price targets that were set up.

In order to simulate this ticker, we will use AWS CloudWatch events. You can set up CloudWatch to call a Lambda Function at regular intervals or on a cron-based schedule (e.g., every weekday at 12:00 PM and at 5:00 PM).

Back in the Lambda dashboard, add a new trigger. Choose CloudWatch Events from the list of triggers on the left side of the page.

Now we need to set up the interval that the CloudWatch trigger will fire.

Click on the Add button. For the Rule Type, choose Schedule Expression, and use rate(60 minutes) as the expression. This will tell CloudWatch to fire the event every hour.
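CloudWatch accepts either rate() or cron() schedule expressions; a sketch of both forms (the cron example is illustrative):

rate(60 minutes)              fires once an hour
cron(0 12,17 ? * MON-FRI *)   fires every weekday at 12:00 PM and 5:00 PM (UTC)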

Click on the Add button. You will get confirmation that the new trigger has been added.

Before we look at the CloudWatch dashboard, notice that there is a way that you can upload a ZIP file of your Go-based Lambda code. We will not be using this. Instead, we will be using the AWS CLI from within Visual Studio Code to deploy our code.

Changing the CloudWatch Trigger

Let’s look at the CloudWatch dashboard in order to verify that we have a trigger. On the left side of the page, find the Events / Rules menu item and click on it.

If you click on the name of the rule, you can see some further details.

Under the Actions button, choose Edit. You will see that there is a rule that controls the interval that the event will be fired. If you want to change the interval at which the Price Breach Checker will run, then adjust this interval.

You can also have this rule trigger additional Lambda functions. Let’s say that we have a separate price-fetching Lambda function for every different quote service we support. We can have this CloudWatch rule trigger each of the separate Lambda functions. If you want to do this, choose a new Lambda function, and click on the Add Target button.

Creating a Simple Go-based Lambda

The main docs on Lambda and Go can be found here:

https://docs.aws.amazon.com/lambda/latest/dg/go-programming-model.html

We need to download and install the AWS Lambda SDK for Go

go get github.com/aws/aws-lambda-go/lambda

Now let’s get busy with Visual Studio Code. We are going to create a new folder for our new Lambda function.

Creating Tasks for Visual Studio Code

We can create a list of tasks that Visual Studio Code will run to do the build and deploy of the Lambda function. In Visual Studio Code, go to Terminal / Configure Tasks, and edit the tasks.json file.

My tasks.json file looks like this:

{
   // https://code.visualstudio.com/docs/editor/tasks-appendix
   "version": "2.0.0",
   "tasks": [
       {
           "label": "Build",
           "type": "shell",
           "command": "go",
           "args": [ "build", "-o", "priceBreachChecker"],
           "options": {
               "env": {
                   "GOOS": "linux",
                   "GOARCH": "amd64"
               }
           },
           "group": {
               "kind": "build",
               "isDefault": true
           }
       },
       {
           "label": "Zip",
           "command": "zip",
           "args": [ "priceBreachChecker.zip", "priceBreachChecker", "appSettings.json"],
           "dependsOn":[ "Build" ]
       },
       {
           "label": "CreateAndDeploy",
           "command": "aws",
           "type": "shell",
           "args": [
               "lambda", "create-function",
               "--function-name", "priceBreachChecker",
               "--region",  "us-east-2",
               "--profile", "default",
               "--role", "arn:aws:iam::901643335044:role/service-role/woof_garden_canary",
               "--handler", "priceBreachChecker",
               "--runtime", "go1.x",
               "--zip-file", "fileb://./priceBreachChecker.zip"
           ],
           "options": {
           },
           "problemMatcher": [],
           "dependsOn":[ "Zip" ]
       },
       {
           "label": "UpdateAndDeploy",
           "command": "aws",
           "type": "shell",
           "args": [
               "lambda", "update-function-code",
               "--function-name", "priceBreachChecker",
               "--region",  "us-east-2",
               "--profile", "default",
               "--zip-file", "fileb://./priceBreachChecker.zip"
           ],
           "options": {
           },
           "problemMatcher": [],
           "dependsOn":[ "Zip" ]
       }
   ]
}

There are four tasks here.

One is the Build task, which compiles the Go code. Notice that we use two special environment variables that tell the Golang compiler about the platform that the code should be generated for.

"GOOS": "linux",
"GOARCH": "amd64"

The AWS servers that run your Lambda function use Amazon's own version of Linux and expect Go code that is compiled for the amd64 architecture.

The next task will Zip up the compiled executable and the appSettings.json configuration file. AWS requires a ZIP file that contains the assets for your Lambda function. Notice that the Zip task has a dependency on the Build task, so if you run the Zip task, it will automatically do the build as well.

(Note: Instead of using a separate application settings file, you can set environment variables in the Lambda function dashboard, and then read those environment variables.)
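Reading such a variable from Go is a one-liner; a sketch (SLACK_WEBHOOK is a hypothetical variable name that you would define in the Lambda console):

import "os"

// Read a setting defined in the Lambda console instead of appSettings.json
webhook := os.Getenv("SLACK_WEBHOOK")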

The CreateAndDeploy task will not be used here, because we already created the Lambda function using the AWS Lambda dashboard.

The final task is UpdateAndDeploy. This is used to update AWS with new versions of the code. It will upload the ZIP file that was created by the Zip task. We made the UpdateAndDeploy task dependent on the Zip task so that the build, zip, and upload processes can be done with a single command.

Writing a Sample Lambda Function in Go

We will create a simple Go package which will just echo the arguments to the log.

Here is the code:

priceBreachChecker.go

package main

import (
    "context"
    "fmt"
    "log"

    ev "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
    "github.com/aws/aws-lambda-go/lambdacontext"
)

func main() {
    lambda.Start(priceBreachChecker)
}

func priceBreachChecker(ctx context.Context, event ev.CloudWatchEvent) (int, error) {
    lambdaContext, _ := lambdacontext.FromContext(ctx)
    log.Println(fmt.Sprintf("In priceBreachChecker handler: context is %+v", lambdaContext))
    log.Println(fmt.Sprintf("In priceBreachChecker handler: event is %+v", event))
    return 0, nil
}

Notice the arguments of the priceBreachChecker function. There are several different function signatures that the entry point can have; the AWS Lambda runtime inspects the signature and marshals the payload of the various triggers into the function's arguments. The CloudWatchEvent struct contains all of the information that the CloudWatch trigger generates.
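For reference, these are some of the handler signatures that lambda.Start() accepts, per the AWS docs (In and Out stand in for your own event and response types):

func handler()
func handler() error
func handler(in In) error
func handler(ctx context.Context, in In) error
func handler(ctx context.Context, in In) (Out, error)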

Testing the Lambda

The first step to testing out this simple Lambda function is to build it, zip it up, and deploy it to AWS. To do this, I ran the Zip and the UpdateAndDeploy tasks from within Visual Studio Code.

I went into the CloudWatch Event Rules and temporarily changed the trigger interval to 30 seconds.

Then I went into the CloudWatch logs and waited until the trigger fired. Here is what the log looked like:

Success!!! The two log messages that the function generated can be seen in the CloudWatch log.

(Don’t forget to change the trigger back to 60 minutes, or else your Lambda function will run every 30 seconds)

Packaging the Slack Stock Bot

Most programming environments support the use of packages. In the world of C#, we use NuGet to import third-party packages. In the Node.js world, people use npm, and in the Java world, most developers use Maven.

When we write our new price-checking Lambda function, we would like to import the code from our existing Slack Stock Bot. We have seen that you can import packages from Github using the go get command.

Since our existing code is already up on Github, let’s import it:

go get github.com/magmasystems/SlackStockSlashCommand

Easy enough, right? But look at the various error messages that go get gives us. These error messages all look like this:

../../go/src/github.com/magmasystems/SlackStockSlashCommand/stockbot/stockbot.go:11:2: 
       local import "../configuration" in non-local package

What does this mean?

In the file stockbot.go, we have a bunch of imports that look like this:

import config "../configuration"

It seems that go get does not allow relative references in the code that it imports. By "relative reference", we mean an import path that is specified relative to the importing package's directory. These references start with a dot, like "../" or "./".

So what do we need to do? We need to find all of the relative references in the import statements in our code and turn them into full paths into our Github repository.

import "github.com/magmasystems/SlackStockSlashCommand/configuration"

You can read more about this issue here.
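One quick way to locate all of the offending relative imports before fixing them is a recursive grep from the root of the repository; a sketch:

grep -rn '"\.\./' --include='*.go' .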

Now that we have fixed all of these references, and we have checked the code back into Github, we can now run the command

go get github.com/magmasystems/SlackStockSlashCommand

Merging the Slash Command Code in with the Lambda

We will import the parts of the Slack Stock Bot package that we need.

When the Lambda function is loaded, the init() function is called. This is a feature of Go. The init function is the place where you can do one-time initialization.

In the init() function, we read the configuration information (we will need the webhook part of the appSettings), we create the Stockbot (which is the interface to the quote services), and we will create the AlertsManager (which does the checking for the price breaches).

When the Lambda function is triggered, we call the function to check for the price breaches, and for every breach, we notify the user through Slack.

We insert a number of logging statements, just so we can trace the running of the code.

package main

import (
    "context"
    "fmt"
    "log"
    "time"

    ev "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
    "github.com/aws/aws-lambda-go/lambdacontext"

    "github.com/magmasystems/SlackStockSlashCommand/alerts"
    config "github.com/magmasystems/SlackStockSlashCommand/configuration"
    "github.com/magmasystems/SlackStockSlashCommand/slackmessaging"
    "github.com/magmasystems/SlackStockSlashCommand/stockbot"
)


var theBot *stockbot.Stockbot
var theAlertManager *alerts.AlertManager
var appSettings *config.AppSettings

func init() {
    // Put any one-time initialization code here
    configMgr := new(config.ConfigManager)
    appSettings = configMgr.Config()

    theBot = stockbot.CreateStockbot()
    // defer theBot.Close()

    // Create the AlertManager
    theAlertManager = alerts.CreateAlertManager(theBot)
    // defer theAlertManager.Dispose()
}

func main() {
    lambda.Start(priceBreachChecker)
}

func priceBreachChecker(ctx context.Context, event ev.CloudWatchEvent) (int, error) {
    lambdaContext, _ := lambdacontext.FromContext(ctx)
    log.Printf("In priceBreachChecker handler: context is %+v\n", lambdaContext)
    log.Printf("In priceBreachChecker handler: event is %+v\n", event)

    checkForPriceBreaches()

    return 0, nil
}

// checkForPriceBreaches - checks for price breaches
func checkForPriceBreaches() {
    fmt.Println("checkForPriceBreaches: Checking for price breaches at " + time.Now().String())

    theAlertManager.CheckForPriceBreaches(theBot, func(notification alerts.PriceBreachNotification) {
        log.Println("The notification to Slack is:")
        log.Println(notification)
        outputText := fmt.Sprintf("%s has gone %s the target price of %3.2f. The current price is %3.2f.\n",
            notification.Symbol, notification.Direction, notification.TargetPrice, notification.CurrentPrice)

        slackmessaging.PostSlackNotification(
                               notification.SlackUserName, notification.Channel, outputText, appSettings)
    })

    fmt.Printf("checkForPriceBreaches: Finished checking for price breaches at %s\n", time.Now().String())
}

That’s all we need to do for the new Lambda function. We are ready to deploy and test the code.

Deploying the New Lambda Function

Run the UpdateAndDeploy task from Visual Studio Code. You will see this output:

{
    "FunctionName": "priceBreachChecker", 
    "LastModified": "2019-06-20T13:14:25.885+0000", 
    "RevisionId": "7278be22-93ab-4bee-8c85-b4fea3a9857e", 
    "MemorySize": 512, 
    "Version": "$LATEST", 
    "Role": "arn:aws:iam::XXXXXXXXXXX:role/service-role/woof_garden_canary", 
    "Timeout": 15, 
    "Runtime": "go1.x", 
    "TracingConfig": {
        "Mode": "PassThrough"
    }, 
    "CodeSha256": "GVzIhBYObNJY4+ENZ78Emr081ApWxJPOS3KAD/AMbA4=", 
    "Description": "", 
    "VpcConfig": {
        "SubnetIds": [], 
        "VpcId": "", 
        "SecurityGroupIds": []
    }, 
    "CodeSize": 4848883, 
    "FunctionArn": "arn:aws:lambda:us-east-2:XXXXXXXXXX:function:priceBreachChecker", 
    "Handler": "priceBreachChecker"
}

This confirms that the new version of the code has been uploaded.

Testing the Lambda Function

In the Lambda Console, create a new test event.

Since our priceBreachChecker lambda function reacts to a CloudWatch trigger, we choose an Event Template that mimics a CloudWatch event.

After you click on the Create button to create the event, go back into the Lambda console and click on the Save button.

Now that the test event has been created and saved, click on the Test button in order to manually fire a CloudWatch event. You should see a Slack notification generated in the log.

Success!!! We successfully created a Lambda function that does the alerting on price breaches. And the notifications show up in Slack too.

All of the source code to this article can be found here:

https://github.com/magmasystems/SlackStockSlashCommand-Lambda

As always, comments are welcome.

Possible Enhancements

Currently, the Lambda function runs and just notifies the Slack user when a price breach occurs. We can enhance the code to compute other values, and output those values to other AWS services. In the last article, we talked about the computation of Greeks. We can send those Greek values to an SNS topic, we can store them in DynamoDb, or we can feed them into a Kinesis stream. We can do this by calling other parts of the AWS-Go SDK from within the Lambda function.
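As a sketch of that idea, publishing a computed value to an SNS topic from inside the Lambda function could look like this, using the aws-sdk-go package (the topic ARN and message are hypothetical):

import (
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/sns"
)

// publishToSNS - sends one message to a (hypothetical) SNS topic
func publishToSNS(message string) error {
    sess := session.Must(session.NewSession())
    client := sns.New(sess)
    _, err := client.Publish(&sns.PublishInput{
        TopicArn: aws.String("arn:aws:sns:us-east-2:XXXXXXXXXXX:greeks"),
        Message:  aws.String(message),
    })
    return err
}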

Once you have a Lambda function running inside of AWS, the possibilities are many.

About Me

Marc Adler is the founder of CTO as a Service, a consultancy that provides senior-level technical services to companies who are in need of a CTO or Chief Architect on a “pay for what you use” basis. He was formerly the Chief Architect of companies like Citigroup, MetLife, ADP, and Quantifi. He likes to get himself in trouble with his CIOs by insisting on coding.

Creating a Slackbot on AWS using Golang – Part 2 – Price-based Alerting

Introduction

In the previous article, I talked about how to create a Slack Slash Command that would return the current price of a stock. So, you could enter the command /quote MSFT into a Slack message field and it would return the current price of Microsoft stock.

The Golang-based server was first run locally and then migrated to AWS using Elastic Beanstalk.

The article ended with a list of features that I would like to eventually implement in my little Go/AWS/Slack application. This article, and subsequent articles, will focus on developing some of these features.

For this article, I wanted to implement an alerting feature in the Stockbot. With this feature, someone could enter a target price of a stock and be alerted when the current price of the stock went above or fell below the target. Maybe AMZN stock fell below $1000 a share and you want to rush to your financial advisor and buy a share or two?

This new feature requires creating a database which will be used to hold both the alerting subscriptions and the current prices of all of the stocks that all users are interested in. This will let us introduce how to set up a database in AWS and how to talk to that database from a Go application.

A New Branch

Let’s go to our Git repository for a second. We would like to create a feature branch for the new alerting feature. So let’s create a new branch on our local machine.

You can create a new branch through the command line

git checkout -b alerting

or the branch can be created from inside Visual Studio Code.

Business Requirements – Designing the New Slash Command

The requirements of the new command are simple.

A user will tell the Stockbot that they want to be notified asynchronously through Slack whenever the current price of a stock goes above or below a certain price target.

The Stockbot will poll the quote service at regularly-scheduled intervals and will retrieve the current prices of all of the stocks that users want to be alerted on. Whenever a price breaches the alerting price, the Stockbot will send a message to the user.

By default, the user will be notified in Slack by a Direct Message (DM). The user can also choose to be notified through a specific channel. Usually, that channel is a private channel that the user has set up, just for price alerts, but it can also be a public channel.

As far as additional user interactions go, we would also like a way to list all of the alerts that a user has, a way to delete a specific alert, and a way to delete all alerts.

When the user creates an alert, we want to make sure that the symbol is a valid stock. If not, an error should be returned. If the user already has an alert for this symbol, the alert will be updated with the new price target (and possibly with the new direction).

Given these requirements, we can design the new slash command.

/quote-alert [symbol price [below]] [symbol delete] [deleteall] [#channel]

Examples:

/quote-alert - lists all of the alerts you have
/quote-alert HELP - prints a help message
/quote-alert MSFT 130 - sends an alert when Microsoft stock reaches $130
/quote-alert MSFT 130 #myalerts - sends an alert to the #myalerts channel when MSFT stock reaches $130
/quote-alert MSFT 130 BELOW - sends an alert when Microsoft stock goes below $130
/quote-alert MSFT delete - removes the existing alert on MSFT stock that you have subscribed to
/quote-alert deleteall - deletes all of the alerts that you have

The Alerts Database

Given these requirements, we can design the schema for a database that will hold the subscriptions. The database can also hold current prices.

Amazon’s RDS service gives the developer a choice of several different databases to create. For this exercise, let’s choose Postgres since it is one of the databases available on the RDS Free Tier.

Every alert should have at least the following properties:

  • A unique id
  • The id of the Slack user that created the alert
  • The symbol of the stock that the user wants to monitor
  • The target price of the stock
  • The “direction” of the check (above or below the price)
  • The Slack channel that the user wants to be notified in
    • If the channel is empty, then the user should be sent a direct message through Slack
  • An indication that tells us whether this alert has been triggered
    • In case the sending of the alert is slow, we don’t want alerts to pile up

We also would like a simple table that holds the current price for each symbol that has an alert on it.

Let’s look at the SQL that will be used to create the database. Since we will be creating a Postgres database, the SQL below has the Postgres dialect.

create type slackstockbot.direction as enum ('ABOVE', 'BELOW');

alter type slackstockbot.direction owner to magmasystems;

create table slackstockbot.alertsubscription
(
  id serial not null
     constraint alertsubscription_pk
        primary key,
  slackuser varchar(128) not null,
  symbol varchar(16) not null,
  targetprice double precision not null,
  wasnotified boolean default false,
  direction slackstockbot.direction default 'ABOVE'::slackstockbot.direction,
  channel varchar(128) default ''::character varying not null
);

alter table slackstockbot.alertsubscription owner to magmasystems;

create unique index alertsubscription_id_uindex
  on slackstockbot.alertsubscription (id);

create table slackstockbot.stockprice
(
  symbol varchar(32) not null,
  price double precision not null,
  time timestamp
);

alter table slackstockbot.stockprice owner to magmasystems;

create index stockprice_symbol_index
  on slackstockbot.stockprice (symbol);

In addition to the two tables shown above, we may want to think about having a table with administrative info, such as the time that the last polling was done, the frequency of the polling, and the name of the quote service to pull from. We will leave this for a future exercise.

Finding Price Breaches using SQL

We can join the AlertSubscriptions with the current prices and find all rows that have prices that are either above or below the price target.

SELECT a.id, a.slackuser, a.channel, a.symbol, a.targetprice, a.direction, p.price
  FROM slackstockbot.alertsubscription a, slackstockbot.stockprice p
  WHERE a.wasnotified = false AND a.symbol = p.symbol AND p.price > 0 AND
     ( (a.direction = 'ABOVE' AND p.price >= a.targetprice) OR 
       (a.direction = 'BELOW' AND p.price <= a.targetprice) )

Creating the Database in AWS

If you recall from the previous article, we created a development environment for the Slack Stock Bot on Elastic Beanstalk.

If you click on the green box, you will see the dashboard for the SlackStockBot environment.

In the side panel, click on Configuration. Then scroll down until you see a panel for the Database. You will notice that it is empty.

After you click on the Modify link, you will see a list of databases that are associated with this environment. Click the Create Database button.

You will be presented with a list of database engines. If you are looking to save money, make sure that you check the box at the bottom which will only present you with options that are eligible for the RDS Free Tier. We will choose Postgres.

Name the database and pick the authentication credentials.

In the Network and Security section, I like to make the database publicly accessible so that I can administer the database from my local machine using tools like DbVisualizer or DataGrip.

After the database is created, I will use something like DataGrip to create the tables using the SQL that was shown above.

In addition to setting up this Postgres database in AWS, I also set up a local version of the database for local testing. If you recall from the first article, we used localtunnel in order to have Slack interact with a local version of the Slack Stock Bot.

Progress So Far

We designed the API for the new /quote-alert command. We also created a database with the two tables that will hold the alert subscriptions and the current prices.

The next stage is to create a new Slash Command in Slack and hook it up to the new version of our server. Then we will write the Golang code which implements the AlertManager.

Adding the New Slash Command to Slack

In the previous article, we saw how to add a new Slash Command to Slack. Let’s do the same thing again. We will create the new /quote-alert slash command.

Once the Slash Command has been created, we need to give it permission to send a message to a user directly and to a specific channel. Click on the OAuth & Permissions link on the side panel.

Then pull down the dropdown under Select Permission Scopes and choose the two permissions.

Click on the Save Changes button.

Now that the permissions have been granted to perform certain actions, we need to set up two Webhooks for the communication. First, enable Webhooks in your command.

In the side panel, click on Incoming Webhooks, and make sure that the webhooks are activated.

Scroll down a bit and create two new webhooks.

At the end of this process, you should have two Webhooks, one for posting to a channel and one for sending a message to a user.

By default, a /quote-alert will send a price alert directly to the user with one of the webhooks. If you enter the command

/quote-alert MSFT 130 #myalerts

then the alert will be sent to the #myalerts channel, using the other webhook.


Modifying the Golang Source

Now that all of the environmental stuff has been set up, we can write some Golang code.

The Source code is located here (alerting branch)
https://github.com/magmasystems/SlackStockSlashCommand

In order to access the Postgres database, we use the pq package. You need to install this package from github.com/lib/pq, and then reference it within the application.

go get github.com/lib/pq
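Once pq is installed, connecting to the database follows the standard database/sql pattern. A minimal sketch, assuming connection values like those in the database section of appSettings.json (the host, user, and password below are placeholders):

import (
    "database/sql"

    _ "github.com/lib/pq" // registers the "postgres" driver with database/sql
)

// openDatabase - opens a connection to the Postgres database
func openDatabase() (*sql.DB, error) {
    connStr := "host=localhost port=5432 dbname=slackstockbot user=magmasystems password=secret sslmode=disable"
    return sql.Open("postgres", connStr)
}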

Major Changes to the Code

There have been several things added to the version of the Slack Stock Bot that was developed in the previous article. We are not going to cover each change in this article. But, at a high level, those changes include:

  • The introduction of environment-specific configuration files (appSettings[.env].json), plus a configuration manager
  • A logging manager
  • A Slack Messaging package that encapsulates all interactions with Slack
  • An AlertManager that encapsulates all of the price breach alerting logic
  • Integration with Postgres (either local or RDS)

Changes to the Configuration File

A new Database section has been added to the appSettings.json file. This contains the standard database connection information that will be used to connect to Postgres. There are two new fields for the webhooks that the alerting mechanism will use to send messages back to Slack. Finally, there is the quoteCheckInterval, which is the number of seconds that elapse between price checks. Bear in mind that the free quote services limit the number of quotes that you can request per day, so you do not want your price checker running too frequently.

{
   "apiKeys": {
       "quandl": "[Your Quandl API Key]",
       "worldtrading": "[Your World Trading Data API Key]",
       "alphavantage": "[Your AlphaVantage API Key]"
   },
   "driver": "alphavantage",
   "slackSecret": [Your Slack App's Secret Key]",
   "webhook": "https://hooks.slack.com/services/[Your Webhook for Channels]",
   "dmwebhook": "https://hooks.slack.com/services/[Your Webhook for direct messaging]",
   "port": 5000,
   "database": {
       "host": "slackstockbot.XXXXXXXX.us-east-2.rds.amazonaws.com",
       "port": 5432,
       "dbname": "slackstockbot",
       "user":  "[Your database user name]",
       "password": "[Your database password]",
       "SSL": true
   },
   "quoteCheckInterval": 600,
   "disablePriceBreachChecking": false
}
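For illustration, a Go struct that this file could be unmarshaled into might look like the sketch below; the actual definitions live in the configuration package of the repository, so the field names here are assumptions based on how the settings are used:

// DatabaseSettings - the database section of appSettings.json
type DatabaseSettings struct {
    Host     string `json:"host"`
    Port     int    `json:"port"`
    DbName   string `json:"dbname"`
    User     string `json:"user"`
    Password string `json:"password"`
    SSL      bool   `json:"SSL"`
}

// AppSettings - the top-level configuration
type AppSettings struct {
    APIKeys                    map[string]string `json:"apiKeys"`
    Driver                     string            `json:"driver"`
    SlackSecret                string            `json:"slackSecret"`
    Webhook                    string            `json:"webhook"`
    DMWebhook                  string            `json:"dmwebhook"`
    Port                       int               `json:"port"`
    Database                   DatabaseSettings  `json:"database"`
    QuoteCheckInterval         int               `json:"quoteCheckInterval"`
    DisablePriceBreachChecking bool              `json:"disablePriceBreachChecking"`
}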

Polling for Price Breaches

In application.go, a Ticker is created using an interval which is set in the appSettings.json configuration file. Every time the ticker elapses, a function is called to check the prices.

// Create a ticker that will continually check for a price breach
if !appSettings.DisablePriceBreachChecking {
    priceBreachCheckingTicker = time.NewTicker(time.Duration(appSettings.QuoteCheckInterval) * time.Second)
    defer priceBreachCheckingTicker.Stop()

    // Every time the ticker elapses, we check for a price breach
    go func() {
        for range priceBreachCheckingTicker.C {
            onPriceBreachTickerElapsed()
        }
    }()
}

The responsibility for the price checks is in the AlertManager. We pass a callback function that the AlertManager calls for every price breach. This callback will create an informative message and will post it to Slack using a webhook.

// onPriceBreachTickerElapsed - This gets called every time the Price Breach Ticker ticks
func onPriceBreachTickerElapsed() {
    theAlertManager.CheckForPriceBreaches(theBot, func(notification alerts.PriceBreachNotification) {
        outputText := fmt.Sprintf("%s has gone %s the target price of %3.2f. The current price is %3.2f.\n",
            notification.Symbol, notification.Direction, notification.TargetPrice, notification.CurrentPrice)
        postSlackNotification(notification, outputText)
    })
}

The check for price breaches works like this:

  • Get a list of all of the stocks that have alerts on them
  • Call the quote service to get the current prices for all of the stocks
  • Save the prices to the database
  • Use SQL to check for price breaches. The SQL code for the check is shown at the start of this article.
  • For each alert that was triggered, set a flag that “logically deletes” the alert so that we do not check again.
    • We can enhance the /quote-alert command so that an alert can be reset
  • Call the passed-in callback function, which is responsible for alerting Slack.

// CheckForPriceBreaches - gets called by the application at periodic intervals to check for price breaches
func (alertManager *AlertManager) CheckForPriceBreaches(stockbot *stockbot.Stockbot, callback func(PriceBreachNotification)) {
    // Get the latest quotes
    prices := alertManager.GetQuotesForAlerts(stockbot)
    if prices == nil {
        return
    }

    // Save the prices to the database
    alertManager.SavePrices(prices)

    // Check for any price breaches
    notifications := alertManager.GetPriceBreaches()

    // Go through all of the price breaches and notify the Slack user
    for _, notification := range notifications {
        // Set the wasNotified field to TRUE on the alert
        alertManager.setWasNotified(notification.SubscriptionID)

        // Do the notification to slack synchronously
        callback(notification)
    }
}

A Word About Architecture and Strategy

By this time, you must be wondering why we used a SQL-based function to detect price breaches, especially if we were ever going to support real-time streaming quotes. After all, making calls to the database is costly in terms of performance, latency, and (in the case of RDS), monetary cost.

Wouldn’t we be much better off using some in-memory collections? For example, we could use a map where the keys are the list of stocks that have active alerts, and each value could be a sorted collection of alerts.

One of the reasons that I chose the database-centric way of doing the comparison is so that I could find a way to introduce a database into this series of articles. I wanted to give the reader exposure to using databases both in Golang and in an Elastic Beanstalk environment.

If we wanted to be architecturally flexible, we could introduce a Strategy Pattern. We could have a strategy for database-based comparisons and a different strategy for memory-based comparisons.

We can implement the Strategy Pattern by using a factory to create the quote comparator, and we can assign the comparator to a field within the AlertManager struct.

type QuoteComparator interface {
    findPriceBreaches(alerts AlertMap, currentQuotes []QuoteInfo)
}

type AlertManager struct {
    . . .
    quoteComparator QuoteComparator
    . . .
}

func createAlertManager() {
    . . . 
    alertManager.quoteComparator = quoteComparatorFactory("memory")
    . . . 
}

func quoteComparatorFactory(strategy string) (comparator QuoteComparator, errs error) {
    switch strategy {
    case "database":
        return &amp;DatabaseQuoteComparator{}, nil
    case "memory":
        return &amp;MemoryQuoteComparator{}, nil
    default:
        return nil, errors.New("the strategy cannot be found")
    }
}

Deploying to Elastic Beanstalk

We need to change the Buildfile so that it fetches the pq library for Postgres, and so that the app’s configuration file is located in the same directory as the binary. The new Buildfile is:

go get github.com/nlopes/slack
go get github.com/lib/pq
go build -o bin/application application.go
cp ./appSettings.json bin

An important thing to note is that, by default, Go applications on Elastic Beanstalk use Port 5000. If you change the port from within the configuration file, then you should also tell Slack that the Stock Bot command uses the new port.

Another thing that we might want to consider is, instead of using RDS, using Docker and deploying our own Postgres database. Elastic Beanstalk fully supports setting up Go applications using Docker. We can leave Docker to a future article.

Testing the new Slash Command

Let’s put in an alert for Johnson and Johnson’s stock.

/quote-alert JNJ 140.0

We see that the Slack Stock Bot works.

That message looks a lot nicer than just printing out plain old text. Slack allows you to format output in different ways.

attachment := slack.Attachment{
    Color:    "good",
    Fallback: "You successfully posted by Incoming Webhook URL!",
    Text: outputText,
    //Footer:        "slack api",
    //FooterIcon:    "https://platform.slack-edge.com/img/default_application_icon.png",
    Ts: json.Number(strconv.FormatInt(time.Now().Unix(), 10)),
}

msg := slack.WebhookMessage{
    Attachments: []slack.Attachment{attachment},
    Username:    slackUserName,
    Channel:     slackChannel,
}
    
slack.PostWebhook(getWebhook(slackChannel, appSettings), &msg)

Slack also supports something called Blocks, which allow more complex formatting and options for the user to interact with your message. Conceivably, we can use Blocks to present a “Buy” or “Sell” button, which would generate an order to the user’s financial advisor.
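As a sketch of that idea, here is roughly what a Buy/Sell prompt could look like using the Block Kit support in recent versions of the nlopes/slack package (the action IDs, block ID, and button values are hypothetical):

buyButton := slack.NewButtonBlockElement("buy", notification.Symbol,
    slack.NewTextBlockObject("plain_text", "Buy", false, false))
sellButton := slack.NewButtonBlockElement("sell", notification.Symbol,
    slack.NewTextBlockObject("plain_text", "Sell", false, false))
blockMsg := slack.NewBlockMessage(slack.NewActionBlock("order-actions", buyButton, sellButton))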

Merging the Alerting Branch Into Master

Now that we are done implementing the alerting feature, we can merge the alerting branch back into the master.

Go to the Github repository.

Click on the green button that is labeled Compare & pull request.

Type in some comments and then click on Create pull request.

You will see that there are no merge conflicts. Click on the Merge pull request button.

Confirm the merge

You will get the confirmation that everything was merged successfully. Now you can pull the updated master branch to your local machine.

Summary

In this article, we enhanced the original Slack Stock Bot code so that the user could subscribe to alerts. The alerts were stored in an AWS RDS database, and we used a SQL-based strategy to detect any price breaches. We came up with a new Slash Command called /quote-alert which allows a user to create or delete a price breach alert. Finally, we deployed the new code to Elastic Beanstalk and successfully tested it out.

In the next article, we will make a few more enhancements. One of the things that I am thinking of is making the price comparison into an AWS Lambda function. We can also use the new Slack Messaging to implement a simple workflow. We should start putting in unit tests, and we can start taking advantage of AWS CodeBuild and CodeDeploy.

Stay tuned for the next article.

Appendix

Trouble Connecting to the Postgres Database from Elastic Beanstalk

If you find that you are having problems connecting the Slack Stock Bot to RDS, go into the EC2 instance that hosts the database and change the Incoming Connection rules.

  1. Go into the RDS dashboard and find your database. Then click on the name of your database.
  2. In the dashboard for your database, go to the Security Group Rules section, and find the Security Group that is associated with Inbound connections. Click on that.
  3. In the Security Group, look at the Inbound tab. Make sure that port 5432 (Postgres) is open to your application.

Creating a Golang-based Slackbot on AWS

Marc Adler

CTO as a Service

Introduction

Since leaving the corporate workforce and starting CTO as a Service, I have been slowly learning some things that have been on my TODO list for a while. Not having full-time management duties frees up your time, and every day, I find that there is so much more to learn. So, as I wind my way down the TODO list, I figure that I would start documenting some of my learnings so that it might be of use to others.

Even though I have been a Chief Architect and CTO for the last 15 years, I have still kept myself very technical, and I still code for pleasure, and occasionally, for my CTO as a Service clients. I am pretty good at C#, Java, C++, and NodeJS/TypeScript. I can also stumble around in Python and Scala.

One of the languages that I have been meaning to teach myself is Golang. I kept hearing that Go is a great language for writing distributed systems, and I certainly have written my fair share of distributed systems. I started life way back when as a C programmer, and with Golang, I feel that I have come full circle. One of the nice things about Golang is its support for writing multi-threaded applications.

I always like to write something useful when I learn about new technologies. I have been spending an increasing amount of time in Slack, and I come from the world of finance. So I figured that I could combine the two for my first application in Go.

The source code to this project can be found here:

https://github.com/magmasystems/SlackStockSlashCommand

Outline of the Steps We Will Take

  1. Create a console-based Go program that gets the price of a stock
  2. Make the application run in a web server
  3. Run the application using a local tunnel
  4. Change the code so it uses the Go-based Slack API to support a Slack Slash Command
  5. Create a new Slack application that has a Slash Command
  6. Point the new Slack application to the application that is running on the local server
  7. Test the Slash Command from within Slack
  8. Migrate to AWS by creating a new Elastic Beanstalk-based application
  9. Migrate our existing code so that it runs on Elastic Beanstalk
  10. Deploy the code to Elastic Beanstalk
  11. Change our Slack application so that it points to the new Elastic Beanstalk server
  12. Test the Slash Command again from within Slack

First Steps

The first thing that I wanted to do was to write a simple Go program that retrieves the price of MSFT stock and prints it out on the terminal. Easy enough, right? Just a simple HTTP GET request to the website of a quote provider.

It used to be as simple as making a call to the Yahoo Finance API. However, Yahoo deprecated their API, so I had to do a search for other quote providers who had up-to-date quote data that you could access for free. I did a search on Quora and found this discussion. I decided to try three quote providers: Quandl, AlphaVantage and World Trading Data.

In order to be flexible in choosing a specific quote provider, I implemented a driver factory in the code. I also put the authentication information for each quote provider within the application’s configuration file.

I used Visual Studio Code as my IDE for this project. VSC has extensions that support Golang and provides a very light way to just dive right in and write Golang code.

The code below shows the simple main loop. You are prompted to type the name of a symbol, and then the quote provider is called to retrieve the price of the stock.

package main

import (
   "bufio"
   "encoding/json"
   "errors"
   "fmt"
   "io/ioutil"
   "log"
   "os"
   "strings"

   av "./alphavantageprovider"
   quandl "./quandlprovider"
   q "./quoteproviders"
   wtd "./worldtradingdata"
)

var quoteProvider q.QuoteProvider

func main() {
   appSettings := getConfig()
   driver := appSettings.Driver
   apiKey := appSettings.APIKeys[driver]

   quoteProvider, _ = quoteProviderFactory(driver, apiKey)

   scanner := bufio.NewScanner(os.Stdin)
   print("Enter the symbol: ")
   for scanner.Scan() {
       symbol := scanner.Text()
       if len(symbol) == 0 {
           break
       }
       price := quote(symbol)
       fmt.Println(price)
       print("Enter the symbol: ")
   }
}

The quoteProviderFactory method simply returns the driver whose name was specified in the appSettings.json file.

// quoteProviderFactory - a factory that creates a quote provider
func quoteProviderFactory(providerName string, apiKey string) (q.QuoteProvider, error) {
   var provider q.QuoteProvider

   switch strings.ToLower(providerName) {
   case "alphavantage":
       provider = av.CreateQuoteProvider(apiKey)
   case "worldtradingdata":
       provider = wtd.CreateQuoteProvider(apiKey)
   case "quandl":
       provider = quandl.CreateQuoteProvider(apiKey)
   default:
       return nil, errors.New("the Quote Provider cannot be found")
   }

   return provider, nil
}

In the C# world, I would probably have put the full .NET type name of the driver within the config file, and used Activator.CreateInstance() to instantiate the driver. I don’t like having to explicitly reference the namespace of the individual drivers in Golang. I just have to get used to the fact that Golang does not have the same “power” as C#.

The Quote Provider

The quote provider package just provides a simple way of requesting the prices of a stock. We basically do the following steps:

  1. Format a URL for the specific quote service. That URL contains the name of the stock.
  2. Make an HTTP GET call to the quote service’s API.
  3. Marshal the returned payload into a Golang struct.
  4. Return the value of the field in the struct that has the stock’s current price.

All of the quote providers “inherit” from a “base class” called BaseQuoteProvider. I use quotes around the terms “inherit” and “base class” because Golang has no concept of classes and inheritance. Golang uses composition instead of inheritance.
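To make the composition concrete, here is a sketch of what the embedded base might look like. PrepareURL, FetchJSONResponse, and APIKey are the names that the providers actually use, but the bodies below are assumptions:

package quoteproviders

import (
    "io/ioutil"
    "net/http"
    "strings"
)

// BaseQuoteProvider - common plumbing that every quote provider embeds
type BaseQuoteProvider struct {
    APIKey string
}

// PrepareURL - substitutes the symbol and the API key into a URL template
func (provider BaseQuoteProvider) PrepareURL(urlTemplate string, symbol string) string {
    url := strings.Replace(urlTemplate, "{symbol}", symbol, 1)
    return strings.Replace(url, "{apiKey}", provider.APIKey, 1)
}

// FetchJSONResponse - performs the HTTP GET and returns the raw JSON payload
func (provider BaseQuoteProvider) FetchJSONResponse(url string) ([]byte, error) {
    resp, err := http.Get(url)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    return ioutil.ReadAll(resp.Body)
}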

package alphavantageprovider

import (
    "encoding/json"
    "strconv"

    qp "../quoteproviders" // relative import, matching the pre-"go get" style discussed above
)

const quoteURL = "https://www.alphavantage.co/query?function=GLOBAL_QUOTE&symbol={symbol}&apikey={apiKey}"

// AVQuoteProvider - gets quotes from the provider
type AVQuoteProvider struct {
   qp.BaseQuoteProvider
}

// CreateQuoteProvider - creates a new quote provider
func CreateQuoteProvider(apiKey string) qp.QuoteProvider {
   quoteProvider := new(AVQuoteProvider)
   quoteProvider.APIKey = apiKey
   return quoteProvider
}

// FetchQuote - gets a quote
func (provider AVQuoteProvider) FetchQuote(symbol string) float32 {
   url := provider.PrepareURL(quoteURL, symbol)
   payload, err := provider.FetchJSONResponse(url)

   if err == nil {
       data := new(quoteData)
       json.Unmarshal(payload, &data)
       //fmt.Println(data)

       f, _ := strconv.ParseFloat(data.GlobalQuote.Price, 32)
       return float32(f)
   }

   return 0
}

Now that everything was working, it was time to start thinking about integrating the quote provider with Slack.

Integrating the Quote Provider with Slack

Since I spend so much time within Slack, I would like the ability to manually check the current price of a stock from within a Slack channel. I want to issue a Slack Slash Command like “/quote symbol”, and have Slack print out the name of the stock and its current price.

Note: I called this project a Stock “bot”, but in Slack vernacular, a bot and a slash command are two different things. A bot is like a Slack user, and it has full access to the Slack message stream.

There are many enhancements that can be made to this “quote server”, such as being delivered the most recent prices at a regularly-scheduled interval, or alerting a user when a stock crosses some sort of limit. But, for now, I want to keep things very simple and just be able to see the price of a single stock when I want it.

This article was helpful in outlining the steps that you need to take in order to create a new Slack application that supports Slash Commands.

The Slack API and Golang

The first thing that I needed to do was to find a Golang version of the Slack API. There seems to be one Github project that is popular among Go developers. This package can be found here:

https://github.com/nlopes/slack

The Golang/Slack API has some structs and methods that marshal HTTP requests to and from Slack. All that I needed to do was read the HTTP request that comes from Slack, parse the request into a SlashCommand object, call the QuoteProvider to retrieve the price of the stock, and return that data back to Slack. A fairly simple enhancement.

We have to import the Golang/Slack package. In C# you would use NuGet, and in Java, you might use Maven. In Golang, you need to download the package to your local machine. In a terminal, run the command:

go get github.com/nlopes/slack

This command will download the package and put it into a directory that is in your GOPATH. On my MacBook, it is placed in the directory ~/go/pkg/darwin_amd64/github.com/nlopes/slack

Inside the program, you can import directly from a URL.

import "github.com/nlopes/slack"

In the code, we will start a web server to process the requests from Slack. We need to first retrieve the signing secret that Slack gives us when we create a new Slack app. More on that below.

We have an HTTP request handler. The request is marshaled into a SlashCommand object, and the quote provider is called. The data is formatted and returned to Slack.

// Get the signing secret from the config
signingSecret := appSettings.SlackSecret
if signingSecret == "" {
   log.Fatal("The signing secret is not in the appSettings.json file")
}

// The HTTP request handler
http.HandleFunc("/quote", func(w http.ResponseWriter, r *http.Request) {
   slashCommand, err := processIncomingRequest(r, w, signingSecret)
   if err != nil {
       return
   }

    // See which slash command the message contains
   switch slashCommand.Command {
   case "/quote":
       getQuotes(slashCommand, w)

   default:
       // Unknown command
       w.WriteHeader(http.StatusInternalServerError)
       return
   }
})

The incoming request is first verified against the signing secret, to make sure that the request actually came from Slack and was not forged. The new SlashCommand object is returned to the caller.

func processIncomingRequest(r *http.Request, w http.ResponseWriter, signingSecret string) (slashCommand slack.SlashCommand, errs error) {
   verifier, err := slack.NewSecretsVerifier(r.Header, signingSecret)
   if err != nil {
       w.WriteHeader(http.StatusInternalServerError)
       return
   }

   r.Body = ioutil.NopCloser(io.TeeReader(r.Body, &verifier))
   slashCommand, err = slack.SlashCommandParse(r)
   if err != nil {
       w.WriteHeader(http.StatusInternalServerError)
       return slashCommand, err
   }

   // Verify that the request came from Slack
   if err = verifier.Ensure(); err != nil {
       w.WriteHeader(http.StatusUnauthorized)
       return slashCommand, err
   }

   return slashCommand, nil
}

The getQuotes() function parses the slash command in order to get the multiple stock symbols. We call the quote provider as a Goroutine, and wait on a channel for the quote provider to retrieve all of the quotes.

We format the symbols and the prices into a single text block, and we create a SlackMsg that will contain the response. We then send the JSON-encoded message back to Slack.

func getQuotes(slashCommand slack.SlashCommand, w http.ResponseWriter) {
   outputText := ""

   symbols := strings.Split(slashCommand.Text, ",")
   go func() {
       theBot.QuoteAsync(symbols)
   }()

   select {
   case quotes := <-theBot.QuoteReceived:
       for _, q := range quotes {
           outputText += fmt.Sprintf("%s: %3.2f\n", strings.ToUpper(q.Symbol), q.LastPrice)
       }
       // Create an output message for Slack and turn it into Json
       outputPayload := &slack.Msg{Text: outputText}
       bytes, err := json.Marshal(outputPayload)

       // Was there a problem marshalling?
       if err != nil {
           w.WriteHeader(http.StatusInternalServerError)
           return
       }
       // Send the output back to Slack
       w.Header().Set("Content-Type", "application/json")
       w.Write(bytes)

   case <-time.After(3 * time.Second):
       w.WriteHeader(http.StatusInternalServerError)
   }
}

As you can see, all we did to integrate the quote provider with Slack is to read the request from Slack, get the prices, marshal the data into a response that Slack can understand, and send the response back to Slack.

Now we have to create a new Slack application and a Slash Command and hook our code up to Slack.

Hooking up Slack to the SlashCommand

The first thing we need to do is to tell Slack how to access the quote server. But first, we will talk about local development for testing.

Local Tunnel

For a first step, I want to have my quote server run on a local web server on my laptop. But how will Slack know how to “reach in” and communicate with my local web server? The answer is to use a local tunnel proxy.

There are a few frameworks for establishing a local tunnel between Slack and your laptop; localtunnel and ngrok are two popular choices.

For now, I am going to use localtunnel. To install it, go into a Terminal and run the command

npm install -g localtunnel

Running the Stockbot Locally

Start Stockbot normally.

go run application.go

Then launch localtunnel.

$ lt --port 5000 --subdomain slackstockbot
your url is: https://slackstockbot.localtunnel.me

As you can see, Stockbot can now be accessed at https://slackstockbot.localtunnel.me, which tunnels through to local port 5000. Remember this URL, because we need to tell Slack the address to which it directs the /quote command.

Creating a New Slack Application

The first thing to do is to create your own private Slack workspace in which you can experiment. I created a new Slack workspace for my CTO as a Service consultancy. This new workspace can be found at ctoasaservice.slack.com.

The Slack API homepage allows you to create a new Slack app.

Slack will automatically assign you various secret codes that you will use for authorization and verification purposes.

Then you will choose the box that says “Create a Slash Command”. You will be presented with another form in which you will specify the syntax of the new Slash Command, along with the Request URL (remember I told you to remember that localtunnel URL).

The /quote command will take a comma-separated list of strings, where each string is the symbol of a stock. For example, /quote AMZN,MSFT requests quotes for both Amazon and Microsoft.

After saving the form, Slack will confirm that it knows about the new Slash Command.

Now we can set up OAuth and permissions so that Slack can finish connecting your Slack workspace to the new app.

Click on the Install App to Workspace button.

You will get your OAuth token. Also, input the redirect URL.

When all of this is done, Slack will ask you to install the app within your Slack workspace.

Click on the Install button. When this is done, Slack will show you that the Stockbot app is now installed within your workspace.

Testing the Stockbot

Run the Stockbot app and launch localtunnel:

go run application.go
lt --port 5000 --subdomain slackstockbot

Now go into the Slack workspace, and in the message area, type a slash. Slack will begin to show you the list of slash commands that are available. As you type more, Slack will further filter the list of available commands. Finally, when you type /quote, Slack will show you the Stockbot command.

Type /quote AMZN. Slack should then come back with the current price of Amazon’s stock.
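Under the hood, the reply that Slack renders is just the JSON-encoded slack.Msg produced by getQuotes(). With a single symbol, it would look something like this (the price shown is illustrative):

{"text":"AMZN: 1845.23\n"}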

Success !!!!!!

Next Steps – Moving to the Cloud

Now that we have everything running on a local web server, the next step is to move it to an external host. For that, we will use Amazon Web Services (AWS). There is a service on AWS called Elastic Beanstalk that makes the process of setting up a web application very simple. There are a few small files that we will need to add to our application in order to have it work properly within Elastic Beanstalk.

In order to move the Stockbot to Elastic Beanstalk, I am going to take a slight detour. I will set up the Elastic Beanstalk web server, download the sample Go-based application code that Elastic Beanstalk generates, merge the Stockbot code into the generated code, and then deploy the merged code up to Elastic Beanstalk.

An Outline of What We Will Do

  1. Create the Elastic Beanstalk-based Go application for the Slack Stock Bot. This application will come with some sample code that Elastic Beanstalk generates.
  2. Set up a directory on our local machine for our Slack Stock Bot source code, and initialize that directory.
  3. Set up SSH.
  4. Copy the generated sample code to our local directory so that we can have a jump-start.
  5. Modify the generated code so that it implements all of the logic in our Slack Stock Bot.
  6. Deploy the new code to the Elastic Beanstalk environment.

Setting up Elastic Beanstalk on AWS

Prerequisites

  • Download the Elastic Beanstalk command line utility to your local computer.
  • On AWS, create a new key pair, and call it aws-eb.
    • After the key pairs are created, the files aws-eb and aws-eb.pub should be located in your ~/.ssh directory.

Create the new ElasticBeanstalk Application

Go to ElasticBeanstalk. We are going to create a new Go application called Slackstockbot.

I chose the option to create a sample application, just so I have some AWS config files that I know will work.

After clicking on the Create New Application link, we can fill in the details of the new application.

Create the New ElasticBeanstalk Environment

An application can have multiple environments. For example, one environment might be “production” and another environment might be “development”.

In the dashboard, find the Actions button and choose the Create New Environment menu. Then create a new Web Server Environment.

After you click on the Select button, you will get a dialog that lets you configure the new environment.

After you click on the Create button, ElasticBeanstalk will start to create a new environment. This takes a few minutes.

When the new environment has been created, you can see it in the ElasticBeanstalk dashboard.

If you navigate to this new site using Chrome, you will see the following website:

Getting the Source Code Ready

Create a new directory that will hold the source code of the new Slackstockbot. I created a new directory in ~/Projects/SlackStockBot.

We need to initialize the source code directory for ElasticBeanstalk to use. We created SlackStockBot in the us-east-2 region.

Run the command

eb init

You will see a list of AWS regions. After you choose the proper region, you should see SlackStockBot come up in the list of available applications.

Setting up SSH on the New Environment

We will eventually need to SSH into the new EC2 instance that is associated with the new environment. From the same directory that you used above, enter the command:

eb ssh --setup slackstockbot-dev

SSH into the New Server

Run the command

eb ssh slackstockbot-dev

You should see something like this:

The newly-deployed Go app will be in the /var/app/current directory.

To find out the IP address of the new instance, you can go into the EC2 dashboard and find the machine that was just created for the new instance.

The Go Source Code for the Sample Website

When we first set up the EB application, we chose to have sample code generated for us. The sample code is shown below.

A log file is set up, and the HTTP server listens on port 5000 for GET and POST requests. If a GET / request is received, it serves up index.html. If a POST / is received, the body of the post is logged. If a POST /scheduled is received, then some info from the request headers is logged.

package main

import (
  "io/ioutil"
  "log"
  "net/http"
  "os"
)

func main() {
    port := os.Getenv("PORT")
    if port == "" {
        port = "5000"
    }

    f, _ := os.Create("/var/log/golang/golang-server.log")
    defer f.Close()
    log.SetOutput(f)

    const indexPage = "public/index.html"
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        if r.Method == "POST" {
            if buf, err := ioutil.ReadAll(r.Body); err == nil {
                log.Printf("Received message: %s\n", string(buf))
            }
        } else {
            log.Printf("Serving %s to %s...\n", indexPage, r.RemoteAddr)
            http.ServeFile(w, r, indexPage)
        }
    })

    http.HandleFunc("/scheduled", func(w http.ResponseWriter, r *http.Request) {
        if r.Method == "POST" {
            log.Printf("Received task %s scheduled at %s\n",
                r.Header.Get("X-Aws-Sqsd-Taskname"), r.Header.Get("X-Aws-Sqsd-Scheduled-At"))
        }
    })

    log.Printf("Listening on port %s\n\n", port)
    http.ListenAndServe(":"+port, nil)
}

We can change this source code so that the logic for the Slack Stock Bot is in there.
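As a rough sketch of that merge (assuming the quoteHandler function from the Slack section above, plus the "log", "net/http", and "os" imports), the rewritten main() might look like this:

func main() {
    // Elastic Beanstalk supplies the listening port via the PORT environment variable.
    port := os.Getenv("PORT")
    if port == "" {
        port = "5000"
    }

    // Keep the sample's logging convention so output lands in /var/log/golang.
    if f, err := os.Create("/var/log/golang/golang-server.log"); err == nil {
        defer f.Close()
        log.SetOutput(f)
    }

    // Route Slack's slash-command POSTs to the Stock Bot handler.
    http.HandleFunc("/quote", quoteHandler)

    log.Printf("Listening on port %s\n", port)
    http.ListenAndServe(":"+port, nil)
}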

In the source directory is a Procfile. It’s just a single line:

web: bin/application

It specifies the name and path of the program to start. In this case, the compiled Go file named application should be run.

There is also a Buildfile that tells ElasticBeanstalk how to build your application. In this case, it’s just a single-line file:

build: go build -o bin/application application.go

Copying Files To and From the New EC2 Machine

Now we can use scp to recursively copy all of the files from the EC2 machine to the current directory on our local machine:

scp -r -i ~/.ssh/slackstockbot-dev.pem ec2-user@3.13.171.203:/var/app/current/* .

Notice that the key is in a file called slackstockbot-dev.pem. When I first set up SSH on the new server, it created a private key file called aws-eb (without an extension), because there was already a file called aws-eb.pem in the ~/.ssh directory. I copied aws-eb to a file named slackstockbot-dev.pem because it’s a more descriptive name.

Note that we can also use FileZilla instead of using scp.

Modify the Source Code

Now that we have downloaded the Elastic Beanstalk-generated source code to our laptop, we need to merge the Slack Stock Bot code that we already wrote with the code that Elastic Beanstalk expects. Luckily, there isn’t too much to do.

Elastic Beanstalk generated a number of configuration and build files. One is a directory called .elasticbeanstalk that contains the configuration files that Elastic Beanstalk needs. Another is a file named Buildfile that tells Elastic Beanstalk how to build the source code that you deploy. The final file is named Procfile, and it tells Elastic Beanstalk how to run your Go application.

I want to mention how we need to change the Buildfile so that it does what we need.

The Slack Stock Bot is written in Go, and in order to interact with Slack, we use a package that is found on Github. This package is found at github.com/nlopes/slack. In order to import this package, we have the following line in our application.go file:

import "github.com/nlopes/slack"

Before Elastic Beanstalk builds the code, it has to install this package locally. We usually do this by issuing the command:

go get github.com/nlopes/slack

We need to make this command part of the build process. So, we will create a small shell file named build.sh that has the commands in it that will build the Stock Bot.

build.sh

#!/usr/bin/env bash
go get github.com/nlopes/slack
go build -o bin/application application.go
cp ./appSettings.json bin

We also need to change the Buildfile to this (and make sure that build.sh is executable, e.g. chmod +x build.sh):

Buildfile

build: build.sh

Deploying a New Version of the Application

We have to ZIP up the source of the application. Run the ZIP command from within the project’s root directory, so that the Buildfile, Procfile, and source files sit at the top level of the archive. (Important: do not zip the project folder itself from its parent directory, or everything will end up nested one level too deep and Elastic Beanstalk will not find the Buildfile.)
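For example, assuming an archive named slackstockbot.zip (the name is arbitrary), run this from inside the project root:

zip -r ../slackstockbot.zip . -x "*.git*"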

We can upload and deploy the new code from the Elastic Beanstalk console.

When you click on the Deploy button, you will see Elastic Beanstalk stop the environment, upload and build the new code, and restart the environment with the new code deployed in the /var/app/current directory.

If you see that the application did not start up properly, you will have to examine the log files. In the event that the github.com/nlopes/slack package did not download correctly, you may need to SSH into the server and pull it down yourself using the command go get github.com/nlopes/slack.

Pointing to the new URL

If you recall from above, the Slash Command still points to our local web server through the local tunnel. Now that we are hosted on AWS, Slack needs to know about the new location.

You need to go back into the Slack API website and change the URL of the Slack Stock Bot so that it points to the new Elastic Beanstalk environment.

Click on SlackStockBot

Click on Add features and functionality

Click on Slash Commands

Click on /quote and enter the URL of the Elastic Beanstalk environment:

http://slackstockbot-dev.us-east-2.elasticbeanstalk.com/quote

The Future

We have accomplished our mission, which was to write a first Golang application, integrate it with a Slack SlashCommand, and run the server on AWS.

There are some enhancements which I would like to make in the future.

  1. Make this available to other Slack workspaces besides my own. See item 2 below on why this is not feasible right now. (Hint – we are in danger of exhausting the quota of free quotes very quickly)
  2. Free unlimited real-time quotes. The three quote services all have limits around the number of quotes that you can request. Ideally, I would like to use a quote service that provides an unlimited number of quotes for free. Maybe if you are from Bloomberg or Reuters and you are reading this, how about giving me access to free quotes in exchange for attribution 🙂
  3. Alerting. I would like to have a user input a stock symbol and a target price, and be alerted through Slack when the stock reaches that target. This means using a database and storing a list of users, their webhooks, the symbols and the target prices. We could check stocks against their targets on a daily basis, or we can schedule the checks on a more frequent basis. It would also be ideal if the quote service provided alerts and could call into our server when a stock hits the target.
  4. More advanced analytics. We can deliver more information about the stock other than its current price.
  5. Graphs and better formatting. We can use Slack’s Blocks to provide a richer user interface.
  6. Trading. Wouldn’t it be cool to hook up an interface from Slack to your broker? Of course, there are all sorts of compliance and legal issues, but nevertheless, we can dream.
  7. Serverless. We can easily transform the quote-retrieval process into a lambda function.

About Me

Marc Adler is the founder of CTO as a Service, a consultancy that provides senior-level technical services to companies who are in need of a CTO or Chief Architect on a “pay for what you use” basis. He was formerly the Chief Architect of companies like Citigroup, MetLife, ADP and Quantifi. He likes to get himself in trouble with his CIOs by insisting on coding.

Onboarding Senior Developers – Keys to Success

I have hired a bunch of senior developers for some of my clients as part of my CTO-as-a-Service consultancy. I have also hired many senior developers in my past lives in large corporations.

I think that the main thing that I need to ensure is that the new developer does not experience a sense of regret and frustration when they walk in the door. I remember the things that have frustrated me in the past, and I make sure that these situations are not repeated with the new developer.

Here is what I try to have set up on the day that they join:

1) A new laptop with all the power that a heavy-duty developer needs.

2) All accounts have been set up. Nothing more frustrating than having the developer sit around for a few days, waiting for access to email and Github.

3) All of the software has been licensed and (maybe) pre-installed. This includes all third-party frameworks and tools that require subscriptions.

4) Up-to-date Wiki and Jira (or whatever project-management software the company is using). Make sure that the architecture and system documentation is up to date.

5) Clear tasks defined for the first few weeks. Maybe there is a small feature that the app needs right away? Give it to the new dev to get them warmed up to the codebase.

6) All HR and Payroll-related items are done. If the person needs a company credit card, the card (or the application for the card) is waiting for them.

7) Proper introductions to the senior team. Does everyone know that the senior developer is joining? Do they already know how the senior developer aligns to the success of the company? If the senior developer is aligned to a certain business unit, do the people in that business unit know what the senior developer will be working on?

8) Make sure that there are people around to answer questions. Especially if the codebase has tricky parts that are difficult to understand. Make sure that all important architecture decisions have been memorialized on the Wiki.

There is nothing more satisfying to the new developer than hearing someone say “XYZ really hit the ground running, and has made an immediate impact”. Do everything you can to make sure that the new developer gets to hear that sentiment expressed by your senior staff.

Getting Started with Visual Studio Code, .NET Core, and AWS Lambda

The way that I have usually written C# .NET Core applications is to use Visual Studio 2017/2019 on my Windows laptop. Since my primary laptop is now a Mid-2012 MacBook Air, I have been using Visual Studio for Mac as my primary IDE for writing C# apps, mainly using Xamarin. I have been a big fan of Visual Studio Code for writing Node and Python apps, but I never tried to write a .NET Core app that targeted AWS Lambda. This article details the steps that I took in order to write my first C#-based Lambda function on my MacBook and deploy it to AWS.

One of CTO-as-a-Service’s clients has a synchronous function that is used to generate Microsoft Word documents from data that is stored in a SQL Server database. Currently, the document-generation process runs on a single HP DL360 server. When many people need to generate documents at the same time, the performance of the server degrades so severely that it impacts the company.

As part of the migration to AWS that I am doing, I wanted to take this document-generation process and move it to a Lambda function on AWS. This way, when the company has to generate a lot of documents at month-end, we can kick off many instances of a Lambda function, each of which generates a single document. Each generated document is stored on S3, and a notification is broadcast on SNS.

I wanted to refactor the code into a Lambda function, but I wanted to do so using Visual Studio Code on my lightweight MacBook instead of using my much-heavier Windows machine. Of course, I could have used my MacBook to remote into my Windows machine, but out of curiosity, I wanted to see if I could do everything on my MacBook.

The first step was to read the AWS documentation on writing Lambda functions using C#. The AWS reference article on .NET development and Lambda functions is here:

https://docs.aws.amazon.com/lambda/latest/dg/dotnet-programming-model.html

Prerequisites on your Computer

Install the AWS CLI

Install the extensions to the dotnet command line. These extensions will let you deploy and invoke a Lambda function from the command line.

dotnet tool install -g Amazon.Lambda.Tools

Install the AWS Lambda Templates extension to the dotnet command line:

dotnet new -i Amazon.Lambda.Templates

Make sure that the new templates have been installed by running this command:

dotnet new -all

In order to generate the code, you need to know which profile you will be using when the Lambda function is deployed and executed. You can find the name of your profile by viewing the file ~/.aws/credentials. The profile should contain your access key, your secret key, and optionally, the region and the output format.
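For reference, a minimal ~/.aws/credentials entry has this shape (the key values here are placeholders; the region line is optional, per the note above):

[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
region = us-east-1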

Also, before you start, go into the IAM console on AWS and make sure that the IAM role that you use has policies that will let you access Lambda functions, as well as letting the Lambda functions access certain AWS services (like S3, SNS, Dynamo, etc).

In Visual Studio Code, you should do the following:

  • Install the AWS Toolkit for Visual Studio Code extension
  • Make sure that the various C# extensions have been installed, most notably C# for Visual Studio Code
  • Install the NuGet Package Manager extension

Generate the Project and Code

Generate a simple skeleton project

Open up a Terminal. Create a new directory, and cd to that directory. For example,

mkdir MyFirstLambda

cd MyFirstLambda

We want to generate the skeleton project and code. Run the command:

dotnet new lambda.EmptyFunction --name DocGenerator --profile default --region us-east-1

This will create a directory called ./MyFirstLambda/DocGenerator.

Notice that we are using a profile named default. This should be an entry in the ~/.aws/credentials file.

In Visual Studio Code, open the folder containing your new project. Note that you should open the folder below the new directory you created. In the case above, open the DocGenerator folder, not the MyFirstLambda folder.

When you open this folder in Visual Studio Code, you will be prompted to restore some files. In addition, the .vscode directory might be created for you.

Add build, deploy, and invoke commands to the tasks.json file. (see the Appendix below)

Once the tasks.json file has been set up, you have three commands available to you: Build, Deploy, and Invoke. I just cycle through these commands using Visual Studio Code’s Terminal/Run Task menu. At the end of this cycle, you should have your new Lambda function built, deployed on AWS, and tested.

A simple Lambda function will be generated for you. This function looks like this:

// Function.cs

using Amazon.Lambda.Core;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace DocGenerator
{
  public class Function
  {
      public string FunctionHandler(string input, ILambdaContext context)
      {
          return input?.ToUpper();
      }
  }
}

The entry point is defined in the file named ./src/DocGenerator/aws-lambda-tools-defaults.json:

"function-handler" : "DocGenerator::DocGenerator.Function::FunctionHandler"

Once your Lambda function is running, you can use the AWS Explorer panel to view the Lambda.

Adding SNS Support

In Visual Studio Code, go to the Command Palette, and use the NuGet Package Manager:Add Package function to install the Amazon.Lambda.SNSEvents package.

Write the new SNS function handler.

using Amazon.Lambda.Core;
using Amazon.Lambda.SNSEvents;
using Newtonsoft.Json;

namespace DocGenerator
{
  public class Function
  {
      public void SNSMessageFunctionHandler(SNSEvent snsEvent, ILambdaContext context)
      {
          // Serialize the incoming SNS event and the Lambda context
          // so that they show up in the CloudWatch logs.
          var jsonEvent = JsonConvert.SerializeObject(snsEvent);
          var jsonContext = JsonConvert.SerializeObject(context);

          context.Logger.Log(jsonEvent);
          context.Logger.Log(jsonContext);
          context.Logger.LogLine("-----------------------------------------");
      }
  }
}

In ./src/DocGenerator/aws-lambda-tools-defaults.json, change the function handler:

"function-handler" : "DocGenerator::DocGenerator.Function::SNSMessageFunctionHandler"

Build and deploy the new code.

Testing the Code

Go into the SNS Console and create a new topic. Let’s call it Simple-Lambda-Notification.

In the SNS Console, create a new subscription for this topic. For the protocol, choose AWS Lambda. For the endpoint, choose the DocGenerator function.

In the SNS Console, publish a message on the topic. Then look at the CloudWatch log. You should see the log messages that indicate that the message was received from SNS.
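You can also publish from the AWS CLI; this is a sketch with a placeholder topic ARN:

aws sns publish --topic-arn arn:aws:sns:us-east-1:123456789012:Simple-Lambda-Notification --message "Hello from SNS"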

The Lambda Context

The LambdaContext is passed into the handler function and contains information about the environment that the function is operating in. You can use the LambdaContext to perform logging to CloudWatch, to determine who called the function, and to get the unique request id in case you need to notify the caller asynchronously that the function has completed. The LambdaContext looks like this:

{
    "FunctionName": "DocGenerator",
    "FunctionVersion": "$LATEST",
    "LogGroupName": "/aws/lambda/DocGenerator",
    "LogStreamName": "2019/04/30/[$LATEST]10dd5bcf08994166b84a1d3189f2f18b",
    "MemoryLimitInMB": 256,
    "AwsRequestId": "c533b777-e333-45ca-a78e-0b12d63c513d",
    "InvokedFunctionArn": "arn:aws:lambda:us-east-1:901643335044:function:DocGenerator",
    "RemainingTime": "00:00:27.7060000",
    "ClientContext": null,
    "Identity": {
        "IdentityId": "",
        "IdentityPoolId": ""
    },
    "Logger": {}
}

Adding Support for the API Gateway

As illustrated in the architecture diagram above, the entry point to our Lambda function should be a REST call emanating from the AWS API Gateway.

You need to import the NuGet package named Amazon.Lambda.APIGatewayEvents in order to be able to use the C# classes that support the AWS API Gateway.

Create a new class called APIGatewayFunction. Here is the code:

using Amazon.Lambda.Core;
using Amazon.Lambda.APIGatewayEvents;
using Newtonsoft.Json;
using System.Collections.Generic;
using System.Net;

namespace DocGenerator
{
    public class APIGatewayFunction
    {
        public APIGatewayProxyResponse FunctionHandler(APIGatewayProxyRequest request, ILambdaContext context)
        {
            var jsonEvent = JsonConvert.SerializeObject(request);
            var jsonContext = JsonConvert.SerializeObject(context);

            context.Logger.Log(jsonEvent);
            context.Logger.Log(jsonContext);
            context.Logger.LogLine("-----------------------------------------");

            return this.CreateResponse(request);
        }

        private APIGatewayProxyResponse CreateResponse(APIGatewayProxyRequest request)
        {
            int statusCode = (request != null)
                ? (int) HttpStatusCode.OK
                : (int) HttpStatusCode.InternalServerError;

            PostPayload payload = JsonConvert.DeserializeObject<PostPayload>(
                request.Body ?? "{\"message\": \"ERROR: No Payload\"}");

            // The response body is just the upper-case version of the string that was passed in
            string body = (payload?.message != null)
                ? JsonConvert.SerializeObject(payload.message.ToUpper())
                : string.Empty;

            var response = new APIGatewayProxyResponse
            {
                StatusCode = statusCode,
                Body = body,
                Headers = new Dictionary<string, string>
                {
                    { "Content-Type", "application/json" },
                    { "Access-Control-Allow-Origin", "*" }
                }
            };

            return response;
        }
    }

    public class PostPayload
    {
        public string message { get; set; }
    }
}

In ./src/DocGenerator/aws-lambda-tools-defaults.json, change the function handler:

"function-handler" :  "DocGenerator::DocGenerator.APIGatewayFunction::FunctionHandler"

Build and deploy the new code. Now it’s time to create a new API Gateway that will be used to handle the REST integration to the Lambda function.

Create the API Gateway

In the first step, we go into the API Gateway dashboard and create a new REST API called DocGeneratorAPI.


We will create a single resource called Document. Any API calls for this resource should contain /document in the URL path.


We will hook up the API to the new DocGenerator Lambda function that we just created. Notice that we check the Use Lambda Proxy Integration option.


After saving the API, we will just go to the Lambda dashboard for a second to make sure that the API Gateway is a new input source for the Lambda.


Back in the API Gateway dashboard, we want to test the new API. Click on the blue lightning bolt in order to run a test.


We will POST a request that has a simple message body. If successful, we should get a response that has a capitalized version of the message.
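The test body is the same payload that appears later in the captured request dump:

{
    "message": "This is a document"
}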


We see that the response is indeed the capitalized version.

We can examine the APIGatewayProxyRequest that our function was invoked with.


{
   "Resource": "/document",
   "Path": "/document",
   "HttpMethod": "POST",
   "Headers": null,
   "MultiValueHeaders": null,
   "QueryStringParameters": null,
   "MultiValueQueryStringParameters": null,
   "PathParameters": null,
   "StageVariables": null,
   "Body": "{\n    \"message\": \"This is a document\"\n}",
   "RequestContext": {
       "Path": "/document",
       "AccountId": "XXXXXXXXXXXXXXXX",
       "ResourceId": "ec0wv3",
       "Stage": "test-invoke-stage",
       "RequestId": "88182293-6c2a-11e9-9e20-09edba43f9b6",
       "Identity": {
           "CognitoIdentityPoolId": null,
           "AccountId": "XXXXXXXXXXXXXXXX",
           "CognitoIdentityId": null,
           "Caller": "XXXXXXXXXXXXXXXX",
           "ApiKey": "test-invoke-api-key",
           "SourceIp": "test-invoke-source-ip",
           "CognitoAuthenticationType": null,
           "CognitoAuthenticationProvider": null,
           "UserArn": "arn:aws:iam::XXXXXXXXXXXXXXXX:root",
           "UserAgent": "aws-internal/3 aws-sdk-java/1.11.534 Linux/4.9.137-0.1.ac.218.74.329.metal1.x86_64 OpenJDK_64-Bit_Server_VM/25.202-b08 java/1.8.0_202 vendor/Oracle_Corporation",
           "User": "XXXXXXXXXXXXXXXX"
       },
       "ResourcePath": "/document",
       "HttpMethod": "POST",
       "ApiId": "brpqzm8gdj",
       "ExtendedRequestId": "ZAtsDEw2oAMFu6Q=",
       "ConnectionId": null,
       "ConnectionAt": 0,
       "DomainName": "testPrefix.testDomainName",
       "EventType": null,
       "MessageId": null,
       "RouteKey": null,
       "Authorizer": null
   },
   "IsBase64Encoded": false
}

Enhancements

You might notice that the FunctionHandler in the C# code does not examine the HttpMethod and the Path of the request in order to implement different behaviors. The code assumes that a POST request and a specific payload are being passed in. Of course, the FunctionHandler needs to be made bullet-proof so that it will handle different methods and paths.

Appendix

tasks.json

The tasks.json file is located in the .vscode directory of a project and contains a JSON-formatted list of tasks that Visual Studio Code can invoke.


{
   "version": "2.0.0",
   "tasks": [
       {
           "label": "build",
           "command": "dotnet",
           "type": "process",
           "args": [
               "build",
               "${workspaceFolder}/test/DocGenerator.Tests/DocGenerator.Tests.csproj"
           ],
           "problemMatcher": "$tsc"
       },
       {
           "label": "deploy",
           "command": "dotnet",
           "type": "process",
           "args": [
               "lambda",
               "deploy-function",
               "DocGenerator",
               "--region",
               "us-east-1",
               "--profile",
               "default",
               "--function-role",
               "woof_garden_canary"
           ],
           "options": {
               "cwd": "${workspaceFolder}/src/DocGenerator"
           },
           "problemMatcher": []
       },
       {
           "label": "invoke",
           "command": "dotnet",
           "type": "process",
           "args": [
               "lambda",
               "invoke-function",
               "DocGenerator",
               "--region",
               "us-east-1",
               "--profile",
               "default",
               "--payload",
               "Just Checking If Everything is OK"
           ],
           "problemMatcher": []
       }
   ]
}

Thoughts about the role of Chief Architect

Marc Adler, CTO as a Service

The following observations about the role of Chief Architect come from my four previous positions as Chief Architect. I have been Chief Architect of the Equities Division of Citigroup (350,000 people in the company, 30,000 people in Equities), of MetLife (65,000 employees), of ADP (65,000 employees), and of a small software vendor named Quantifi (roughly 50 employees). I have a mixture of large-company and small-company experience that has shaped my perceptions. In addition, I owned my own small software business for roughly ten years, and that experience has also shaped my thinking.

I may be slightly biased in my thinking, but in my opinion, the role of Chief Architect is one of the hardest roles out there, both in terms of defining the role and in terms of what is expected of you. Each company has its own idea of what a Chief Architect’s role is, and some companies don’t even know why they need a Chief Architect.

The role of Chief Architect is an extremely “vertical” one. I always like to say that a Chief Architect has to be able to interact at the CxO level, and be able to talk to the lowest-level coder. The Chief Architect has to be comfortable in executive-level meetings, and must be equally comfortable sitting next to a developer and doing pair-programming or code reviews. Oftentimes, the Chief Architect is called on to explain technology to the CxO-level people. The Chief Architect is often called on to face off with the senior-level technical people at vendors and partners. I have been directly opposite CIOs and CTOs of vendors on many occasions. The Chief Architect should know the business as well as the technology.

In my opinion, there should really be only a single Chief Architect in a division or an enterprise. There should be a single point from which all architectural decisions are made. This jibes with the list of responsibilities that a Chief Architect should have (see the list below). If there are multiple Chief Architects, then there is more room for disagreement and for falling into “analysis paralysis” mode.

Management

Many times, the Chief Architect has to manage a team of architects. At Citigroup and MetLife, I managed multiple groups which fell under the Architecture organization, and these groups included performance optimization engineers, project managers, and business analysts.

Since I have typically managed teams of architects, I have had to slice my architects into different domains. For example, at Citigroup, I had an architect responsible for Cash Trading Systems, an architect responsible for Derivatives Trading, an architect responsible for Risk, etc. I also had architects devoted to SDLC, architects devoted to performance optimization, etc. Each architect was responsible for a vertical slice of the entire organization, but the architects had to work horizontally as well.

Since the Chief Architect was regarded as a senior or area manager, the Chief Architect often reported to the CIO or CTO of a division. The Chief Architect would often be the CxO’s “right-hand man” when it came to technology. It was unimaginable that a Chief Architect would report to a development manager, since part of the architect’s responsibility was guiding development practices and doing architecture and code reviews (something that would sometimes put the development manager and architect at odds).

Hands-On Participation

A constant question revolves around how hands-on the Chief Architect should be. I advocate for a hands-on architect for various reasons.

The Chief Architect should not be known as a “yellow pad architect” if the CA is to get the respect of the development organization. The developers want to know that the Chief Architect has walked in their shoes before they accept his advice. A Chief Architect should be able to speak to any CxO or sit down and pair-program with the lowest-level developer.

Since the Chief Architect is also responsible for exploring new technologies and delivering proof-of-concepts, the Chief Architect should be able to code these POCs personally.

Therefore, I strongly advocate that the role of Chief Architect not prohibit the CA from diving into a coding role on occasions.

Roles and Responsibilities

The list below is a union of all of the responsibilities that I have had as Chief Architect.

  • Manage the Architecture Organization
  • Attend meetings with current and potential vendors
    • Be able to face off with the CTO or Chief Architect of a vendor and do technical vetting
  • Evaluate vendors and technology using an Architecture-team-developed scorecard
  • Keep abreast of new technologies that might be beneficial to the organization
    • Run the Innovation Lab
    • Meet with vendors
    • Do proof-of-concepts, often involving coding from me or a member of my architecture team
    • Attend industry conferences in order to be able to see what new technologies are hot, and to even see what our competitors are doing.
    • Occasionally give talks or sit on panels at conferences
  • Software development
    • Sometimes, the architecture organization will develop a POC or MVP in order to prove out ideas or to relieve pressure on the normal development organization
  • Give architectural approval for “big ticket” projects
  • Attend executive meetings
    • Often called on to decipher technology or render opinions on technology
  • Liaise directly with senior business stakeholders
    • Find out what kinds of business problems the stakeholders wanted to solve and propose solutions based on their needs
  • Guidance
    • Come up with reference architectures
    • Serve as a general technical resource for the development organization
  • Roadmaps
    • Provide roadmaps that show how the organization will move towards a certain future state
  • SDLC process
    • Do Architecture and Code Reviews
    • Attend Sprint kickoff and retrospectives
    • Provide approvals for code to be deployed into production
    • Advice on coding practices
  • Performance Optimization
    • Help the engineering team with the optimization of certain systems
  • Governance
    • Ensure that the organization is using the appropriate technology and software
    • Ensure that the organization is not using End-of-Life software
  • Architecture Review Boards
    • Run or participate in architecture reviews and approvals
  • Socialization of architecture across the enterprise
    • In the instances where I have been CA of a specific division, I would make the effort to find out what other CAs in other divisions were up to

AWS CodeBuild and Access to RDS

One of my clients whose application runs on AWS had no Continuous Integration (CI). The code is stored on Github, and I had just gotten the developers to write Unit Tests and Integration Tests. There are different tools that you can use for CI, including Jenkins, Travis CI, Circle CI, and more. But, since my client is a heavy user of AWS, I wanted to try AWS CodeBuild, as it seems to be tightly integrated with a lot of other AWS PaaS products.

I set up CodeBuild to pull the code from Github, and to run the Integration Tests using knex mocks. Everything worked smoothly.

The next step was to set up a Postgres/RDS database that was devoted to CI testing, and to switch from using mocks to using a real database.

The problem was that the application code could not access RDS. All access from CodeBuild to RDS was blocked.

The solution that I came up with was as follows:

• Note which AWS region your CodeBuild instance is running in. For example, mine is in us-east-1 (select the build and look at the details to find the region).

• In your browser, go to https://ip-ranges.amazonaws.com/ip-ranges.json, and look for the entry for CODEBUILD in the region mentioned above.

• Note the IP address associated with your region of CODEBUILD. For example, the IP address for my instance is 12.345.6.789/28. (Of course, this is a fictitious address)

• Now go to the RDS AWS Dashboard and find the instance of RDS that you want to access through CodeBuild.

• Find the Security Group that the instance of RDS is using

• Navigate to that Security Group

• Go to the Inbound Rules, and add a new rule for CodeBuild. I added a new TCP rule for 12.345.6.789/28, using port range 0-65535. (An example CLI command is shown after this list.)

• Go back to CodeBuild and run the build. CodeBuild should be able to access Github (like before) and now it can access your private RDS instance.
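If you prefer the AWS CLI to the console, the inbound rule can be added with a single command; this is a sketch using a placeholder security-group ID and the fictitious CIDR from above:

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 0-65535 --cidr 12.345.6.789/28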


The number of questions found on Google about CodeBuild users accessing RDS is such that I would think that the AWS team would make this into some kind of point-and-click visual interface.

Distributed Systems Meetup in NYC

This is part of the continuing-education process that a good CTO/Chief Architect has to maintain. This is a photo from a regular Meetup that I attend, where we discuss Distributed Systems. In this particular event, we were discussing Fault Tolerance in Distributed Systems with our host, Andrew from Venmo.

For people who want to learn about the theory behind distributed systems:

I feel that you can get your Master’s Degree in CompSci solely by going to meetups every week. So much great information is exchanged, and you meet others who are using interesting technology in their day jobs.

Does a Large Software Project Need a Software Architect?

I just answered this question on Quora. I am happy to hear any comments that you might have about my answer.

My background is that I was the Chief Architect of 4 different companies (and am now a CTO for Rent), so my answer is colored by my experiences and my biases.

There are all sorts of “architects” out there. Enterprise architects, solutions architects, business architects, software architects, infrastructure architects, etc. Everyone fancies themselves as an architect.

So, what distinguishes an Architect from an ordinary Developer, and why would you need one?

I am going to make an assumption that you are in a large company since you mentioned a “large software project”. I am going to assume that there are teams of developers, project managers, business analysts, etc, and that the project is budgeted at several million dollars.

In large companies, one of the jobs of the Architect is to make sure that what your team is developing is following certain standards. You do not want to be using end-of-life software, you may not want to be using some open source software that is only being developed and maintained by a single person, you want to use software that is a “buy” on the company’s buy/sell/hold list, you want to be using software that passes certain performance criteria, etc.

Another job of the Architect is to make sure that your project is well thought out. Are you thinking about scalability? About different kinds of failover? About certain edge cases? Is your team following good coding practices? Is the software extensible, so that new features can be added easily? Is it easy for other systems in the company to integrate with your platform? Does your application follow a future vision that the business has?

The Architect should be familiar with frameworks and patterns and lessons-learned that other areas of the company have developed, and should be able to recommend their use to you.

Often times, the Architect is closely allied with (or in charge of) the Common Services Team of your organization, so the Architect should be able to recommend frameworks that this central team has developed for the organization.

If your project did not have a good Architect, who would do these jobs for you? The Project Manager? The Product Manager? No on both counts. The Dev Lead? Maybe they would do some of these tasks, but the Dev Lead will probably live inside the silo of your team and not reach across the whole organization.

I hope this answers your question. Good luck with your project.