Machine Learning with AWS and why you should use it

Amazon Web Services is home to so many diverse services that it can make your head spin. In this article, I will try to make it a little easier for you and guide you through the Machine Learning section of the AWS Console. Machine Learning constitutes a massive part of the AWS Console, so we will only attempt a quick overview of what these services can do.

Amazon’s machine learning services are organized into three layers.

Frameworks and Hardware

The fundamental layer consists of Frameworks and Hardware. If you are experienced in Machine Learning, you will be using this layer quite often to tune your models to fit your needs perfectly.


  • TensorFlow
  • Gluon
  • Apache MXNet
  • Cognitive Toolkit
  • Caffe2 & Caffe
  • Keras
  • PyTorch

Platform Services

However, for the majority of use cases, you will only reach down to the Platform Services, which are built on top of the Frameworks and Hardware layer. Amazon tries to make its Machine Learning services as accessible and easy to use as possible. Three steps should be enough to get your trained model into production. Amazon simplified the whole process to:

  1. Build - prepare the data, build the model.
  2. Train - use provided algorithms to train your model.
  3. Deploy - the provided interface simplifies deployment to a single command.

Platform Services are made of:

  • Amazon SageMaker
  • AWS DeepLens
  • Amazon Machine Learning
  • Spark & EMR
  • Mechanical Turk

Amazon SageMaker is a fully managed service that integrates all the tools you might need to create a model. SageMaker uses Jupyter Notebooks for this process. Now you can apply the three-step process in practice.

1. Build

You will need a large set of data. You can download it directly in your Jupyter Notebook instance. Now, it’s time to clean it up! Some of the columns may contain values in different formats. You should aim for consistency and make all the values follow the same format across a column. For example, time information can be stored in either the 12-hour or the 24-hour format, sex information can be stored as either “F” or “Female”, and prices can be written either as “USD 5.25” or “$5.25”. After cleaning your data, you should divide it into two parts:

  1. Training set - used to fit the model during training
  2. Test set - used to evaluate the model and detect overfitting
    Overfitting happens when you train your model so that it fits your training data too closely, e.g. when you teach the model to focus on specific features of the training set rather than to capture its general shape. For example, if you build a model that recognizes when people are smiling, overfitting could make it fail to recognize people with missing teeth. On the other hand, there is underfitting, which means the trained model is too simple to capture the smile and recognizes all faces as smiling.
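The split described above can be sketched in a few lines of JavaScript; the 80/20 ratio and the in-memory shuffle are arbitrary choices for illustration:

```javascript
// Shuffle a copy of the dataset (Fisher-Yates) and split it into
// a training set and a test set at the given ratio.
function trainTestSplit(data, trainRatio = 0.8) {
  const shuffled = [...data];
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  const cut = Math.floor(shuffled.length * trainRatio);
  return {
    train: shuffled.slice(0, cut), // used to fit the model
    test: shuffled.slice(cut),     // used to evaluate it
  };
}
```

In a real notebook you would of course operate on your cleaned rows rather than toy numbers, but the idea is the same.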

2. Train

At this stage, you can choose from a variety of algorithms offered by AWS. If the problem you’re trying to solve is a discrete classification problem, you can use the Linear Learner Algorithm or the XGBoost Algorithm. For image classification, AWS provides the Image Classification Algorithm. Just choose the one that best fits your needs. All of them can be easily obtained using the AWS SDK. After choosing the proper algorithm and setting hyperparameters to tune your training session, you can start the training job.
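As a sketch, starting a training job through the JavaScript SDK boils down to building a parameter object and calling createTrainingJob on a SageMaker client; the job name, bucket, role ARN, algorithm image and hyperparameters below are placeholders for illustration, not real resources:

```javascript
// Hypothetical parameters for a SageMaker training job. Everything in
// angle brackets is a placeholder you would replace with your own values.
const trainingJobParams = {
  TrainingJobName: 'xgboost-demo-job',
  AlgorithmSpecification: {
    TrainingImage: '<xgboost_algorithm_image_uri>',
    TrainingInputMode: 'File',
  },
  RoleArn: '<sagemaker_execution_role_arn>',
  HyperParameters: { objective: 'binary:logistic', num_round: '100' },
  InputDataConfig: [{
    ChannelName: 'train',
    DataSource: {
      S3DataSource: { S3DataType: 'S3Prefix', S3Uri: 's3://<your_bucket>/train/' },
    },
  }],
  OutputDataConfig: { S3OutputPath: 's3://<your_bucket>/output/' },
  ResourceConfig: { InstanceType: 'ml.m4.xlarge', InstanceCount: 1, VolumeSizeInGB: 10 },
  StoppingCondition: { MaxRuntimeInSeconds: 3600 },
};
// With the SDK client ready you would then start the job with:
// new SageMaker().createTrainingJob(trainingJobParams, callback);
```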

3. Deploy

Deploying the trained model is as easy as calling the deploy method on the estimator object you used for training.

AWS AI Services

AWS provides AI Services specifically for people with no machine learning experience, letting them instantly take full advantage of the capabilities of artificial intelligence. Most of the problems we face are common, and chances are someone has already solved them. AI Services are built on top of the Platform Services and are accessible both through the AWS Console and the AWS SDKs.

You can use them to quickly present results and deliver great business value in no time. Machine learning and artificial intelligence are buzzwords: highly desirable and increasingly valuable to clients.

First, create a user in the IAM Management Console. Then assign permissions to the user directly, or add them to a group with the right permissions. Look for the permissions related to the Machine Learning section, or those dedicated to the specific Machine Learning service you want to use.

The second step is setting up the project with the SDK you want to use. I will use the JavaScript SDK to show how easy it is to get a project up and running.

The AWS SDK for JavaScript is available as the ‘aws-sdk’ package on npm.

After adding the package to your project, it is time to set up the access credentials. The recommended way is to go to your home directory and create a directory named ‘.aws’. Inside it, create a file named ‘credentials’ with the following content:
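The credentials file follows the standard AWS INI layout; the values below are placeholders for your own keys:

```ini
[default]
aws_access_key_id = <YOUR_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>
```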


The values you need are available in the user details section, under the Security Credentials tab. Now you are ready to use AWS AI Services!

Amazon Rekognition

The Rekognition class used in the examples can be imported from ‘aws-sdk/clients/rekognition’.

Object detection

This service detects real-world objects (entities) in JPEG or PNG photos. It can determine whether a picture contains objects like flowers, people, trees or tables; what the surroundings are like (night, forest); and specific events: a wedding, a graduation, etc.

You can use the service through AWS SDK in your code using:

new Rekognition().detectLabels({
	Image: { Bytes: <your_image_buffer_or_blob> },
})

The response will be an array of labels for the objects detected in the image, along with the confidence of each detection and its location in the image.
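To give an idea of what working with that response looks like, here is a small helper that keeps only the label names above a confidence threshold; the sample response below is hand-made for illustration, not real Rekognition output:

```javascript
// Extract the names of labels detected with at least the given confidence.
// The response shape ({ Labels: [{ Name, Confidence }] }) follows the
// Rekognition detectLabels documentation.
function confidentLabels(response, minConfidence = 90) {
  return response.Labels
    .filter((label) => label.Confidence >= minConfidence)
    .map((label) => label.Name);
}

// Hand-made sample response for illustration:
const sampleResponse = {
  Labels: [
    { Name: 'Person', Confidence: 99.2 },
    { Name: 'Tree', Confidence: 95.1 },
    { Name: 'Table', Confidence: 71.3 },
  ],
};
console.log(confidentLabels(sampleResponse)); // → ['Person', 'Tree']
```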

Image moderation

If the service you are building involves filtering out nude photos while keeping images that merely contain suggestive content, this AI Service can help with a single method call.

new Rekognition().detectModerationLabels({
	Image: { Bytes: <your_image_buffer_or_blob> },
})

In the response, you will get labels for potentially unwanted content that you might want to moderate.

Facial analysis

The next AI Service you might want to hear about is facial analysis. It lets you analyze the faces in an image and get information about age, emotions, gender, mouth, eyes and facial accessories. The function of the Rekognition instance you want to use is detectFaces. Additional attributes may be passed to get specific information from the image; if you want all the attributes, pass Attributes: [‘ALL’] as a parameter.
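A minimal sketch of such a call, assuming the SDK client is set up as described earlier; the image bytes are a placeholder, and the actual request is shown in a comment since it requires live credentials:

```javascript
// Hypothetical parameters for Rekognition detectFaces; the Bytes value
// is a placeholder for your own image Buffer or Blob.
const detectFacesParams = {
  Image: { Bytes: '<your_image_buffer_or_blob>' },
  Attributes: ['ALL'], // request the full set of facial attributes
};
// With the SDK client ready you would call:
// new Rekognition().detectFaces(detectFacesParams, (err, data) => {
//   if (!err) console.log(data.FaceDetails);
// });
```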

On top of that, there are the Face Comparison service, Celebrity Recognition, Text in Image detection and Video Analysis.

Amazon Lex

Another great service offered by AWS is Amazon Lex. It lets you quickly create intelligent assistants and introduce them into your application. The first step is to create a new bot. We will use the LexModelBuildingService class, which can be imported from ‘aws-sdk/clients/lexmodelbuildingservice’.
Every bot needs intents that will be triggered by specific kinds of messages. We will create a sample intent that lets parents get their child’s grades:

new LexModelBuildingService().putIntent({
  name: 'GetGrades',
  description: 'Get the grades in specified subject',
  sampleUtterances: [
    'What are the grades of my child?',
  ],
  slots: [{
    name: 'Subject',
    slotConstraint: 'Required',
    description: 'Subject to get the grades from',
    valueElicitationPrompt: {
      maxAttempts: 2,
      messages: [{
        content: 'What subject are you interested in?',
        contentType: 'PlainText',
      }],
    },
  }],
  fulfillmentActivity: {
    type: 'ReturnIntent',
  },
  conclusionStatement: {
    messages: [{
      content: 'Your child grades are: 5, 5, 5, 4, 5',
      contentType: 'PlainText',
    }],
  },
})

This snippet creates an intent that will be triggered by the messages specified in 'sampleUtterances', or slight variations of them. The next step for the user is to specify the name of the subject they are interested in; the bot will reply with the message specified in 'valueElicitationPrompt' to get the subject name from the user. In this example, the fulfillment of the intent is to return a static text message with the grades, but the intent can instead use an AWS Lambda function to pass all the gathered information to your service. To do that, change ‘fulfillmentActivity’ to:

fulfillmentActivity: {
	type: 'CodeHook',
	codeHook: {
		uri: <LambdaARN>,
		messageVersion: '1.0',
	},
},

This way, you can look up the child and their grades in a database using your own service implementation. Amazon Lex bots can be easily integrated into mobile and web applications, and they come with ready-to-use integrations for Messenger and Slack.


As for further reading, I’d strongly recommend looking into the Amazon Comprehend service, which lets you analyze a piece of text to detect the most important phrases and words, or decide whether the text is positive or negative. This service can be used, for example, for categorizing user feedback.
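A minimal sketch of such a sentiment check with the JavaScript SDK; the review text is made up, and the actual call is shown in a comment since it requires live credentials:

```javascript
// Hypothetical parameters for Comprehend detectSentiment.
const sentimentParams = {
  LanguageCode: 'en',
  Text: 'The new dashboard is great, support replied within minutes!',
};
// With the SDK client ready (imported from 'aws-sdk/clients/comprehend'):
// new Comprehend().detectSentiment(sentimentParams, (err, data) => {
//   if (!err) console.log(data.Sentiment); // e.g. 'POSITIVE'
// });
```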


Google Cloud

You can also take a look at Google’s AI section and check its image and video classification capabilities, called Vision AI. It works much like Amazon Rekognition but focuses on different entities. Vision AI connects all the vision services and lets you describe faces, detect objects and moderate images, all with a single request. For a deeper dive into Machine Learning, Google Cloud also offers a platform for building, training and deploying your own models.


We’ve barely scratched the surface, but I hope this short introduction to AWS Machine Learning services has sparked your imagination to explore, invent and find completely new, amazing AI applications within your services. Now you know which services can help you overcome challenges in building smart services for your users. With the power of AI, your applications can be elevated to a completely different level and provide a smoother, more enjoyable user experience.