In this article we will see how to create an access policy that limits access to an AWS S3 bucket to a single user and, at the same time, limits that user to accessing only that one AWS S3 bucket.

  Many times you will need to separate user access over AWS objects, and you need to make sure users don't have access to each other's resources unless you want them to.

 

 What is an AWS IAM Policy ?

   A policy is a document that formally assigns permissions to a user, group, role, or resource; it explicitly lists the permissions being granted.

A policy lets you specify the following:

  • Actions – which actions are allowed.

  • Resources – which resources the actions are allowed on.

  • Effect – whether the action is allowed or denied.

What is the format of a policy ?

Policies are created as JSON documents and can have one or more statements, each of which describes one set of permissions.

Policy example

  • this policy will allow listing of a bucket called my_bucket, as sketched below.
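A minimal sketch of such a policy might look like this:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3:ListBucket"],
          "Resource": ["arn:aws:s3:::my_bucket"]
        }
      ]
    }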

 

    Now that you have a basic idea of what a policy is and what we use it for, we will continue with our topic and create a policy that restricts a user's access to a single bucket and to the list of actions we want it to have.

 

User Access Policy over a single AWS S3 bucket
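A policy along these lines might look like the following sketch (the bucket name userftpjoe-bkt comes from this example; adjust it to your own):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "s3:ListAllMyBuckets",
          "Resource": "arn:aws:s3:::*"
        },
        {
          "Effect": "Allow",
          "Action": "s3:*",
          "Resource": [
            "arn:aws:s3:::userftpjoe-bkt",
            "arn:aws:s3:::userftpjoe-bkt/*"
          ]
        }
      ]
    }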

You can see we allow the user to list the existing buckets, and then we state that he will have access to all actions on the S3 bucket called userftpjoe-bkt.

 

How to use Amazon Console to create the policy

1 – Log in to your Amazon Console and go to IAM.


 

2 – Choose to create a new policy


 

3 – Choose the Create Your Own Policy option.


4 – Provide the name, description, and the body of the policy. Validate first and then create the policy.


 

 

How to use Amazon command line utility to create the policy
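Based on the parameters described below (a policy file, a user name, and a policy name), the command was most likely aws iam put-user-policy, which adds the policy inline to the user; a hedged sketch:

    aws iam put-user-policy --user-name userFtpJoe \
        --policy-name usrFtpJoe-S3-access-policy \
        --policy-document file:///tmp/usrFtpJoe-bucket-policy.json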

   Here /tmp/usrFtpJoe-bucket-policy.json must contain the above policy definition, and the user name must be provided; in my case the user name is userFtpJoe and the policy name I chose is usrFtpJoe-S3-access-policy.

 

You can attach the policy to a user using the GUI or the command line AWS client.

Use Amazon Console to attach the policy:

1 – Select the policy and choose Attach.


 

2 – Choose the user to attach the policy to and click on the Attach Policy button.


 

Use Amazon Command Line utility to attach the policy to a user

You need to provide the command with the policy ARN and the user name.
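A sketch of the attach command (the account ID in the ARN is a placeholder):

    aws iam attach-user-policy --user-name userFtpJoe \
        --policy-arn arn:aws:iam::123456789012:policy/usrFtpJoe-S3-access-policy

Note that attach-user-policy applies to managed policies; an inline policy created with put-user-policy is already in effect on the user.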

 

How to get the policy ARN ?
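One way to look it up, assuming the policy name used above, is to filter the output of aws iam list-policies:

    aws iam list-policies \
        --query 'Policies[?PolicyName==`usrFtpJoe-S3-access-policy`].Arn' \
        --output text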

 

 

 

Here is a direct comparison between the services provided by Microsoft Azure and Amazon Web Services. The list contains all the services that could be mapped as similar in purpose and usage.

| Category | Microsoft Azure | Amazon Web Services (AWS) |
|---|---|---|
| Available Regions | Azure Regions | AWS Global Infrastructure |
| Compute Services | Virtual Machines (VMs) | Elastic Compute Cloud (EC2) |
| | Cloud Services, Azure Websites and Apps | Amazon Elastic Beanstalk |
| | Azure Visual Studio Online | None |
| Container Support | Docker Virtual Machine Extension | EC2 Container Service (Preview) |
| Scaling Options | Azure Autoscale | Auto Scaling |
| Analytics/Hadoop Options | HDInsight (Hadoop) | Elastic MapReduce (EMR) |
| Government Services | Azure Government | AWS GovCloud |
| App/Desktop Services | Azure RemoteApp | Amazon WorkSpaces, Amazon AppStream |
| Storage Options | Azure Storage (Blobs, Tables, Queues, Files) | Amazon Simple Storage Service (S3) |
| Block Storage | Azure Blob Storage | Amazon Elastic Block Storage (EBS) |
| Hybrid Cloud Storage | StorSimple | AWS Storage Gateway |
| Backup Options | Azure Backup | Amazon Glacier |
| Storage Services | Azure Import Export | Amazon Import/Export |
| | Azure File Storage | AWS Storage Gateway |
| | Azure Site Recovery | None |
| Content Delivery Network (CDN) | Azure CDN | Amazon CloudFront |
| Database Options | Azure SQL Database | Amazon Relational Database Service (RDS), Amazon Redshift |
| NoSQL Database Options | Azure DocumentDB | Amazon DynamoDB |
| | Azure Managed Cache (Redis Cache) | Amazon ElastiCache |
| Data Orchestration | Azure Data Factory | AWS Data Pipeline |
| Networking Options | Azure Virtual Network | Amazon VPC |
| | Azure ExpressRoute | AWS Direct Connect |
| | Azure Traffic Manager | Amazon Route 53 |
| Load Balancing | Load Balancing for Azure | Elastic Load Balancing |
| Administration & Security | Azure Active Directory | AWS Directory Service, AWS Identity and Access Management (IAM) |
| Multi-Factor Authentication | Azure Multi-Factor Authentication | AWS Multi-Factor Authentication |
| Monitoring | Azure Operational Insights | Amazon CloudTrail |
| | Azure Application Insights | Amazon CloudWatch |
| | Azure Event Hubs | None |
| | Azure Notification Hubs | Amazon Simple Notification Service (SNS) |
| | Azure Key Vault (Preview) | AWS Key Management Service |
| Compliance | Azure Trust Center | AWS CloudHSM |
| Management Services & Options | Azure Resource Manager | AWS CloudFormation |
| API Management | Azure API Management | Amazon API Gateway |
| Automation | Azure Automation | AWS OpsWorks |
| | Azure Batch, Azure Service Bus | Amazon Simple Queue Service (SQS), Amazon Simple Workflow (SWF) |
| | Visual Studio | AWS CodeDeploy |
| | Azure Scheduler | None |
| | Azure Search | Amazon CloudSearch |
| Analytics | Azure Stream Analytics | Amazon Kinesis |
| Email Services | Azure BizTalk Services | Amazon Simple Email Service (SES) |
| Media Services | Azure Media Services | Amazon Elastic Transcoder, Amazon Mobile Analytics, Amazon Cognito |
| Other Services & Integrations | Azure Machine Learning (Preview) | Amazon Machine Learning |
| | Logic Apps | AWS Lambda (Preview) |
| | Service Bus | AWS Config (Preview) |

It is tough to make a significant decision like picking a cloud infrastructure vendor without actually trying them out. Both Amazon and Azure offer a free tier of service, so you can do just that.

 

Many times you need to change instance size based on your needs or the workload you are having. What better than an automated task (script) to do this for you?

Use this syntax to change an existing EC2 instance type, as sketched below.
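A minimal sketch (the instance ID and target type are placeholders); note that the instance must be stopped before its type can be changed:

    #!/bin/bash
    INSTANCE_ID="i-0123456789abcdef0"   # placeholder instance ID
    NEW_TYPE="t2.medium"                # placeholder target type

    aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
    aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID"
    aws ec2 modify-instance-attribute --instance-id "$INSTANCE_ID" \
        --instance-type "{\"Value\": \"$NEW_TYPE\"}"
    aws ec2 start-instances --instance-ids "$INSTANCE_ID"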

 

You can also watch the video tutorial on how this can be done with very little effort.

 

Many times people get into using the cloud on the Free Tier account and end up spending more than they imagined. AWS can be tricky and very complicated to manage without the right tools in hand.
The best way to start using AWS (Amazon Web Services) is to understand the billing system it uses and make sure you are aware of the infrastructure you build, and manage it efficiently and smartly.

To bring more transparency to the whole process, AWS came up with the AWS Calculator, a great tool that can be used as a reference for the costs you might incur while using the AWS cloud.

The AWS Calculator tool will give you an estimate of your future costs and also enables you to play around with different infrastructure scenarios.

Another great feature this tool comes with is the sample templates located on the right side of the main page; these are detailed solutions that can be used as a base for your future application.

To get to the AWS Calculator, just follow the link below:

Amazon Web Services Cost Calculator



   Many times you get lost in the craziness of building EC2 instances and new systems while working in a very agile IT environment.

    With the new era of cloud computing comes the open door to "unlimited" resources on the fly, with limited configuration effort. You get dragged into the spending pit, and this is where you separate the boys from the real nerds :). Having the capability to oversee the entire infrastructure is quite hard when your company does not want to buy tools (Cloud Management Tools) to do it for you.

      So in such cases you need to come up with nerdy scripts and sweat a bit to build your own tools to monitor and manage the cloud infrastructure. Lucky for us, Amazon AWS has a bunch of tools to enable administration, but they are all complicated and you need to learn them (not ideal for somebody who is already overworked).

 So, getting to the point: in this tutorial we will see how we can use the aws ec2 client to create EC2 usage reports.

I assume you already have the aws ec2 client installed – in case you do not, follow this tutorial on how to do it in 1 minute – How to install and configure the AWS client.

It is important to understand the AWS EC2 EBS volume types; this way you will have a better view of what you have and what you really use.

This is a map of EBS types and their corresponding prices in the Asia Pacific region:

Amazon EBS General Purpose SSD (gp2) volumes

  • $0.12 per GB-month of provisioned storage

Amazon EBS Provisioned IOPS SSD (io1) volumes

  • $0.138 per GB-month of provisioned storage
  • $0.072 per provisioned IOPS-month

Amazon EBS Throughput Optimized HDD (st1) volumes

  • $0.054 per GB-month of provisioned storage

Amazon EBS Cold HDD (sc1) volumes

  • $0.03 per GB-month of provisioned storage

Amazon EBS Snapshots to Amazon S3

  • $0.055 per GB-month of data stored

For prices in the rest of the regions, see the link here – AWS EBS Prices.

 

How do I get charged by Amazon for my volume ?

This is not very transparent and many get confused.

We all know that Amazon provides EBS volumes at a monthly rate, e.g. 1 GB = $0.10. But what if I need 1 TB of storage for only a single day out of a month ?

Do I pay for the full month ?

The answer is no !

You will be charged for quantity multiplied by time (on an hourly basis), so the final cost will be the weighted average of usage over the month.

I hope this makes sense.

Example 1:

Full month use of 100 GB:

100 * $0.12 = $12 – this is for a full month of usage.

100 GB for a single day (24h) will cost:

$12 (total sum) / 30 (days) = $0.40

 

   The most used EBS volume type is gp2, at around 10 cents per GB-month depending on where you are located. These 10 cents can add up to big amounts, as you are getting charged for the full size of the volume and not for whatever part of it you are using.

  Example 2:

You add a volume of 100 GB but only store 10 GB on it; you are still going to pay for the 100 GB. (EBS volumes are charged by allocated size, not by used size.)

 

So let us go over some reports I normally use for my daily cloud administration.

Count how many gp2 volumes you have
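A sketch using the describe-volumes filter and a JMESPath length() query:

    aws ec2 describe-volumes --filters Name=volume-type,Values=gp2 \
        --query 'length(Volumes)'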

Sum the total amount of GB you have as the gp2 volume type
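Similarly, JMESPath sum() can total the sizes:

    aws ec2 describe-volumes --filters Name=volume-type,Values=gp2 \
        --query 'sum(Volumes[].Size)'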

Sum the total amount of GB and calculate the $ value spent on the gp2 volume type

  • we will assume you are getting the rate of $0.12 per GB per month, as in the sketch below.
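A hedged sketch that pipes the sizes through awk to apply the rate:

    aws ec2 describe-volumes --filters Name=volume-type,Values=gp2 \
        --query 'Volumes[].Size' --output text |
    awk '{ for (i = 1; i <= NF; i++) gb += $i }
         END { printf "Total: %d GB, monthly cost: $%.2f\n", gb, gb * 0.12 }'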

Group EBS volume types and calculate their usage

  • this is a tricky script, as you need to group and total the sizes of all volume types.
  • you may also have volumes without any snapshots on them, so the output you feed to awk must be processed accordingly.

I have put the script inside an executable file called grpvols.sh.

This will sum the size of your volumes based on the volume type.
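A hypothetical reconstruction of grpvols.sh (the grouping is done in awk; the exact output format is an assumption):

    #!/bin/bash
    # grpvols.sh - sum EBS volume sizes grouped by volume type
    aws ec2 describe-volumes \
        --query 'Volumes[].[VolumeType,Size]' --output text |
    awk '{ size[$1] += $2 }
         END { for (t in size) printf "%s\t%d GB\n", t, size[t] }'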

As you can see, we have 2981 GB of the gp2 volume type, 10 GB of io1, and also 100 GB of the standard volume type.

 

 

I hope this was useful, and if anyone has an opinion please feel free to drop me a comment.

AWS Volume cleanup

    Moving to the cloud is great and offers you huge flexibility; startups became easy to launch, and we no longer need to invest huge amounts of money in something that might not work.

All this is great but tracking AWS costs is complicated, especially when you have several major business units, distributed teams, and dozens of projects.

In this tutorial we will focus on how we can track and remove unused volumes from our AWS infrastructure.

So what is the motivation for this ? 

   Organizing EBS volumes and deleting detached volumes on a regular basis can help decrease AWS spend. See how easy it is to identify, tag, snapshot, detach, and delete those unused EBS volumes.

First we need to understand Volume state types:

  1. creating – The volume is being created and is not yet ready.
  2. available – The volume is available for use.
  3. in-use – The volume is in use/attached.
  4. deleting – The volume is being deleted.
  5. error – The volume is in an error state (pray that you have a snapshot of it).

We are going to look for volumes with the state available, since these volumes are ready to be used but are not attached to any of our instances.

I assume you have already installed your AWS EC2 client; if not, you can follow this easy tutorial here – Install AWS EC2 client.

 

Things to consider

You have AWS infrastructure spread across multiple Regions.

  Using the AWS console can be very time consuming and not very efficient when dealing with multiple regions and multiple projects with 10+, maybe 100+, instances that carry 2-3 volumes each and also have snapshots generated. Not to mention when you have more than one administrator and your business is in full development mode.

Now let us get to work 

 – I am going to walk you through the process I use to manage my unused volumes/snapshots and how I clean them up across all regions.

List all available regions:

  • this script will list all available regions in AWS.
  • we will use this output to loop through all the regions and scan for unused volumes.
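A sketch of the region listing:

    aws ec2 describe-regions --query 'Regions[].RegionName' --output text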

List all volumes in a region:

  • this script will loop through the volumes in your region us-east-1 and find all volumes with the status available.
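For example (region hard-coded for illustration):

    aws ec2 describe-volumes --region us-east-1 \
        --filters Name=status,Values=available \
        --query 'Volumes[].VolumeId' --output text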

OK, so we have two scripts that both loop to give us the volumes with the status available; now we need to combine them and create a nested loop, as sketched below.
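A hedged sketch of the combined loop:

    #!/bin/bash
    # loop over every region, then over every unattached volume in that region
    for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
        for vol in $(aws ec2 describe-volumes --region "$region" \
                        --filters Name=status,Values=available \
                        --query 'Volumes[].VolumeId' --output text); do
            echo "$region $vol available"
        done
    done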

 

  • you can see that our script now loops through all regions and through all volumes in each region, and picks up every volume with the state available.

So what now ? This tutorial was about cleaning, right ?

    Using this script we can now implement an automated clean-up task by adding new functionality to it.

Before you run/execute anything we will generate the command that will be executed and examine it:

  • for the purpose of this tutorial, and to keep it clean, I have passed a specific region to my script, in this case us-east-1.
  • I have also added the delete-volume command with the volume-id option, as shown below.
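A sketch that only generates (echoes) the commands instead of running them:

    #!/bin/bash
    region=us-east-1
    for vol in $(aws ec2 describe-volumes --region "$region" \
                    --filters Name=status,Values=available \
                    --query 'Volumes[].VolumeId' --output text); do
        echo aws ec2 delete-volume --region "$region" --volume-id "$vol"
    done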

  • we can see that the commands were generated using echo and the syntax looks OK.

Executing the delete-volume command:

  • now this is up to you: run the echoed commands by copy+paste, or let the script do all the work, as I will demonstrate below.
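Dropping the echo makes the script execute the delete directly; a hedged sketch:

    #!/bin/bash
    region=us-east-1
    for vol in $(aws ec2 describe-volumes --region "$region" \
                    --filters Name=status,Values=available \
                    --query 'Volumes[].VolumeId' --output text); do
        aws ec2 delete-volume --region "$region" --volume-id "$vol"
    done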

  • this will run the commands it generates dynamically, as demonstrated in the previous steps (note the available state of the volumes).

Now we will try to list the volumes again and see if we have any volumes with the state available / not in use.

  • we can see nothing is returned.

So the final script to clear all volumes with the state available across all regions is below.
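A hedged reconstruction of the final script (test it on throwaway resources first):

    #!/bin/bash
    # delete every unattached (available) EBS volume in every region
    for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
        for vol in $(aws ec2 describe-volumes --region "$region" \
                        --filters Name=status,Values=available \
                        --query 'Volumes[].VolumeId' --output text); do
            echo "Deleting $vol in $region"
            aws ec2 delete-volume --region "$region" --volume-id "$vol"
        done
    done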

Use this script with care and test, test, test, again and again, on your DEV environment or some dummy environment. I will not be responsible for any damage this script might cause.

 

Hope this was useful…

 

In this article we will work with the s3cmd tool to manage our AWS S3 buckets. If you haven't installed it, go ahead and see my previous article on installing the s3cmd tool.

Examples of using the s3cmd tool.

We will start by using the ls command option:

  • this will list all our buckets.
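For example:

    s3cmd ls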

We can also list the content of one specific bucket:
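For example (the bucket name is a placeholder):

    s3cmd ls s3://my_bucket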

Create a new S3 bucket using the mb option:
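For example:

    s3cmd mb s3://my_new_bucket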

Add a file to a bucket using the put option:
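For example:

    s3cmd put myfile.txt s3://my_bucket/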

Add multiple files to an S3 bucket using a wildcard

  • we can use a * wildcard to copy files into an S3 bucket.
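For example:

    s3cmd put *.txt s3://my_bucket/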

Copy a file from an S3 bucket onto your local machine using the get command option:

  • the syntax is :

 s3cmd get {bucket}/{file}   {new location}
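For example:

    s3cmd get s3://my_bucket/myfile.txt /tmp/myfile.txt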

Copy a file from an S3 bucket onto your local machine, overwriting the existing file, using the --force option:
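For example:

    s3cmd get --force s3://my_bucket/myfile.txt /tmp/myfile.txt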

 

 

 

   S3cmd is a free tool and client used for uploading, retrieving and managing data in Amazon S3 and other cloud storage service providers that use the S3 protocol, such as Google Cloud Storage. It is used by users who are familiar with command line programs. It is also ideal for batch scripts and automated backup to S3.

 How do I install s3cmd ?

I will install the tool on a CentOS host using the yum package manager.

See example below:
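A sketch of the installation; the assumption here is that s3cmd comes from the EPEL repository:

    yum install -y epel-release
    yum install -y s3cmd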

To finalize the s3cmd tool installation you need to create the config file, i.e. the .s3cfg file.

See example below on how to configure the conf file.
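The interactive configuration is driven by:

    s3cmd --configure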

  • you will have to provide it with your AWS Access Key, Secret Key, default region and a password used for encryption.

  • a conf file will be created in the /root/.s3cfg location.

Note:

  • make sure it can only be read by the root/owner user, since it stores your keys and password in plain text.

Test that your s3cmd tool was installed correctly.

Check the installed version:
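For example:

    s3cmd --version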

List all available S3 buckets under your account:
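For example:

    s3cmd ls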

 

In the next article we will see how we can use the s3cmd tool to manage our AWS S3 storage buckets.

Working with S3CMD tool on AWS S3 storage buckets

In this short article I will go over the basic commands used to work with the AWS S3 storage command line client.

For this you will need to install the AWS client first and have an AWS account ready for use (it is free and comes with a good amount of free EC2 hours).

To install the AWS client follow this article; it is as easy as 1, 2, 3 :).

 

So the s3 utility that comes with the AWS client is a powerful tool that enables you to manage your AWS S3 buckets from the command line, and you can also use it in scripts to create automated tasks.

  I will start by creating a new bucket and from here execute some basic tasks and commands. 

Add a new bucket

  • use the mb (make bucket) option followed by s3://name_of_bucket
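For example (bucket names are placeholders here and below):

    aws s3 mb s3://my-new-bucket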

List Bucket details

  • you will use the ls option followed by the bucket name
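For example:

    aws s3 ls s3://my-new-bucket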

Remove Bucket

  • use the rb (remove bucket) option followed by s3://name_of_bucket
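For example:

    aws s3 rb s3://my-new-bucket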

Remove a bucket that contains files

  • use the rb (remove bucket) option followed by s3://name_of_bucket and the --force option, as below.
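For example:

    aws s3 rb s3://my-new-bucket --force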

If you try to remove a bucket that contains files without --force, the command will fail with an error (a BucketNotEmpty error from S3).

Add new objects to a bucket:

Adding a new object to an S3 bucket is quite easy and similar to the Linux copy command (cp).

Syntax:

aws s3 cp {source} {destination} {option}

Examples:

Copy an object from one bucket to another
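For example:

    aws s3 cp s3://bucket-a/file.txt s3://bucket-b/file.txt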

Copy an object from one bucket to the same bucket
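For example:

    aws s3 cp s3://bucket-a/file.txt s3://bucket-a/file-copy.txt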

Copy from an S3 bucket to a local folder
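For example:

    aws s3 cp s3://bucket-a/file.txt /tmp/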

Sync the content of two buckets

  • the sync S3 option recursively copies new and updated files from the source directory to the destination.

Syntax: 

aws s3 sync {source} {destination} {option}

Examples:

Sync the content of a local folder and an S3 bucket
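For example (paths are placeholders):

    aws s3 sync /path/to/local/folder s3://bucket-a/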

 

I will post more advanced AWS S3 command line examples in future articles.

 

In this article we will see how we can recover an AWS EC2 instance stuck in a never-ending Initializing state.

Causes for this type of EC2 instance behavior are listed at this link – EC2 troubleshooting – and are well detailed in the AWS documentation.

But for some cases, like mine, there is no official troubleshooting guidance.

So here is what I got in my System Log.

 

  • to get your EC2 system log, select it from the EC2 instance options list.


 

And here is the content of the log, just the part we are interested in:

What could’ve happened ?

  • it looks like the volume xvde has problems, and this affects the entire host.

How can I fix this ?

  • the host is not reachable !
  • I cannot edit the /etc/fstab file and remove the xvde entry !

Solution

 1 – Start a new instance.

2 – Detach the root volume (/dev/sda) from the bad host.

  • the root volume holds the /etc/fstab file.


 

3 – Attach the detached volume to the instance we started at step 1.

  • choose the instance from the drop-down list.
  • choose the volume name as /dev/sdx (easy to spot).


 

 

4 – Log in to your new instance and list the volumes attached to the instance using the lsblk command.

  • you don't need to format the disk; formatting would erase the data.

5 – Create a mount point and mount the attached volume.

  • the contents of the /dev/xvdx2 volume are now reachable for you to edit.
  • edit the /etc/fstab file and remove the entry with the failed disk, as sketched below.
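A sketch of the mount-and-edit step (the mount point name is an assumption; the device comes from this example):

    mkdir /mnt/rescue               # create a mount point
    mount /dev/xvdx2 /mnt/rescue    # mount the attached root volume
    vi /mnt/rescue/etc/fstab        # remove the line referencing the failed xvde disk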

Save the changes to the /etc/fstab file.

6 – Unmount the /dev/xvdx2 volume and detach it from the new instance.

7 – Reattach the volume to your failing instance.

Note:

  • you have to attach the volume as /dev/xvda.

8 – Start your failing instance and see the System Logs.

  • your host should be up and running.

You will also need to see what went wrong with the failed disk; in my case this was due to the fact that I was using EC2 instance storage for a mount point. Lucky for me, this was only temp space :).

 

By default, password SSH access is disabled, and root SSH access is disabled as well. Initially we can only log in to an AWS EC2 instance using a PEM key.

In this article we will see what we need to do to make SSH login possible for a normal user and for the root user on an AWS EC2 instance.

After you have created your instance and managed to log in using the PEM key, follow the steps below to enable password SSH for root and other users.

 

For this we need to edit the /etc/ssh/sshd_config file.

For root login enabling

  • you will need to uncomment the PermitRootLogin yes line.
  • comment the line PermitRootLogin forced-commands-only.

For password login

  • uncomment line PasswordAuthentication yes.
  • comment line PasswordAuthentication no.
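After both changes, the relevant lines in /etc/ssh/sshd_config should end up looking like this:

    PermitRootLogin yes
    #PermitRootLogin forced-commands-only
    PasswordAuthentication yes
    #PasswordAuthentication no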

 

 Next we need to restart the sshd service so the new configuration takes effect.
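On a CentOS 6-style box this would be:

    service sshd restart

(On systemd-based distributions, systemctl restart sshd does the same.)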

 

Now your AWS EC2 instance will be accessible by user/password.

Note: 

  • I don't recommend logging in using the root user; this can be dangerous and does not follow best practices.
  • user access using a password is also quite risky.

 

In this article we will see how we can install the AWS client using two simple shell scripts.
The scripts are:

conf_vars.sh

  • will hold your AWS account credentials and your region data.

installAWSClient.sh

  • will download, install and configure your AWS client.

Let's see the content of the scripts:

conf_vars.sh

  • edit the file and add your credentials and region.
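A hypothetical reconstruction of conf_vars.sh (the variable names are assumptions):

    #!/bin/bash
    # conf_vars.sh - AWS credentials and region used by the installer script
    export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY"
    export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_KEY"
    export AWS_DEFAULT_REGION="us-east-1"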

 

installAWSClient.sh
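A hedged sketch of installAWSClient.sh, assuming the official bundled installer that was current at the time:

    #!/bin/bash
    # installAWSClient.sh - download, install, and configure the AWS client
    source ./conf_vars.sh

    curl -O https://s3.amazonaws.com/aws-cli/awscli-bundle.zip
    unzip awscli-bundle.zip
    sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws

    # write the credentials and region into the AWS client configuration
    aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID"
    aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY"
    aws configure set default.region "$AWS_DEFAULT_REGION"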

Create these two shell scripts on your Linux box and make them executable (chmod +x conf_vars.sh installAWSClient.sh).

 

Let's demo the script execution.

  • it is quite straightforward.

Test your AWS client installation:
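For example, either of these should respond:

    aws --version
    # or
    aws ec2 describe-regions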

 

Great, your AWS client is ready to be used.

 

 

Wanna see the Video version as well ? See Here! 

In this article I will go through the steps required to mount an AWS S3 bucket on a Linux EC2 instance.

I assume whoever follows this article already has an AWS account set up and knows what an EC2 instance and an S3 bucket are.

A – For this you will need to install the fuse and s3fs packages.

Follow the steps below to do this (this was done using a CentOS 6.x Linux box).
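A sketch of the installation; the assumption here is that s3fs-fuse is built from source, which was the usual route on CentOS 6:

    yum install -y gcc-c++ fuse fuse-devel libcurl-devel libxml2-devel \
                   openssl-devel make automake git
    git clone https://github.com/s3fs-fuse/s3fs-fuse.git
    cd s3fs-fuse
    ./autogen.sh
    ./configure
    make && make install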

 B – After you have installed s3fs & fuse you will need to set up the AWS account access.

  • run the following commands as the root user.

 C – Next you will need to create a user that will manage your S3 storage.

Follow these steps to see how to create a new user with S3 storage administration privileges.

1 – Go to the Identity and Access Management service (IAM).


2 – Select the users tab.

 


3 – Choose to create a new user. 


4 – Give the user a name; I will call it "s3_access", but you can call it whatever you want.


5 – When the user is created, AWS will prompt you with the user's credentials.

Very important: save them on your computer or "memorize" them.


 

They look something like a table with column headers and their respective values. We will only need the Access Key ID & Secret Access Key.


 

 

6 – Set up user policy rights.

  • the user was created but is "naked", meaning it has no rights.
  • you need to go to the "Policies" tab and look for the AmazonS3FullAccess policy.


 

  • Once selected, you need to attach/"grant" the policy to the user (in our case the s3_access user).

 


 

  • Select the s3_access user from the list of users and click the "Attach Policy" button. Now our s3_access user has full control over our S3 storage resources.

 


 D – Next you need to alter the credentials file and put in your AWS S3 credentials.

  • just copy and paste the command below after you replace the Access Key and the Secret Access Key with your own values from the s3_access user.
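s3fs conventionally reads its credentials from /etc/passwd-s3fs in ACCESS_KEY:SECRET_KEY format; a sketch (the keys are placeholders):

    echo "AKIAXXXXXXXXXXXXXXXX:YourSecretAccessKey" > /etc/passwd-s3fs
    chmod 640 /etc/passwd-s3fs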

  

E – Create mount point

 F – Mount the s3 bucket device on your mount point
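A sketch of both steps (the mount point and bucket name are placeholders):

    mkdir /mnt/s3bucket                    # E - create the mount point
    s3fs your-bucket-name /mnt/s3bucket \
        -o passwd_file=/etc/passwd-s3fs    # F - mount the bucket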

Now you can use the S3 bucket as your own file system.

Note: it is important that if you want this to always be mounted when your machine reboots, you put an entry for it in your /etc/fstab file.
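A typical legacy-style s3fs fstab entry looks something like this (bucket and mount point are placeholders):

    s3fs#your-bucket-name /mnt/s3bucket fuse _netdev,allow_other 0 0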

 

To test the edited /etc/fstab file, unmount the s3fs device and run mount -a; this command should mount your device the same way the boot process does.

  • nice, the device was mounted, and this way we validated that it will come back up at boot.