AWS S3 Bucket

AWS CLI

LIST
# list all the available s3 buckets
aws s3 ls
# list the contents of a specific bucket
aws s3 ls s3://bucket-name/

# list all the sub-folders and files
aws s3 ls s3://bucket-name/ --recursive
(e.g., aws s3 ls s3://prashanth-sams --recursive)

# list a bucket's total object count and size
aws s3 ls s3://bucket-name/ --recursive --summarize
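The recursive listing prints one line per object in the form `date time size key`. If you only have the plain listing, the bucket total can be tallied from the third column; a minimal local sketch using sample output (no live bucket assumed):

```shell
# Sample lines in the format printed by `aws s3 ls s3://bucket --recursive`:
#   date       time        size  key
listing='2021-04-01 10:00:00       1024 reports/index.html
2021-04-01 10:05:00       2048 reports/data.json'

# Sum the third column (object size in bytes) for a bucket total
total_bytes=$(printf '%s\n' "$listing" | awk '{sum += $3} END {print sum}')
echo "total: $total_bytes bytes"
```

The `--summarize` flag does the same tally server-side, printing Total Objects and Total Size lines after the listing.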

CREATE
# create new bucket; here mb is 'make bucket'
aws s3 mb s3://bucket-name/
(e.g., aws s3 mb s3://prashanth-sams)

# create new bucket with specific region
aws s3 mb s3://bucket-name/ --region us-east-1

COPY | MOVE
# copy a file inside bucket
aws s3 cp source-file s3://bucket-name/
(e.g., aws s3 cp /file.html s3://prashanth-sams)

# move a file inside bucket
aws s3 mv source-file s3://bucket-name/
(e.g., aws s3 mv /file.html s3://prashanth-sams)

DELETE
# delete all the data inside a bucket
aws s3 rm s3://bucket-name/ --recursive

# delete all files and folders excluding a specific file pattern
aws s3 rm s3://bucket-name/ --recursive --exclude "*.html"

# delete all files and folders excluding a specific folder
aws s3 rm s3://bucket-name/ --recursive --exclude "folder/*"

# delete a bucket which is empty; here, rb is 'remove bucket'
aws s3 rb s3://bucket-name

# delete a bucket which is not empty
aws s3 rb s3://bucket-name --force 

SYNC
# upload or sync your local data to remote s3 bucket
aws s3 sync . s3://bucket-name

# upload data to the remote s3 bucket, excluding files/folders that match a specific pattern
aws s3 sync . s3://bucket-name --exclude "*.tgz"
aws s3 sync . s3://bucket-name --exclude "folder/*"

# download or sync your remote s3 bucket data to local
aws s3 sync s3://bucket-name .

# download data to local, excluding files/folders that match a specific pattern
aws s3 sync s3://bucket-name . --exclude "*.tgz"
aws s3 sync s3://bucket-name . --exclude "folder/*"

# copy or sync your remote s3 bucket data to another s3 bucket
aws s3 sync s3://bucket-name1 s3://bucket-name2

# sync to remote s3 and delete remote files/folders that no longer exist locally
aws s3 sync . s3://bucket-name --delete
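The `--exclude` patterns above use shell-style wildcards matched against each file's path relative to the source directory. A minimal local sketch of that matching rule (`matches_exclude` is a hypothetical helper, for illustration only):

```shell
# Shell-style wildcard matching, as used by --exclude, via a case pattern
matches_exclude() {
  local path=$1 pattern=$2
  case "$path" in
    $pattern) return 0 ;;   # matches the exclude pattern -> skipped
    *)        return 1 ;;   # no match -> would be synced
  esac
}

matches_exclude "backup.tgz" "*.tgz"         && echo "backup.tgz skipped"
matches_exclude "folder/file.txt" "folder/*" && echo "folder/file.txt skipped"
matches_exclude "index.html" "*.tgz"         || echo "index.html synced"
```

Filters can also be combined: a broad `--exclude` followed by a narrower `--include` re-admits matching paths, with later filters taking precedence.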

DYNAMIC URL
# generate a pre-signed URL that grants temporary access to a private object
[default expiry time of the link is 3600 secs]
aws s3 presign s3://bucket-name/file.html

# generate a pre-signed URL valid for a specific time period (in seconds)
aws s3 presign s3://bucket-name/file.html --expires-in 30
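A pre-signed URL carries an expiry timestamp in its query string, so the link stops working `--expires-in` seconds after it was generated. A sketch of the arithmetic:

```shell
# The expiry baked into the URL is an absolute timestamp derived like this:
now=$(date +%s)                    # current time, epoch seconds
expires_in=30                      # same value passed to --expires-in above
expires_at=$(( now + expires_in ))
echo "link valid until epoch second $expires_at"
```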

STATIC WEBSITE
# enable static website hosting with an index document
aws s3 website s3://bucket-name --index-document index.html

# static website hosting with both index and error documents
aws s3 website s3://bucket-name --index-document index.html --error-document error.html

OUTPUT
http://bucket-name.s3-website-us-east-1.amazonaws.com/
(e.g., http://prashanth-sams.s3-website-us-east-1.amazonaws.com/)
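The website endpoint serves objects anonymously, so the bucket also needs a public-read bucket policy (applied with `aws s3api put-bucket-policy`); a minimal sketch, assuming the bucket is named bucket-name:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForWebsite",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-name/*"
    }
  ]
}
```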

AWS Console

CREATE BUCKET

  • Go to S3 in aws console
  • Click on Create bucket

  • Enter bucket name, Region, and click on the create button

  • Select Bucket name and click on Edit public access settings

  • Untick Block all public access and click on the save button

  • Now, click on the bucket-name and upload files
  • Select the file and make it public

  • Now, click on the file and open the link

AWS Key-Pair

Automatic (AWS web interface)

Configure AWS key-pair from AWS web interface

  • Go to AWS console and click on Key Pairs link
  • Click on the Create key pair button

  • Enter Key pair name and choose pem file format and click on the Create key pair button
  • Now, a pem (private key) file, generated on the AWS side, will be downloaded to your local machine

  • Generate the public key out of the generated private key
sudo ssh-keygen -y -f ~/Downloads/prashanth.pem > prashanth.pub

 

Manual (AWS CLI)

Configure AWS key-pair from local machine (terminal)

  • Generate AWS key-pair
# generate key-pair
aws ec2 create-key-pair --key-name prashanth

# generate key-pair by exporting the values in a .pem file
aws ec2 create-key-pair --key-name prashanth --output text > prashanth.pem

# generate key-pair, filtering the response with a JMESPath --query expression
# ('filter_name' below is a placeholder for the query)
aws ec2 create-key-pair --key-name prashanth --query 'filter_name'


  • Restrict permission access to read only
chmod 400 prashanth.pem
  • Verify the recently generated key-pair
aws ec2 describe-key-pairs --key-name prashanth
  • Delete a key-pair
aws ec2 delete-key-pair --key-name prashanth
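SSH refuses a private key that other users can read, which is why the `chmod 400` step matters. A quick local check of the mode (using a stand-in file, not a real key):

```shell
# Stand-in file (not a real key) to demonstrate the permission check
touch example.pem
chmod 400 example.pem            # owner read-only, as required for ssh keys

# Print the octal mode to confirm (Linux stat first, then BSD/mac fallback)
mode=$(stat -c '%a' example.pem 2>/dev/null || stat -f '%Lp' example.pem)
echo "mode: $mode"
rm example.pem
```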

Create an EC2 instance | Terraform

This post helps you create a basic (free tier) EC2 instance through Terraform, using its AWS provider and HCL configuration language.
  • Download Terraform CLI
https://www.terraform.io/downloads.html
  • Extract the archive and move the terraform binary into the bin folder
mv ~/Downloads/terraform /usr/local/bin/

terraform version
  • Create a new project and a file with extension .tf
  • Copy and paste the below script
provider "aws" {
  version                 = "~> 2.0"
  region                  = "us-west-2"
  shared_credentials_file = "~/.aws/credentials"
  profile                 = "prashanth"
}

data "aws_ami" "amazon-linux-2" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm*"]
  }

  filter {
    name   = "architecture"
    values = ["x86_64"]
  }
}

resource "aws_instance" "test" {
  ami                         = "${data.aws_ami.amazon-linux-2.id}"
  associate_public_ip_address = true
  instance_type               = "t2.micro"
}
  • Now, initialize terraform
terraform init
  • Review the execution plan and check the configuration for issues
terraform plan
  • Finally apply terraform
terraform apply -auto-approve
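To see where the new instance ended up, an output block can be added to the same .tf file (a sketch; `aws_instance.test` matches the resource name in the script above):

```hcl
output "public_ip" {
  value = "${aws_instance.test.public_ip}"
}
```

After apply, `terraform output public_ip` prints the address.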

Export Jenkins artifacts report from one machine to another

In this post, we will see how to transfer the Jenkins artifacts from Machine A to Machine B under the same wireless network connection (router)

  • Install Publish Over SSH Jenkins plugin. Go to Manage Jenkins > Manage Plugins > Available

  • Go to Manage Jenkins > Configure System
  • Copy & paste the SSH private key generated on Machine B (created in the steps below), where the artifacts have to be archived

  • Create ssh key on Machine B (if there are no keys available)
ssh-keygen -t rsa
  • Append the id_rsa.pub public key to the authorized_keys file, so it pairs with the private key given to Machine A
cd ~/.ssh/
cat id_rsa.pub >> authorized_keys
  • The private key is found in the id_rsa file
cat id_rsa
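sshd also ignores an authorized_keys file that is group- or world-accessible, so it is worth tightening permissions after appending the key. A local sketch using a stand-in directory (demo_ssh, not your real ~/.ssh):

```shell
# Stand-in directory so nothing touches the real ~/.ssh
mkdir -p demo_ssh
touch demo_ssh/authorized_keys

chmod 700 demo_ssh                   # only the owner may enter the dir
chmod 600 demo_ssh/authorized_keys   # only the owner may read/write the file

# Linux stat first, then BSD/mac fallback
perm=$(stat -c '%a' demo_ssh/authorized_keys 2>/dev/null || stat -f '%Lp' demo_ssh/authorized_keys)
echo "authorized_keys mode: $perm"
rm -r demo_ssh
```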
  • Back to Machine A: Provide a custom name, Machine B’s hostname, username, and remote directory to store artifacts

  • To get Machine B’s IP address, open a terminal and type

Linux

hostname -I

Mac

System Preferences... > Network

Windows

ipconfig

or (Mac/Linux)

ifconfig

and read the machine's local (router-assigned) IP address

  • Now, go to Jenkins job > Configure and select Send build artifacts over SSH from Add post-build action

  • Provide the custom name created earlier, source folder containing artifacts, and the destination folder under the previously mentioned remote directory

  • Save, build the job, and check for the artifacts in the remote destination

 

Start nginx server in Machine B

  • Now, download nginx server
#Linux
sudo apt-get install nginx

#Mac
brew install nginx
  • Start Nginx
#Linux
service nginx start

#Mac
sudo nginx
  • Configure the path locating artifacts
#Linux
sudo vi /etc/nginx/nginx.conf
(if above doesn't work, try /etc/nginx/sites-enabled)

#Mac
sudo vi /usr/local/etc/nginx/nginx.conf
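Inside that config, a minimal server block is enough to expose the artifacts directory; a sketch assuming the remote directory is /home/pi and port 8080 (as used below):

```nginx
server {
    listen 8080;
    root /home/pi;    # remote directory that receives the Jenkins artifacts
    autoindex on;     # lets you browse folders such as /allure-report/
}
```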

  • Restart Nginx
#Linux
service nginx restart

#Mac
sudo nginx -s stop
sudo nginx
  • Now, open the folder under the root location configured above (say, /home/pi) via Machine B’s IP address
http://172.16.0.86:8080/allure-report/

http://172.16.0.86/allure-report/
(if nginx listens on port 80 instead of 8080)

http://localhost:8080/allure-report/
(if you're on Machine B itself)

 

Configure Selenium Jenkins job with Ruby Env setup from Jenkins User

Note: (Follow this post before proceeding)

  • Start with creating a new Jenkins job by clicking Jenkins > New Item
  • Enter the job name, select Freestyle project, and click ok

  • Choose Source Code Management > Git and enter the GitHub repo URL. Enter the branch name if it differs from the default.

  • Now, select Build > Add build step > Execute shell

  • And follow these steps in the shell to run the selenium cucumber tests
  1. Set up the temporary RVM environment
source /var/lib/jenkins/.rvm/bin/rvm

    or

    source ~/.rvm/bin/rvm
  2. Create an rvm gemset and switch to it
rvm gemset create test
rvm gemset use test
  3. Install the ruby libraries
gem install bundler
bundle install
  4. Run the Selenium cucumber tests
cucumber features/scenarios/**/*.feature
  5. Finally, apply the changes

Skip the manual temporary RVM environment setup [Optional]

  • Go to Manage Jenkins > Manage Plugins, click on the Available tab
  • Enter rvm in the filter

  • Select checkbox, download & restart Jenkins to take effect
  • Now, open the job created earlier and click Configure

  • You’ll see a newly added option, Run the build in a RVM-managed environment, under the Build Environment section

  • Add the installed ruby version in it (say, ruby-2.4.1) and remove source ~/.rvm/bin/rvm from the Execute shell

 

Install RVM and Ruby on Amazon Linux as Jenkins/Root User

Install RVM and Ruby as a Jenkins User

  • Install Jenkins (Follow this link)
  • Become a root user
sudo su
  • Install prerequisites for RVM and Ruby
yum install -y gcc openssl-devel libyaml-devel libffi-devel readline-devel zlib-devel gdbm-devel ncurses-devel ruby-devel gcc-c++ jq git patch autoconf automake bison libtool patch sqlite-devel
  • Set password for Jenkins as a root user (if needed)
sudo passwd jenkins

  • Switch as Jenkins User

  • Import GPG key before RVM install
curl -sSL https://rvm.io/mpapis.asc | gpg2 --import -

[run the cmd if the above didn't work]
gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB
  • Install RVM
curl -sSL https://get.rvm.io | bash -s stable --ruby
  • Export temporary setup
source /var/lib/jenkins/.rvm/scripts/rvm
  • Verify RVM and Ruby installations
rvm -v && which rvm
ruby -v && which ruby

Install RVM and Ruby as a Root User [optional]

  • Install Jenkins (Follow this link)
  • Become a root user
sudo su
  • Install prerequisites for RVM and Ruby
yum install -y gcc openssl-devel libyaml-devel libffi-devel readline-devel zlib-devel gdbm-devel ncurses-devel ruby-devel gcc-c++ jq git
  • Import GPG key before RVM install
curl -sSL https://rvm.io/mpapis.asc | gpg2 --import -
  • Install RVM
curl -sSL https://get.rvm.io | bash -s stable --ruby
  • Now, run the temporary RVM environment for effect
source /usr/local/rvm/scripts/rvm
  • Verify RVM and Ruby installations
rvm -v && which rvm
ruby -v && which ruby

Jenkins installation on Amazon Linux AMI (AWS EC2)

Prerequisites

  • Become a root user
sudo su
  • Update yum (since Amazon Linux is based on RedHat Linux)
yum update -y
  • Install Java 8
yum install java-1.8.0
  • Remove Java 7
yum remove java-1.7.0-openjdk

Install Jenkins

  • Download Jenkins from the RedHat repo
wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo
  • Import the verification key using the package manager RPM
rpm --import https://pkg.jenkins.io/redhat/jenkins.io.key
  • Install Jenkins
yum install jenkins --nogpgcheck -y
  • Start Jenkins
service jenkins start
  • Open the Jenkins URL with server IP and Jenkins default port
http://xx.xxx.xxx.xxx:8080/
  • The default Jenkins password can be copied from the below-mentioned file
cat /var/lib/jenkins/secrets/initialAdminPassword

Edit Jenkins default port (optional)

  • Edit Jenkins config file (vi commands: “i” for insert mode, “ESC” key to leave insert mode, “:wq” to write and quit)
vi /etc/sysconfig/jenkins
  • Update port
JENKINS_PORT="8081"
  • Check Jenkins installation
fuser -v -n tcp 8080
netstat -na | grep 8080
  • Auto start Jenkins service
sudo chkconfig --list jenkins
sudo chkconfig jenkins on

Install Google-chrome and Chromedriver in Amazon Linux machine

Install Chromedriver

  • Go to the temp folder
cd /tmp/
  • Download the latest Linux-based chromedriver
wget https://chromedriver.storage.googleapis.com/2.37/chromedriver_linux64.zip
  • Extract chromedriver
unzip chromedriver_linux64.zip
  • Move chromedriver inside the applications folder
sudo mv chromedriver /usr/bin/chromedriver
  • Confirm chromedriver version
chromedriver --version

 

Install Google-chrome

  • Run the below command to install the latest google-chrome browser; it helps you avoid a manual GTK3 installation
curl https://intoli.com/install-google-chrome.sh | bash
  • Rename google-chrome-stable to google-chrome, so that automation tests can find the chrome browser before test execution
mv /usr/bin/google-chrome-stable /usr/bin/google-chrome
  • Verify google-chrome installation
 google-chrome --version && which google-chrome
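Chromedriver's major version has to match the installed chrome's major version. A sketch of extracting the major version from the `--version` string (a sample string is used here, in case chrome isn't installed on this machine):

```shell
# Example output of `google-chrome --version`
version_string="Google Chrome 89.0.4389.90"

# Third field is the full version; its first dot-component is the major
major=$(printf '%s\n' "$version_string" | awk '{print $3}' | cut -d. -f1)
echo "chrome major version: $major"
```

The same number then picks the matching chromedriver archive to download.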

 

Manual Google-chrome installation [Optional]

  • Check for Google chrome version if already installed
google-chrome --version
  • Get the Application library location
which google-chrome
  • Uninstall the older version if not needed
sudo yum -y erase google-chrome
  • To upgrade to the newest chrome version, try
sudo yum update google-chrome-stable
  • If you are using Amazon Linux, download the rpm file below and install it with yum
wget https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm