Decommission USB Blocking on Linux

If you previously blocked USB storage ports on your Linux machine to secure it against unauthorized access, you may need to reverse these changes. This guide provides a step-by-step approach to remove the USB block rule, allowing USB storage devices to function normally again.

1. Locate the USB Block Rule

If you’ve followed the previous guide or a similar process to block USB storage, you should have created a rule file in the /etc/udev/rules.d/ directory. The filename for this rule is likely 99-usbblock.rules.

2. Remove the USB Block Rule File

Use the following command to delete the file, thereby removing the block on USB storage devices:

sudo rm /etc/udev/rules.d/99-usbblock.rules

This step deletes the file that contains the blocking rule. Without this file, udev will no longer apply the restriction.

3. Reload udev Rules

After removing the rule file, you need to reload udev rules so that the system updates and stops enforcing the deleted rule:

sudo udevadm control --reload-rules

4. Trigger udev to Apply Changes

Finally, trigger udev to apply the updated rules immediately:

sudo udevadm trigger

With these steps, your Linux machine will no longer block USB storage devices, allowing them to be recognized and used as normal.

5. Verifying USB Functionality

To verify that the USB storage ports are no longer blocked:

Plug in a USB storage device (e.g., a flash drive). Use a command like lsblk or fdisk -l to check if the device is detected:

lsblk

If you see the USB storage device listed, then decommissioning was successful, and the system is now allowing USB storage access.
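The detection step can also be scripted; a minimal sketch, assuming `lsblk` supports `-S` (it lists SCSI disks with a TRAN column showing the transport, which reads "usb" for USB disks):

```shell
# Check whether any USB disk is visible to the kernel.
# lsblk -S lists SCSI disks; the TRAN column shows the transport (usb, sata, ...).
if lsblk -S -o NAME,TRAN 2>/dev/null | grep -q 'usb'; then
    msg="USB storage detected"
else
    msg="no USB storage found"
fi
echo "$msg"
```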

Block USB storage ports in Linux

Blocking USB storage on Linux is a straightforward process using udev rules. Follow these steps to configure and enforce a rule to disable USB storage access:

1. Create a USB Block Rule

The following command creates a rule in the udev directory that disables any USB storage device from being authorized for use:

echo 'ACTION=="add", SUBSYSTEMS=="usb", DRIVERS=="usb-storage", ATTR{authorized}="0"' | sudo tee /etc/udev/rules.d/99-usbblock.rules

This rule sets ATTR{authorized}="0" for any device handled by the usb-storage driver, effectively blocking it. Matching on SUBSYSTEM=="usb" alone would de-authorize every USB device, including keyboards and mice, so the rule targets the storage driver specifically.

2. Reload udev Rules

After adding the rule, you need to reload the udev rules for it to take effect:

sudo udevadm control --reload-rules

3. Trigger udev to Apply the Rule

Finally, use the following command to trigger udev and apply the rule immediately:

sudo udevadm trigger

With this configuration, any USB storage device plugged into your Linux machine will be blocked.
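The block and unblock steps from both guides can be wrapped into one small script. This is a sketch, not a finished tool: RULES_DIR is a made-up override so the functions can be exercised without root, and the rule text targets the usb-storage driver as discussed above.

```shell
#!/bin/sh
# RULES_DIR is overridable for testing; defaults to the real udev path.
RULES_DIR="${RULES_DIR:-/etc/udev/rules.d}"

reload_udev() {
    # Reload only when udevadm is available (skipped e.g. in containers);
    # errors (e.g. missing privileges) are tolerated in this sketch
    if command -v udevadm >/dev/null 2>&1; then
        udevadm control --reload-rules 2>/dev/null || true
        udevadm trigger 2>/dev/null || true
    fi
}

block_usb() {
    # De-authorize devices bound to the usb-storage driver only,
    # so keyboards and mice keep working
    printf '%s\n' 'ACTION=="add", SUBSYSTEMS=="usb", DRIVERS=="usb-storage", ATTR{authorized}="0"' \
        > "$RULES_DIR/99-usbblock.rules"
    reload_udev
}

unblock_usb() {
    rm -f "$RULES_DIR/99-usbblock.rules"
    reload_udev
}
```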

AWS S3 Bucket

AWS CLI

LIST
# list all the available s3 buckets
aws s3 ls
# list the contents of a specific bucket
aws s3 ls s3://bucket-name/

# list all the sub-folders and files
aws s3 ls s3://bucket-name/ --recursive
(i.e., aws s3 ls s3://prashanth-sams --recursive)

# summarize the total number of objects and overall size of a bucket
aws s3 ls s3://bucket-name/ --recursive --summarize

CREATE
# create new bucket; here mb is 'make bucket'
aws s3 mb s3://bucket-name/
(i.e., aws s3 mb s3://prashanth-sams)

# create new bucket with specific region
aws s3 mb s3://bucket-name/ --region us-east-1

COPY | MOVE
# copy a file inside bucket
aws s3 cp source-file s3://bucket-name/
(i.e., aws s3 cp /file.html s3://prashanth-sams)

# move a file inside bucket
aws s3 mv source-file s3://bucket-name/
(i.e., aws s3 mv /file.html s3://prashanth-sams)

DELETE
# delete all the data inside a bucket
aws s3 rm s3://bucket-name/ --recursive

# delete all files and folders excluding a specific file pattern
aws s3 rm s3://bucket-name/ --recursive --exclude "*.html"

# delete all files and folders excluding a specific folder
aws s3 rm s3://bucket-name/ --recursive --exclude "folder/*"

# delete a bucket which is empty; here, rb is 'remove bucket'
aws s3 rb s3://bucket-name

# delete a bucket which is not empty
aws s3 rb s3://bucket-name --force 

SYNC
# upload or sync your local data to remote s3 bucket
aws s3 sync . s3://bucket-name

# upload data by excluding files/folders with specific pattern to remote s3 bucket
aws s3 sync . s3://bucket-name --exclude "*.tgz"
aws s3 sync . s3://bucket-name --exclude "folder/*"

# download or sync your remote s3 bucket data to local
aws s3 sync s3://bucket-name .

# download data by excluding files/folders with specific pattern to local
aws s3 sync s3://bucket-name . --exclude "*.tgz"
aws s3 sync s3://bucket-name . --exclude "folder/*"

# copy or sync your remote s3 bucket data to another s3 bucket
aws s3 sync s3://bucket-name1 s3://bucket-name2

# sync to remote s3 and delete remote files/folders that no longer exist locally
aws s3 sync . s3://bucket-name --delete
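When several exclude patterns are needed, the sync command can be composed by a small helper. A sketch with a hypothetical `s3_sync` function and a made-up `DRY_RUN` switch that prints the command instead of running it:

```shell
# Compose `aws s3 sync src dest --exclude "p1" --exclude "p2" ...`
# DRY_RUN=1 prints the command instead of executing it.
s3_sync() {
    src="$1"; dest="$2"; shift 2
    cmd="aws s3 sync $src $dest"
    for pattern in "$@"; do
        cmd="$cmd --exclude \"$pattern\""
    done
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$cmd"
    else
        eval "$cmd"
    fi
}

DRY_RUN=1 s3_sync . s3://bucket-name "*.tgz" "folder/*"
# aws s3 sync . s3://bucket-name --exclude "*.tgz" --exclude "folder/*"
```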

DYNAMIC URL
# generate a pre-signed URL granting temporary access to an object
[by default the link expires after 3600 seconds]
aws s3 presign s3://bucket-name/file.html

# generate a pre-signed URL with a custom expiry (in seconds)
aws s3 presign s3://bucket-name/file.html --expires-in 30

STATIC WEBSITE
# static url for a html file
aws s3 website s3://bucket-name --index-document index.html

# static website with both an index document and an error document
aws s3 website s3://bucket-name --index-document index.html --error-document error.html

OUTPUT
http://bucket-name.s3-website-us-east-1.amazonaws.com/
(i.e., http://prashanth-sams.s3-website-us-east-1.amazonaws.com/)

AWS Console

CREATE BUCKET

  • Go to S3 in aws console
  • Click on Create bucket

  • Enter bucket name, Region, and click on the create button

  • Select Bucket name and click on Edit public access settings

  • Untick Block all public access and click on the save button

  • Now, click on the bucket-name and upload files
  • Select the file and make it public

  • Now, click on the file and open the link

Helm CLI cheatsheet

List of Helm CLI commands and their purpose:

SETUP
# initialize helm
helm init

# update helm
[MAC]
brew upgrade kubernetes-helm
helm init --upgrade
[LINUX]
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm version

# initialize helm [when there is an issue with helm tiller versions]
helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -


CHART INSTALLATION & MANIPULATION
# create a chart template (scaffold)
helm create <chart-name>
(e.g., helm create sitespeedio)

# download remote chart to local
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm fetch <repo-name>/<chart-name>
(e.g., helm fetch stable/grafana)

# refresh the local cache of all configured chart repos
helm repo update

# list the helm repo details which you have configured
helm repo list



# search all charts available in the configured repos
helm search

# search for helm chart from remote repo
helm search <chart-name>
(e.g., helm search prometheus)

# deploy release
helm install --name <release-name> <chart-path>
(e.g., 
helm install --name sitespeedio ./sitespeedio/
helm install --name sitespeedio . --namespace=sitespeedio
)
[without name (deploy release with random name)]
helm install <chart-path> --namespace <namespace-name>

# override helm values
helm install --name <release-name> --values config.yaml --timeout 300 --wait stable/mysql

# set environment variable on creating release
helm install --set x=somevalue -f config.yaml <chart-name> --name <release-name>

# helm chart syntax checker
helm lint
helm lint <path-name>

# upgrade the chart or variables in a release
helm upgrade --values config.yaml <release-name> <chart-name>
(e.g., helm upgrade --values config.yaml foo stable/mysql)

# inspect the chart details
helm inspect <chart-name>

# inspect the values assigned in the chart
helm inspect values <chart-name>

# create package as a .tgz file [if you have chartmuseum] 
helm package <chart-path> 
helm package . 

# install chart dependencies 
helm dep up <chart-name> 
helm dependency update  


MANIPULATE RELEASE 
# release status 
helm status <release-name> 
(e.g., helm status zooming-fish) 

# check release history 
helm history <release-name> 
(e.g., helm history zalenium)  

# rollback to the previous release number 
helm rollback <release-name> <version> 
(e.g., helm rollback zalenium 1) 

# return environment variables set on runtime for a release  
helm get values <release-name>  

# prints out all of the Kubernetes resources that were uploaded to the server 
helm get manifest <release-name>  


LIST RELEASE 
# list all releases with details 
helm ls --all 
helm list 
helm list --namespace <namespace-name> 

# list release name(s) 
helm ls --short 
helm ls --all --namespace <namespace-name> --short 

# list release names with deployed status 
helm ls --all --namespace=sitespeedio | grep DEPLOYED | awk '{print$1}'

# list all the deleted releases 
helm ls --deleted 


DELETE | UNINSTALL 
# uninstall tiller from your kubernetes cluster 
helm reset --force 

# delete release 
helm delete <release-name> 
(e.g., helm delete zooming-fish) 

# filter releases by name/status and delete them 
helm del $(helm ls | grep 'aus' | awk '{print $1}') 
helm del $(helm ls | grep 'FAILED' | awk '{print $1}') 

# delete all releases 
helm del $(helm ls --all --short) --purge 

# force delete helm release 
helm delete --purge --no-hooks <release-name> 

# completely remove release [even from DELETE status] 
helm del <release-name> --purge 
helm del $(helm ls --all | grep 'DELETED' | awk '{print $1}') --purge
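The grep/awk combinations above are plain text processing on `helm ls` output. A sketch against canned output (release names invented) so the extraction can be seen without a cluster:

```shell
# Canned `helm ls` output; real output has the same column layout.
helm_ls='NAME        REVISION  STATUS    CHART
zalenium    1         DEPLOYED  zalenium-3.52.0
aus-api     2         FAILED    aus-api-0.1.0'

# Same pipeline as `helm del $(helm ls | grep 'FAILED' | awk '{print $1}')`
failed=$(printf '%s\n' "$helm_ls" | grep 'FAILED' | awk '{print $1}')
echo "$failed"   # aus-api
```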

Kubernetes CLI cheatsheet

Kubernetes Cluster

  • A cluster is a group or pool of nodes that together act as one powerful machine
  • When you deploy programs to a cluster, it intelligently distributes the work across the available nodes
# cluster details
kubectl cluster-info



# troubleshoot cluster
kubectl cluster-info dump

Kubernetes Config

  • Modify cluster config and switch between contexts of a cluster
  • You can have multiple configs for a single cluster; each config is called a context
# view current kubectl config
kubectl config view

# switch context/config of a cluster
kubectl config use-context config-name
(i.e., kubectl config use-context minikube)

# list all the available context/config
kubectl config get-contexts

Kubernetes Node

  • A node can be a physical machine or virtual machine
LIST NODES
# list all nodes in a cluster
kubectl get nodes



# list all nodes in a cluster with details [Internal/External IP, etc.,] 
kubectl get nodes -o wide

# get Internal IP of a node in a cluster
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'
[get External IP of a node in a cluster]
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'

DELETE NODES
# remove a node from service [safely evicts its pods first]
kubectl drain node_name
kubectl drain node_name --ignore-daemonsets --force
(e.g., kubectl drain minikube --ignore-daemonsets --force)

# uncordon the drained node [make it schedulable again]
kubectl uncordon node_name
(e.g., kubectl uncordon minikube)

MANIPULATE NODES
# troubleshoot node
kubectl describe node node_name
(i.e., kubectl describe node minikube)

Kubernetes Namespace

  • Namespaces partition a single cluster into virtual environments (e.g., dev, staging, prod)
CREATE NAMESPACE
# create namespace
kubectl create namespace namespace_name
(i.e., kubectl create namespace zalenium)

LIST NAMESPACE
# list all the namespaces
kubectl get namespaces

# list all the namespaces with labels
kubectl get namespaces --show-labels

# list specific namespace with labels
kubectl get namespace namespace_name --show-labels
(i.e., kubectl get namespace zalenium --show-labels)



# count pods per namespace
kubectl get pods --all-namespaces -o jsonpath="{..namespace}" |tr -s '[[:space:]]' '\n' |sort |uniq -c
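The pod-count pipeline above is just word counting; here it runs against a canned namespace list (names invented) instead of live kubectl output:

```shell
# One word per pod, as jsonpath "{..namespace}" would emit
namespaces='default default kube-system zalenium zalenium zalenium'

# Split on whitespace, sort, and count duplicates
printf '%s\n' "$namespaces" | tr -s ' ' '\n' | sort | uniq -c
```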

DELETE NAMESPACE
# delete all namespaces
kubectl delete --all namespaces

# delete specific namespace
kubectl delete namespace namespace_name
(i.e.,
kubectl delete --all pods --namespace=zalenium
kubectl delete namespace zalenium
)

Kubernetes Service

  • A service groups a set of Pods running in a cluster, and a cluster can host many services; a service also acts as a load balancer and enables zero-downtime application deployments
LIST SERVICES
# list all the services in default namespace
kubectl get service
kubectl get services


# list all the services in another namespace
kubectl get services --namespace namespace_name
(i.e., kubectl get services --namespace zalenium)

# list all the services with details [selector]
kubectl get services --namespace namespace_name -o wide
(i.e., kubectl get services --namespace zalenium -o wide)

# list all the services with selector filter
kubectl get services --all-namespaces --selector=selector_name
(i.e., kubectl get services --all-namespaces --selector=app.kubernetes.io/name=zalenium -o wide)

# list all the services from entire namespace
kubectl get services --all-namespaces

DELETE SERVICES 
# delete kubernetes service
kubectl delete service service_name --namespace namespace_name
(i.e., kubectl delete service zalenium --namespace zalenium)

Kubernetes Deployment

  • When you create a deployment, it creates a pod with containers in it
CREATE DEPLOYMENTS 
# create deployment that deploy pods with dynamic image name 
kubectl create deployment deployment_name --image=image_name 
(i.e., kubectl create deployment nginx --image=nginx)

LIST DEPLOYMENTS 
# list all the deployments in the default namespace 
kubectl get deployment
kubectl get deployments

# list all the deployments in another namespace
kubectl get deployments --namespace namespace_name
(i.e., kubectl get deployments --namespace zalenium)

# list all the deployments with details [container name, image, selector]
kubectl get deployments --namespace namespace_name -o wide
(i.e., kubectl get deployments --namespace zalenium -o wide)

# list all the deployments with selector filter
kubectl get deployments --all-namespaces --selector=selector_name
(i.e., kubectl get deployments --all-namespaces --selector=app.kubernetes.io/name=zalenium -o wide)

# list all the deployments from entire namespace
kubectl get deployments --all-namespaces

DELETE DEPLOYMENTS
# delete deployment
kubectl delete deployment deployment_name --namespace namespace_name
(i.e., kubectl delete deployment zalenium --namespace zalenium)

MANIPULATE DEPLOYMENTS
# expose deployment as a NodePort service [this will create a service from the deployment/pod]
kubectl expose deployment deployment_name --type=NodePort

SCALE DEPLOYMENTS
[make sure the deployment is in running state whenever you scale them]
# scale the existing deployment
kubectl scale deployment --all --replicas=4

# scale a specific deployment
kubectl scale deployment/deployment_name --replicas=4
(i.e., kubectl scale deployment/nginx-deployment --replicas=4)

# scale multiple deployments at once
kubectl scale deployment/deployment_name1 deployment/deployment_name2 --replicas=4

# scale a specific deployment locating the yaml file
kubectl scale --replicas=4 -f yaml-path
(i.e., kubectl scale --replicas=4 -f deployment.yaml)

# conditional scale: apply only if the current replica count is 3
kubectl scale --current-replicas=3 --replicas=4 -f yaml-path

Kubernetes Pods

  • Pod is a group of containers
  • Each Pod has a unique IP
  • The containers inside a pod talk to each other and share volumes
LIST PODS
# list all the pods in the default namespace
kubectl get pods
kubectl get pods --namespace default

# list all the pods in another namespace
kubectl get pods --namespace namespace_name
(i.e., kubectl get pods --namespace zalenium)

# list all the pods with details [IP address, assigned node]
kubectl get pods --namespace namespace_name -o wide
(i.e., kubectl get pods --namespace zalenium -o wide)

# list all the pods with selector filter
kubectl get pods --all-namespaces --selector=selector_name
(i.e., kubectl get pods --all-namespaces --selector=app.kubernetes.io/name=zalenium -o wide)

# list all the pods with status filter [Running, Pending, etc.,]
kubectl get pods --all-namespaces --field-selector=status.phase=Running

# list all the pods from entire namespace
kubectl get pods --all-namespaces

# list pods without column headers
kubectl get pods -n namespace_name --no-headers

# list pod labels
kubectl get pods -n namespace_name --no-headers -o custom-columns=:metadata.labels.app

# list pod names
kubectl get pods -n namespace_name --no-headers -o custom-columns=:metadata.name
kubectl get pods -o=name -n namespace_name | sed "s/^.\{4\}//"
kubectl get pods -n namespace_name --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}'

# list pod's release names
kubectl get pods -n namespace_name --no-headers -o custom-columns=:metadata.labels.release
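The `sed "s/^.\{4\}//"` trick above works because `-o name` prefixes every pod with `pod/` (four characters). A sketch on canned names (invented) showing the transformation:

```shell
# `kubectl get pods -o name` prints "pod/<name>"; strip the 4-char prefix
printf 'pod/zalenium-40000-4zbtm\npod/zalenium-40000-9xkcd\n' | sed "s/^.\{4\}//"
# zalenium-40000-4zbtm
# zalenium-40000-9xkcd
```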

MANIPULATE PODS
# debug/enter into a pod
kubectl exec -it pod_name --namespace namespace_name -- bash
(i.e., kubectl exec -it zalenium-40000-4zbtm --namespace zalenium -- bash)

# copy data into a pod
kubectl cp local_path namespace_name/pod_name:pod_path
(i.e., kubectl cp ./ zalenium/zalenium-40000-4zbtm:/tmp)

# get logs of the pod
kubectl logs pod_name -n namespace_name
watch kubectl logs pod_name -n namespace_name

# troubleshoot pod
[list the pods and check for running status]
kubectl describe pod pod_name --namespace namespace_name
(i.e., kubectl describe pod zalenium-40000-4zbtm --namespace zalenium)

Kubernetes WatchDog

  • Lets you debug and take control over the entire kubernetes system
LIST & CHECK STATUS
# list all the details (services, deployments, pods, replicas) related to all namespaces
kubectl get all --all-namespaces

# list all the details (services, deployments, pods, replicas) specific to a namespace
kubectl get all -n namespace_name

# live status
watch kubectl get all -n namespace_name 

# list all the deleted pod names too
kubectl get event --all-namespaces -o custom-columns=NAME:.metadata.name | cut -d "." -f1

Generic CLI cmds

# check all supported api versions
kubectl api-versions

# get details about a specific api-version
kubectl get --raw /apis/<group>/<version>
(i.e., kubectl get --raw /apis/autoscaling/v2beta2)

# default cmd to create services/deployments/pods/volumes from yaml file
kubectl create -f yaml-path
(i.e., kubectl create -f .)

Chartmuseum – The Helm chart repository management

Let’s see how to configure Chartmuseum to manage the helm chart repository internally:

  • Run chartmuseum with Docker
docker run --rm -it \
-p 8080:8080 \
-v $(pwd)/charts:/charts \
-e DEBUG=true \
-e STORAGE=local \
-e STORAGE_LOCAL_ROOTDIR=/charts \
chartmuseum/chartmuseum:latest
  • Open Terminal and go to the custom chart’s folder that you own. Build a package file for the chart created
helm package .
  • You will see a .tgz file generated. Use the generated file as the binary payload to push the chart into the local chartmuseum server
curl -L --data-binary "@grafana-4.3.0.tgz" http://localhost:8080/api/charts
  • Go to the below link to see the updated repo
http://localhost:8080/api/charts
  • Set base repo URL from where you can download the helm charts
helm repo add name http://localhost:8080
  • Search for the existing charts in your repo
helm search grafana
  • Fetch and install the grafana helm chart
helm fetch name/grafana
helm install name/grafana
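The push step above can be looped over every packaged chart in a directory; a sketch assuming the local server started earlier is running (CM_URL is an illustrative variable, not a chartmuseum convention):

```shell
# Push every .tgz in the current directory to the chartmuseum API
CM_URL="${CM_URL:-http://localhost:8080}"
for tgz in *.tgz; do
    [ -e "$tgz" ] || continue     # glob matched nothing: skip
    curl -sL --data-binary "@$tgz" "$CM_URL/api/charts" || true
done
```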

Linux Environment Variables

ENV

Displays all the environment variables

$ env
$ env | grep HOME
$ printenv
$ printenv HOME

$ env WHO='Prashanth Sams' | grep WHO
WHO=Prashanth Sams

SET

Displays all the environment variables as well as shell variables (also called local variables)

$ set
$ set | less
  • Unset environment variables
$ export WHO='Prashanth Sams'
$ unset WHO
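The difference matters for child processes: exported variables are inherited, unset ones disappear. A small sketch:

```shell
# export makes WHO visible to child shells
export WHO='Prashanth Sams'
child=$(sh -c 'echo "$WHO"')
echo "$child"    # Prashanth Sams

# after unset, the child sees nothing
unset WHO
after=$(sh -c 'echo "${WHO:-<unset>}"')
echo "$after"    # <unset>
```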

Persistent Environment variables

Environment variables are loaded from the following files on starting each shell session

/etc/environment
/etc/profile
~/.bashrc

Default Environment variables

Different ways to find the machine username

$ whoami
prashanthsams

$ echo $USER
prashanthsams

$ bash -c 'echo $USER'
prashanthsams

$ echo $HOME
/Users/prashanthsams

Ubuntu Core installation | RaspberryPi 3

This post helps you install Ubuntu Core OS on your RaspberryPi 3 device

  • Download SD Card Formatter and format existing data from your SD card device (if needed)
  • Flash the Ubuntu Core image for RaspberryPi 3 onto the SD card
  • Now, remove the SD card and insert it in your RaspberryPi 3 device
  • Connect power supply + monitor + ethernet cable from router to your raspberryPi device
  • It is better to use wired ethernet (LAN) instead of wlan, since the wifi network scan won’t work on boot
  • SSH to the raspberryPi machine
ssh your-ubuntu-sso-username@192.168.0.xxx

i.e., ssh prashanth@192.168.0.104

Tips

To check the active machines attached to the router on the current local network:

brew install nmap

sudo nmap -sn 192.168.0.0/24

List of available Jenkins Environment variables

Listed below is a complete set of Jenkins environment variables:

Variable Description
BRANCH_NAME For a multibranch project, this will be set to the name of the branch being built, for example in case you wish to deploy to production from master but not from feature branches; if corresponding to some kind of change request, the name is generally arbitrary (refer to CHANGE_ID and CHANGE_TARGET).
CHANGE_ID For a multibranch project corresponding to some kind of change request, this will be set to the change ID, such as a pull request number, if supported; else unset
CHANGE_URL For a multibranch project corresponding to some kind of change request, this will be set to the change URL, if supported; else unset
CHANGE_TITLE For a multibranch project corresponding to some kind of change request, this will be set to the title of the change, if supported; else unset
CHANGE_AUTHOR For a multibranch project corresponding to some kind of change request, this will be set to the username of the author of the proposed change, if supported; else unset
CHANGE_AUTHOR_DISPLAY_NAME For a multibranch project corresponding to some kind of change request, this will be set to the human name of the author, if supported; else unset
CHANGE_AUTHOR_EMAIL For a multibranch project corresponding to some kind of change request, this will be set to the email address of the author, if supported; else unset
CHANGE_TARGET For a multibranch project corresponding to some kind of change request, this will be set to the target or base branch to which the change could be merged, if supported; else unset
BUILD_NUMBER The current build number, such as “153”
BUILD_ID The current build ID, identical to BUILD_NUMBER for builds created in 1.597+, but a YYYY-MM-DD_hh-mm-ss timestamp for older builds
BUILD_DISPLAY_NAME The display name of the current build, which is something like “#153” by default
JOB_NAME Name of the project of this build, such as “foo” or “foo/bar”
JOB_BASE_NAME Short Name of the project of this build stripping off folder paths, such as “foo” for “bar/foo”
BUILD_TAG String of “jenkins-${JOB_NAME}-${BUILD_NUMBER}”. All forward slashes (“/”) in the JOB_NAME are replaced with dashes (“-”). Convenient to put into a resource file, a jar file, etc. for easier identification
EXECUTOR_NUMBER The unique number that identifies the current executor (among executors of the same machine) that’s carrying out this build. This is the number you see in the “build executor status”, except that the number starts from 0, not 1
NODE_NAME Name of the agent if the build is on an agent, or “master” if run on master
NODE_LABELS Whitespace-separated list of labels that the node is assigned
WORKSPACE The absolute path of the directory assigned to the build as a workspace
JENKINS_HOME The absolute path of the directory assigned on the master node for Jenkins to store data
JENKINS_URL Full URL of Jenkins, like http://server:port/jenkins/ (note: only available if Jenkins URL set in system configuration)
BUILD_URL Full URL of this build, like http://server:port/jenkins/job/foo/15/ (Jenkins URL must be set)
JOB_URL Full URL of this job, like http://server:port/jenkins/job/foo/ (Jenkins URL must be set)
GIT_COMMIT The commit hash being checked out
GIT_PREVIOUS_COMMIT The hash of the commit last built on this branch, if any.
GIT_PREVIOUS_SUCCESSFUL_COMMIT The hash of the commit last successfully built on this branch, if any
GIT_BRANCH The remote branch name, if any
GIT_LOCAL_BRANCH The local branch name being checked out, if applicable
GIT_URL The remote URL. If there are multiple, they will be GIT_URL_1, GIT_URL_2, etc
GIT_COMMITTER_NAME The configured Git committer name, if any
GIT_AUTHOR_NAME The configured Git author name, if any
GIT_COMMITTER_EMAIL The configured Git committer email, if any
GIT_AUTHOR_EMAIL The configured Git author email, if any
SVN_REVISION Subversion revision number that’s currently checked out to the workspace, such as “12345”
SVN_URL Subversion URL that’s currently checked out to the workspace
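The BUILD_TAG slash substitution can be reproduced in shell; the values below are made up for illustration:

```shell
# Jenkins replaces "/" in JOB_NAME with "-" when building BUILD_TAG
JOB_NAME='foo/bar'
BUILD_NUMBER=153
BUILD_TAG="jenkins-$(printf '%s' "$JOB_NAME" | tr '/' '-')-$BUILD_NUMBER"
echo "$BUILD_TAG"    # jenkins-foo-bar-153
```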


Stop/kill ‘docker run’ on aborting Jenkins job

A small post that helps you stop and remove docker containers when a Jenkins job is aborted.

  • Shell snippet that runs in Jenkins Build > Execute shell
# function to trigger on abort condition
getAbort()
{
 docker rm $(docker stop $(docker ps -aq --filter="name=Test$BUILD_ID"))
}

# declare on abort condition
trap 'getAbort; exit' SIGHUP SIGINT SIGTERM

# docker pull and run
docker pull httpd
docker run --name Test$BUILD_ID httpd
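The trap pattern can be tried without docker. This sketch runs a worker shell with the same kind of trap and aborts it, using a flag file in place of the docker cleanup:

```shell
# Flag file stands in for `docker rm $(docker stop ...)`
flag=$(mktemp)

# The worker starts a background sleep and waits on it, so a trapped
# signal is handled immediately (a foreground sleep would delay the trap)
FLAG="$flag" sh -c '
    sleep 5 &
    spid=$!
    trap "kill $spid 2>/dev/null; echo cleaned > \"$FLAG\"; exit 0" TERM
    wait
' &
worker=$!

sleep 1                 # give the worker time to install its trap
kill -TERM "$worker"    # simulate the Jenkins job abort
wait "$worker" 2>/dev/null || true
cat "$flag"             # cleaned
```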