Decommission USB Blocking on Linux

If you previously blocked USB storage ports on your Linux machine to secure it against unauthorized access, you may need to reverse these changes. This guide provides a step-by-step approach to remove the USB block rule, allowing USB storage devices to function normally again.

1. Locate the USB Block Rule

If you’ve followed the previous guide or a similar process to block USB storage, you should have created a rule file in the /etc/udev/rules.d/ directory. The filename for this rule is likely 99-usbblock.rules.

2. Remove the USB Block Rule File

Use the following command to delete the file, thereby removing the block on USB storage devices:

sudo rm /etc/udev/rules.d/99-usbblock.rules

This step deletes the file that contains the blocking rule. Without this file, udev will no longer apply the restriction.
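As a defensive variant (my addition, not part of the original guide), you can check that the rule file exists before trying to delete it, so the command is safe to re-run:

```shell
# Remove a udev rule file only if it is present (sketch; on a real
# system the rm needs sudo for files under /etc/udev/rules.d/).
remove_rule() {
    rule_file="$1"
    if [ -f "$rule_file" ]; then
        rm "$rule_file" && echo "removed $rule_file"
    else
        echo "no rule file at $rule_file - nothing to do"
    fi
}

remove_rule /etc/udev/rules.d/99-usbblock.rules
```

This makes the cleanup idempotent: running it twice reports "nothing to do" instead of failing.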

3. Reload udev Rules

After removing the rule file, you need to reload udev rules so that the system updates and stops enforcing the deleted rule:

sudo udevadm control --reload-rules

4. Trigger udev to Apply Changes

Finally, trigger udev to apply the updated rules immediately:

sudo udevadm trigger

With these steps, your Linux machine will no longer block USB storage devices, allowing them to be recognized and used as normal.

5. Verifying USB Functionality

To verify that the USB storage ports are no longer blocked:

Plug in a USB storage device (e.g., a flash drive). Use a command like lsblk or fdisk -l to check if the device is detected:

lsblk

If you see the USB storage device listed, then decommissioning was successful, and the system is now allowing USB storage access.

Block USB storage ports in Linux

Blocking USB storage on Linux is a straightforward process using udev rules. Follow these steps to configure and enforce a rule to disable USB storage access:

1. Create a USB Block Rule

The following command creates a rule in the udev directory that disables any USB storage device from being authorized for use:

echo 'SUBSYSTEM=="usb", ATTR{authorized}="0"' | sudo tee /etc/udev/rules.d/99-usbblock.rules

This rule works by setting ATTR{authorized}="0" for any device under the usb subsystem, effectively blocking it. Be aware that it matches every USB device, not just storage, so other USB peripherals (e.g., a wired keyboard or mouse) may also be blocked.
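If the goal is to block only storage rather than every USB device, a common alternative (sketched here as my assumption, not the guide's method) is to blacklist the usb-storage kernel module via modprobe:

```shell
# Block only USB mass storage by preventing the usb-storage kernel
# module from loading. Staged in a temp file here; on a real system
# the target is /etc/modprobe.d/blacklist-usbstorage.conf, written
# with sudo tee.
conf=$(mktemp)
printf '%s\n' 'blacklist usb-storage' > "$conf"
cat "$conf"
```

With this approach, keyboards and mice keep working while flash drives are never recognized as block devices.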

2. Reload udev Rules

After adding the rule, you need to reload the udev rules for it to take effect:

sudo udevadm control --reload-rules

3. Trigger udev to Apply the Rule

Finally, use the following command to trigger udev and apply the rule immediately:

sudo udevadm trigger

With this configuration, any USB storage device plugged into your Linux machine will be blocked.

echo command – Linux Shell Basics I

# print the same text
echo "Jesus loves you"
output: Jesus loves you

# insert a value in a string
x=you
echo "Jesus loves $x"
output: Jesus loves you

# print in a new line
echo -e "Jesus\nloves\nyou"
output: 
Jesus
loves
you

# remove spaces between two characters
echo -e "Jesus \bloves  \b\byou"
output: Jesuslovesyou

# a tab space in-between two characters
echo -e "Jesus\tloves\tyou"
output: Jesus   loves   you

# vertical space in-between two characters
echo -e "Jesus\vloves\vyou"
output:



# print all the files and folder names in the directory (similar to ls cmd)
echo *
echo *.txt
echo /root/*

# \r returns the cursor to the start of the line, so the text after it overwrites "Hey "
echo -e "Hey \rJesus loves you"
output: Jesus loves you

# Omits trailing new line
echo -n "Jesus loves you "
output: Jesus loves you root@2d1b43d3196c:~#

# get input and print it
echo "whats your name?"; read name
whats your name?
sams
echo $name
sams
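One portability note worth adding to the examples above: echo -e is a bashism, and its behavior varies between shells. In scripts, printf gives the same escape handling everywhere; a quick sketch reproducing the examples:

```shell
# printf interprets \n, \t, etc. without needing a flag, and adds no
# trailing newline unless you ask for one
printf 'Jesus\tloves\tyou\n'     # tab-separated, like echo -e with \t
printf 'Jesus\nloves\nyou\n'     # one word per line, like echo -e with \n
printf 'Jesus loves you'         # no trailing newline, like echo -n
```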



Manage Users in Ubuntu Linux (Debian)

USERS
LIST
# list all the users in the Linux machine
awk -F: '{ print $1}' /etc/passwd | grep -v '_'
cut -d: -f1 /etc/passwd

# check current user
whoami

# show user id  
id -u <user-name>
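To list only regular (human) accounts rather than system ones, you can filter /etc/passwd by UID. This sketch assumes the Debian convention that regular users start at UID 1000 (65534 is the special nobody account):

```shell
# print account names whose UID falls in the regular-user range
awk -F: '$3 >= 1000 && $3 < 65534 { print $1 }' /etc/passwd
```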

CREATE
# create user
useradd <user-name>
(i.e., useradd sams)

# create user's home dir alongside
useradd -m <user-name>
su - <user-name>

# create user with options
useradd -m -d <home-dir> -s <shell> -c <comment> -U <user-name>
(i.e., useradd -m -d /var/sams -s /bin/bash -c "Prashanth Sams" -U sams)



# create user with an existing group
useradd -G <existing-group-name> -m <user-name>
(i.e., useradd -G samsgroup -m sams)

# create user with custom id
useradd -u 2020 <user-name>
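The create commands above fail if the account already exists. A small guard (a sketch of my own, not from the original notes) checks with id first, so the step is safe to re-run:

```shell
# Create a user with a home directory only if the name is free.
# The useradd branch needs root.
create_user() {
    if id "$1" >/dev/null 2>&1; then
        echo "user $1 already exists"
    else
        useradd -m "$1"
    fi
}

create_user root    # root always exists, so this just reports it
```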

SWITCH / DELETE
# switch user 
su <user-name>
(i.e., su sams)

# delete user 
userdel <user-name> 
(i.e., userdel sams)

# delete user's home directory and mail spool
userdel -r <user-name> 

# lock a user account
usermod -L <user-name> 
(i.e., usermod -L sams)

PASSWORD 
# set user password
passwd <user-name>



# ignore asking for passwd on entering sudo
[add the below line in the /etc/sudoers file]
%<group-name>   ALL=(ALL) NOPASSWD: ALL
(i.e., %sams   ALL=(ALL) NOPASSWD: ALL)
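A syntax error in /etc/sudoers can lock you out of sudo entirely, so it is worth validating the line before installing it. This sketch (my addition) stages the line in a temp file and checks it with visudo -c when visudo is available:

```shell
# Stage the NOPASSWD line, then syntax-check it before installing.
tmp=$(mktemp)
printf '%s\n' '%sams   ALL=(ALL) NOPASSWD: ALL' > "$tmp"

if command -v visudo >/dev/null 2>&1; then
    # -c checks syntax, -f points it at our staged file
    visudo -cf "$tmp" && echo "syntax OK - safe to install"
    # on a real system: sudo install -m 440 "$tmp" /etc/sudoers.d/sams
else
    echo "visudo not found; skipping check"
fi
```

Installing into /etc/sudoers.d/ rather than editing /etc/sudoers directly also keeps the change easy to revert.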

TROUBLESHOOT 
# user info
cat /etc/passwd



# fetch specific user info
grep <user-name> /etc/passwd
cat /etc/passwd | grep <user-name>

GROUPS
LIST
# list all the local groups
cut -d: -f1 /etc/group | sort

# show user's existing group names
id -Gn
groups 

# show group id(s) or name(s)
[primary user group id]
id -g
[show all the user's group ids]
id -G

CREATE / ASSIGN
# create group 
groupadd <group-name>
(i.e., groupadd samsgroup)

# assign group
[set primary group ]
usermod -g <group-name> <user-name>
(i.e., usermod -g samsgroup sams)

[add user to a group]
usermod -aG <group-name> <user-name> 
(i.e., usermod -a -G samsgroup sams)

EDIT 
# edit group roles
[make sure vi is installed]
visudo
vi /etc/sudoers
[add the below line in the file]
<group-name>     ALL=(ALL) ALL
(i.e., samsgroup ALL=(ALL) ALL)

DELETE
# delete group
groupdel <group-name>
(i.e., groupdel samsgroup)

# remove a user from a group (this doesn't affect the user's primary group)
deluser <user-name> <group-name>
gpasswd -d <user-name> <group-name>
(i.e., deluser sams samsgroup)
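After assigning or removing groups, a quick membership check confirms the change took effect. A small sketch (the helper name in_group is mine):

```shell
# Succeeds when the user belongs to the group (exact name match).
in_group() {
    id -Gn "$1" 2>/dev/null | tr ' ' '\n' | grep -qx "$2"
}

in_group root root && echo "root is in group root"
```

Note that a user must start a new login session (e.g., su - user) before freshly added group memberships show up in id.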

SSH RaspberryPi and activate Jenkins server

Need your Jenkins server on a Raspberry Pi? Here's how…

  • Install Raspbian OS and log into the device
  • Open Preferences > Raspberry Pi Configuration > Interfaces and enable SSH
  • Let us SSH into this RaspberryPi device from another machine
ssh pi@raspberrypi.local
  • The same can be achieved by giving the IP address instead of raspberrypi.local
ssh pi@192.168.0.105
  • Now, enter the default password, which is,
raspberry
  • Type passwd to change the current password

 

Jenkins Installation

  • Download Jenkins key and binary; it usually downloads the latest versions
wget -q -O - https://jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -

sudo sh -c 'echo deb http://pkg.jenkins-ci.org/debian binary/ > /etc/apt/sources.list.d/jenkins.list'
  • Install Jenkins
sudo apt-get update

sudo apt-get install jenkins -y
  • Set the Jenkins server to start automatically at boot, then check its status
sudo systemctl enable jenkins

systemctl status jenkins.service
  • Open one of the following links to see the Jenkins interface

http://raspberrypi.local:8080/

http://192.168.0.105:8080/

  • Copy and paste the Jenkins default password
sudo cat /var/lib/jenkins/secrets/initialAdminPassword

 

Java Installation

  • The latest Jenkins versions no longer support Java 1.8, which comes by default with Raspbian OS.
  • Let’s see the commands to uninstall JDK 1.8 on Raspberry Pi OS
sudo update-alternatives --display java

sudo update-alternatives --remove "java" "/usr/lib/jvm/jdk-8-oracle-arm32-vfp-hflt/jre/bin/java"
sudo update-alternatives --remove "javac" "/usr/lib/jvm/jdk-8-oracle-arm32-vfp-hflt/jre/bin/javac"

sudo rm -rf jdk-8-oracle-arm32-vfp-hflt/

sudo update-alternatives --config java
sudo update-alternatives --config javac
  • Now, let us install Java 11, which is compatible with Jenkins 2.0+ versions
wget https://github.com/bell-sw/Liberica/releases/download/11.0.2/bellsoft-jdk11.0.2-linux-arm32-vfp-hflt.deb

sudo apt-get install ./bellsoft-jdk11.0.2-linux-arm32-vfp-hflt.deb

sudo update-alternatives --config javac
sudo update-alternatives --config java

Linux Environment Variables

ENV

Displays all the environment variables

$ env
$ env | grep HOME
$ printenv
$ printenv HOME

$ env WHO='Prashanth Sams' | grep WHO
WHO=Prashanth Sams

SET

Displays all the environment variables as well as shell variables (also called local variables)

$ set
$ set | less
  • Unset environment variables
$ export WHO='Prashanth Sams'
$ unset WHO
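The difference between a plain shell variable and an exported one shows up in child processes; a quick sketch:

```shell
unset WHO                          # start from a clean slate
WHO='Prashanth Sams'               # shell (local) variable only
before=$(sh -c 'echo "$WHO"')      # a child process does not see it
export WHO                         # promote it to an environment variable
after=$(sh -c 'echo "$WHO"')       # now the child sees it
echo "before='$before' after='$after'"
unset WHO                          # remove it again
```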

Persistent Environment variables

Environment variables are loaded from the following files on starting each shell session

/etc/environment
/etc/profile
~/.bashrc

Default Environment variables

Different ways to find the machine username

$ whoami
prashanthsams

$ echo $USER
prashanthsams

$ bash -c 'echo $USER'
prashanthsams

$ echo $HOME
/Users/prashanthsams
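To make a variable survive new sessions, append its export line to one of the files listed above (e.g., ~/.bashrc). This sketch (the helper name persist_var is mine) avoids adding a duplicate line on repeat runs:

```shell
# Append a line to a profile file only if it is not already there.
persist_var() {
    file="$1"; line="$2"
    grep -qxF "$line" "$file" 2>/dev/null || echo "$line" >> "$file"
}

profile=$(mktemp)                   # stand-in for ~/.bashrc
persist_var "$profile" "export WHO='Prashanth Sams'"
persist_var "$profile" "export WHO='Prashanth Sams'"   # no duplicate added
```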

Create own Docker images in AWS ECR

Now, you can host your own custom Docker image in AWS ECR instead of hub.docker.com, keeping it private and secure.

  • Install aws cli library 
pip3 install --upgrade awscli
  • Configure aws in your local machine
aws configure

  • After configuration, you can validate these details as seen below
aws configure list

  • Build a Dockerfile to create an image locally
  • Log in to the AWS console and create a repository, as you would on GitHub
  • Now, open the terminal and login to AWS ECR from cli
aws ecr get-login --no-include-email --region ap-southeast-1
  • Copy and paste the auto-generated login details
  • Build docker image as normal
docker build -t your-image-name .
  • Create tag for the image you create (here, xxxxxxxxxxxxxx is to be copied from the remote aws ecr repo)
docker tag your-image-name:latest xxxxxxxxxxxxxx.dkr.ecr.ap-southeast-1.amazonaws.com/your-image-name:latest
  • Push it to the remote AWS ECR
docker push xxxxxxxxxxxxxx.dkr.ecr.ap-southeast-1.amazonaws.com/your-image-name:latest
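The registry hostname used in the tag and push steps follows a fixed pattern; a small sketch that assembles it (the account id 123456789012 below is a dummy placeholder, not a real account):

```shell
# Build <account>.dkr.ecr.<region>.amazonaws.com/<image>:<tag>
ecr_ref() {
    printf '%s.dkr.ecr.%s.amazonaws.com/%s:%s\n' "$1" "$2" "$3" "${4:-latest}"
}

ecr_ref 123456789012 ap-southeast-1 your-image-name latest
# -> 123456789012.dkr.ecr.ap-southeast-1.amazonaws.com/your-image-name:latest
```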

 

Dockerize and integrate SonarQube with Jenkins

This post builds on the previous one. Let’s see how to integrate SonarQube with Jenkins for code quality analysis in a live Docker container.

Dockerize SonarQube

  • Create a docker-compose.yml file with sonarqube and postgres latest images


version: "3.5"
services:
  sonarqube:
    image: sonarqube
    ports:
      - "9000:9000"
    networks:
      - sonarnet
    environment:
      - SONARQUBE_JDBC_URL=jdbc:postgresql://db:5432/sonar
    volumes:
      - sonarqube_conf:/opt/sonarqube/conf
      - sonarqube_data:/opt/sonarqube/data
      - sonarqube_extensions:/opt/sonarqube/extensions
      - sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins
  db:
    image: postgres
    networks:
      - sonarnet
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
    volumes:
      - postgresql:/var/lib/postgresql
      - postgresql_data:/var/lib/postgresql/data
networks:
  sonarnet:
    driver: bridge
volumes:
  sonarqube_conf:
    name: sonarqube_conf
    driver: local
  sonarqube_data:
    name: sonarqube_data
    driver: local
  sonarqube_extensions:
    name: sonarqube_extensions
    driver: local
  sonarqube_bundled-plugins:
    name: sonarqube_bundled
    driver: local
  postgresql:
    name: postgresql
    driver: local
  postgresql_data:
    name: postgresql_data
    driver: local

  • Make sure you have sonar and sonar-scanner pre-installed on your local machine
  • Set login username and password as admin while executing the runner
sonar-scanner -Dsonar.projectKey=project_key -Dsonar.sources=. -Dsonar.host.url=http://localhost:9000 -Dsonar.login=admin -Dsonar.password=admin

Jenkins Integration

  • Install SonarQube Scanner Jenkins plugin

  • Go to Manage Jenkins > Configure System and update the SonarQube servers section

  • Go to Manage Jenkins > Global Tool Configuration and update SonarQube Scanner as in the below image

  • Now, create a jenkins job and setup SCM (say, git)
  • Choose Build > Execute SonarQube Scanner from job configure

  • Now, provide the required sonar properties in Analysis properties field. [Mention the path to test source directories in the following key, sonar.sources]

  • These sonar properties can also be supplied from a file inside the project, named sonar-project.properties (see github for more details)
  • Now, update Path to project properties field in the project execute build

  • Observe the results in docker container’s host url
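The sonar-project.properties file mentioned above might look like the following; this sketch writes a starter file (project key, URL, and token are placeholders carried over from the examples):

```shell
# Write a starter sonar-project.properties; staged in a temp file here,
# in a real project it would live at the repository root.
props=$(mktemp)
cat > "$props" <<'EOF'
sonar.projectKey=project_key
sonar.sources=.
sonar.host.url=http://localhost:9000
sonar.login=token_id
EOF
cat "$props"
```

With this file in place, the Analysis properties field (or a plain sonar-scanner run) no longer needs the -D flags.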

Configure SonarQube for code quality analysis

SonarQube provides code quality analysis for most of the key programming languages.

  • Download and install sonarqube
MAC
# install sonar & sonar-scanner
brew install sonar
brew install sonar-scanner
  • Initialize SonarQube server
sonar.sh start
  • And navigate to the below URL. By default, the SonarQube server runs on port 9000
http://localhost:9000/
  • The port 9000 can be changed in the sonar.properties file, located at the following path
/usr/local/Cellar/sonarqube/7.4/libexec/conf/sonar.properties
sonar.web.port=9001

  • Restart Sonar for the change to take effect
# stop sonar 
sonar.sh stop

# start sonar
sonar.sh start
  • By default, you can login with admin / admin as user / pass credentials

  • Create a new project on clicking Projects tab in the menu or by clicking + icon on top-right corner of the page

  • Generate a login token for the project

  • Observe the generated token as in the below image

  • Now, setup a unique project key with appropriate platform; the key can be your project name

  • You will see dynamically generated runner CLI commands for execution through sonar-scanner

  • Now, run the above command in the terminal
sonar-scanner -Dsonar.projectKey=project_key -Dsonar.sources=. -Dsonar.host.url=http://localhost:9000 -Dsonar.login=token_id
  • Observe the execution status in terminal and view reports in the sonarqube server

  • Go to the reports section
http://localhost:9000/dashboard?id=project_key

OPTIONAL [Standalone installation]

  • Download sonarqube manually
https://www.sonarqube.org/downloads/

source url: https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-7.4.zip

  • Unzip and move into /bin folder in your terminal
MAC
sonarqube-7.5/bin/macosx-universal-64

LINUX
sonarqube-7.5/bin/linux-x86-64
  • Initialize SonarQube server
./sonar.sh start
  • The port 9000 is the default; it can be modified in /sonarqube-7.5/conf/sonar.properties
  • Restart Sonar for the change to take effect
# stop sonar 
./sonar.sh stop

# start sonar
./sonar.sh start
  • Download sonar-scanner manually
https://docs.sonarqube.org/display/SCAN/Analyzing+with+SonarQube+Scanner

source url: https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-3.3.0.1492.zip

  • Unzip and locate /bin/sonar-scanner in the terminal to execute sonarqube runner
~/Downloads/sonar-scanner-3.3.0.1492-macosx/bin/sonar-scanner -Dsonar.projectKey=project_key -Dsonar.sources=. -Dsonar.host.url=http://localhost:9000 -Dsonar.login=token_id

Docker CLI cheatsheet

Docker Containers

LIST CONTAINERS
# lists all active containers [use alias 'ps' instead of 'container']
docker ps
docker container ls

# lists all containers [active and in-active/exited/stopped]
docker ps -a
docker ps --all
docker container ls -a

[lists only container IDs]
docker ps -a | awk '{print $1}'
docker container ls -q

[lists only the in-active/exited/stopped containers]
docker ps -f "status=exited"
docker ps --filter "status=exited"
[lists only the created containers]
docker ps --filter "status=created"
[lists only the running containers]
docker ps --filter "status=running"
[lists can also be filtered using names]
docker ps --filter "name=xyz"

CREATE CONTAINERS
# create a container without starting it [status of 'docker create' will be 'created']
docker create image_name
docker create --name container_name image_name
(i.e, docker create --name psams redis)

# create & start a container with/without -d (detach) mode
[-i, interactive keeps STDIN open even in detach mode]
[docker run = docker create + docker start]
docker run -it image_id/name
docker run -it -d image_id
docker run -it -d image_name
(i.e, docker run -it -d ubuntu)

# create & debug a container [the container status becomes 'exited' after you exit the console]
docker run -it image_id bash
docker run -i -t image_id /bin/bash
(i.e, docker run -it ee8699d5e6bb bash)

# name a docker container at creation
docker run --name container_name -d image_name
(i.e, docker run --name psams -d centos)

# publish a container's exposed port to the host while creating the container [-p in short, --publish]
docker run -d -p local-machine-port:internal-machine-port image_name
(i.e, docker run -d -p 8081:80 nginx)

# mount a volume while creating a container
[-v, volume maps a folder from our local machine to a path in the container]
docker run -d -v local-machine-path:internal-machine-path image_name
(i.e, docker run -d -p 80:80 -v /tmp/html/:/usr/share/nginx/html nginx)
[http://localhost/sams.html where sams.html is located in the local machine path]

# auto-restart containers [in case of a failure, or if docker stops by itself]
docker run -dit --restart=always image_name
docker run -dit --restart always image_name
[restart only on failure]
docker run -dit --restart on-failure image_name
[restart unless stopped]
docker run -dit --restart unless-stopped image_name

# update specific container's restart service
docker update --restart=always container_id
[update all the available containers]
docker update --restart=no $(docker ps -a -q)

MANIPULATE CONTAINERS
# debug/enter a running docker container [-i, interactive and -t, tty are mandatory for debugging]
docker exec -it container_id bash
(i.e, docker exec -it 49c19634177c bash)

# rename docker container
docker rename container_id target_container_name
docker rename container_name target_container_name
(i.e, docker rename 49c19634177c sams)

START/STOP/REMOVE CONTAINERS
# stop container
[stop single container]
docker stop container_id
docker container stop container_id
[stops all the containers]
docker stop $(docker ps -aq)
# kill container [docker stop and kill does the same job; but, 'stop' does safe kill and 'kill' does not]
[docker stop -> send SIGTERM and then SIGKILL after grace period]
[docker kill -> send SIGKILL]
[kill single container]
docker kill container_id
docker container kill container_id
[kills all the containers]
docker kill $(docker ps -aq)
# start container
[start a single container]
docker start container_id
docker container start container_id
[start all containers]
docker start $(docker ps -aq)
# restart container
[restart a single container]
docker restart container_id
docker container restart container_id
[restarts all containers]
docker restart $(docker ps -aq)

# remove all containers
docker rm $(docker ps -aq)

# remove a single container [works only on exited containers]
docker rm container_id
docker container rm container_id
(i.e, docker rm 49c19634177c)

# remove all the exited containers
docker ps -a | grep Exit | cut -d ' ' -f 1 | xargs docker rm

# force remove single container [works on active containers by stopping them]
docker rm -f container_id
(i.e, docker rm -f 49c19634177c)

OTHERS
# container full details
docker inspect container_name/id
(i.e., docker inspect 49c19634177c)

# get specific information from the container details
docker inspect -f '{{ property_key }}' container_id
(i.e., docker inspect -f '{{ .Config.Hostname }}' 40375860ee48)

# see specific container's logs
docker logs --follow container_name/id
(i.e., docker logs --follow 40375860ee48)

Docker Images

DOWNLOAD IMAGES
# pull docker images
docker pull image_name
(i.e, docker pull centos)
(i.e, docker pull prashanthsams/selenium-ruby)

# list all images
docker images

# list all dangling images
[the term 'dangling' means unused]
docker images -f dangling=true

REMOVE IMAGES
# remove single image [works only on images with no active containers]
docker image rm image_id
(i.e, docker image rm e9ebb50d366d)

# remove all images
docker rmi $(docker images -q)

# remove all dangling images
docker image prune
docker rmi -f $(docker images -f "dangling=true" -q)

OTHERS
# save and load image
[save an existing image]
docker save existing_image > "target_image.tar"
(i.e., docker save nginx > "image_name.tar")
[load the newly generated image]
docker load -i target_image.tar



# get complete details about an image
docker inspect image_name/id
(i.e., docker inspect prashanthsams/selenium-ruby)

# check the history of a specific image
docker history image_name
(i.e., docker history prashanthsams/selenium-ruby)

MISC

# auto-start docker daemon service on device boot
LINUX
sudo systemctl enable docker
MAC
sudo launchctl start docker

# restart docker daemon service
LINUX
sudo systemctl restart docker

# copy a file into docker container from local (manual)
docker cp /source_path container_id:/target_path
(i.e., docker cp /tmp/source_file.crx c46e6d6ef9ba:/tmp/dest_file.crx)

# copy a file from docker container to local (manual)
docker cp container_id:/source_path /target_path
(i.e., docker cp c46e6d6ef9ba:/tmp/source_file.crx /tmp/dest_file.crx)

# remove stopped containers, images with no containers, networks without containers 
docker system prune -a

# search docker hub for an image
docker search search_by_name
(i.e., docker search prashanthsams/selenium-ruby)



# check container memory usage (like top in linux) 
docker stats



# check container memory using 3rd party lib
LINUX
brew install ctop
ctop -a



# find changes to the container's filesystem from start
docker diff container_id

# zip and export container
docker export -o "container_name.tar" container_id
docker export --output "container_name.tar" container_id
docker export container_id > "container_name.tar"

DOCKER-COMPOSE

# run docker compose to create, start and attach to containers for a service
[the below cmd executes the rules written under docker-compose.yml file in your project]
docker-compose up
[run docker compose in a detach mode (background)]
docker-compose up -d
[run specific service from your yml file]
docker-compose up service_name

# similar to 'docker-compose up', but it overrides the cmd used in the service config
['run' overrides the configured command; if the service config starts with bash, we can override it]
docker-compose run service_name python xyz.py
docker-compose run service_name python xyz.py shell
['run' does not create any of the ports specified in the service configuration, so use '--service-ports']
docker-compose run --service-ports service_name
[manually binding ports]
docker-compose run --publish 8080:80 service_name

# scale up containers in a service
docker-compose up --scale service_name=5
(i.e., docker-compose up --scale firefoxnode=5)
# quits/shuts down all the services
docker-compose down

# check if all the composed containers are running
docker-compose ps
# start/stop service
[make sure the container state exists in-case if you need to start it]
docker-compose stop
docker-compose stop service_name
docker-compose start
docker-compose start service_name

# pause and unpause docker container's state that runs on a service
docker-compose pause
docker-compose unpause

# check all environment variables available to your running service
docker-compose run service_name env



OTHERS
[check all the active images that runs on a service]
docker-compose images
[update docker images with the latest changes in yml file]
docker-compose pull
[check logs]
docker-compose logs

DOCKER HUB – REMOTE REGISTRY (workflow)

# convert your Dockerfile to a docker image
[use -t to set image name and tag]
docker build .
docker build dockerfile_path
docker build -t custom_image_name dockerfile_path
[use tag if needed]
docker build -t custom_image_name:tag dockerfile_path



# commit docker container for an image to be prepared
[create a container]
docker run -it -d locally_created_image
[get container id]
docker ps
[make commit]
docker commit container_id custom_image_name
docker commit container_id username/custom_image_name
(i.e., docker commit 822b26bdd62d prashanthsams/psams)

# log in to your docker hub account with username & password
[dynamically provide dockerhub login details on runtime]
docker login
[pre-stored details for dockerhub login]
docker login -u your_dockerhub_username -p your_dockerhub_password

# push the commit to your remote docker hub account
docker push custom_image_name
docker push username/custom_image_name
(i.e., docker push prashanthsams/psams)

# pull your newly created remote docker image
docker pull newly_created_image
(i.e., docker pull prashanthsams/psams)
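The references used in the tag, push, and pull steps above all follow the same username/image:tag pattern; a small sketch that composes them (the helper name hub_ref and the tag values are mine, the account/image names are the placeholders from the examples):

```shell
# Compose the remote reference used by docker tag/push/pull.
hub_ref() {
    printf '%s/%s:%s\n' "$1" "$2" "${3:-latest}"
}

hub_ref prashanthsams psams          # -> prashanthsams/psams:latest
hub_ref prashanthsams psams v1.0     # -> prashanthsams/psams:v1.0
```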