Ordered Chaotic Discussions

Lazy Automation



Let's face it: we are all a little bit lazy, and why do something the hard way when you can get a computer to do it for you? This topic is for sharing tasks you've automated to make your life easier or more efficient, be it in Bash, Python, C, PowerShell, Excel, hell, even Redstone. If you've automated it, I want to see it.

mp3 conversion

I made this script a while ago to convert my music collection to mp3. It takes one argument, either wav, flac, or ogg, and converts all matching files in the current directory to mp3.


#!/bin/bash

# loop through files in the currently active directory
for f in *.$1; do
        # case statement for arguments (you can convert wav, flac, and ogg to mp3)
        case $1 in
                wav)
                        # converts wav files to mp3 using lame
                        lame -b320 "$f" "${f%.*}".mp3
                        ;;
                flac)
                        # converts flac files to mp3 using flac/lame
                        flac -cd "$f" | lame -b320 - "${f%.*}".mp3
                        ;;
                ogg)
                        # converts ogg to mp3 using oggdec/lame
                        oggdec "$f" -o - | lame -b320 - "${f%.*}".mp3
                        ;;
                *)
                        echo "Usage: $0 {wav|flac|ogg}"
                        exit 1
                        ;;
        esac
done

shopt -s nullglob
# check for mp3 files in current directory
if [[ -n $(echo *.mp3) ]]; then
        # check if mp3 directory exists
        if [ ! -d "mp3" ]; then
                # creates mp3 directory if it doesn't already exist
                mkdir mp3
        fi
        # move all mp3 files to the mp3 directory
        mv *.mp3 mp3/
fi
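The renaming in these scripts leans on bash parameter expansion: `${f%.*}` strips the shortest trailing `.extension` from the filename, leaving the base name. A quick illustration (the filename is just an example):

```shell
f="01 - Some Track.flac"
# ${f%.*} removes the shortest match of ".*" from the end, leaving the base name
echo "${f%.*}.mp3"   # prints: 01 - Some Track.mp3
```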

FLAC conversion

After a while of using mp3 for my music I decided to switch to FLAC, so I adapted the script above to convert wav files to flac:


#!/bin/bash

# loop through files in currently active directory
for f in *.wav; do
        # converts wav files to flac using sox
        sox "$f" "${f%.*}".flac
done

shopt -s nullglob
# check for flac files in current directory
if [[ -n $(echo *.flac) ]]; then
        # check if flac directory exists
        if [ ! -d "flac" ]; then
                # creates flac directory if it doesn't already exist
                mkdir flac
        fi
        # move all flac files to the flac directory
        mv *.flac flac/
fi

LFS Chroot

When I built Linux From Scratch I often needed to detach my USB flash drive to test it, but also to enter the chroot again later, so I made this. It takes two arguments, m and u: the first mounts everything and enters the chroot, and the second unmounts and cleans everything up.


#!/bin/bash

export LFS=/mnt/lfs

case $1 in
        m)
                mount -v -t ext4 /dev/sde1 $LFS
                mount -v -t ext4 /dev/sde2 $LFS/home

                mount -v --bind /dev $LFS/dev

                mount -vt devpts devpts $LFS/dev/pts -o gid=5,mode=620
                mount -vt proc proc $LFS/proc
                mount -vt sysfs sysfs $LFS/sys
                mount -vt tmpfs tmpfs $LFS/run

                if [ -h $LFS/dev/shm ]; then
                        mkdir -pv $LFS/$(readlink $LFS/dev/shm)
                fi

                chroot "$LFS" /usr/bin/env -i              \
                    HOME=/root TERM="$TERM" PS1='\u:\w\$ ' \
                    PATH=/bin:/usr/bin:/sbin:/usr/sbin     \
                    /bin/bash --login
                ;;
        u)
                umount -v $LFS/dev/pts
                umount -v $LFS/dev
                umount -v $LFS/run
                umount -v $LFS/proc
                umount -v $LFS/sys

                umount -v $LFS/home
                umount -v $LFS
                ;;
        *)
                echo "Usage: $0 {m|u}"
                ;;
esac

I also made a script to automate the creation of a bootable Windows flash drive which I posted in more detail here:

That's it for now, I'll post new stuff later when I have more to show.


(The Lazy) #3

A game I like to play often is Cataclysm-DDA, but I wanted to keep up with the latest development versions, so I made a script that will check for updates and compile two versions of the game: one with tile support and one without.


#!/bin/bash

UPDATES=$(git pull) # check for updates and download them. Store the result to check whether we got updates
DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd ) # gets the current directory and saves the output

RESPONSE="N" # default response used when already up to date

shopt -s extglob

if [[ $UPDATES = "Already up to date." ]]; then # checks if the latest version is running and if true asks if you want to go ahead and compile
        echo "Already up to date."
        echo "Continue Building (y/N)?"
        read RESPONSE
        if [[ ${RESPONSE,,} != "y" ]]; then
                exit 0
        fi
fi

if [ -e output.log ]; then # checks if the log already exists and if true it removes it
        rm output.log
fi

make clean |& tee -a output.log #prepare the source folder for compilation

echo "#############################" |& tee -a output.log
echo "Compiling with tiles build" |& tee -a output.log
echo "#############################" |& tee -a output.log
make $MAKE_FLAGS RELEASE=1 LTO=1 TILES=1 SOUND=1 LUA=1 CLANG=1 CCACHE=1 |& tee -a output.log #compiles a version of cataclysm with tiles

echo "#############################" |& tee -a output.log
echo "Compiling default build" |& tee -a output.log
echo "#############################" |& tee -a output.log
make $MAKE_FLAGS RELEASE=1 LTO=1 LUA=1 CLANG=1 CCACHE=1 |& tee -a output.log #compiles a non tiles build of cataclysm

echo "#############################" |& tee -a output.log
echo "Installing cataclysm DDA" |& tee -a output.log
echo "############################" |& tee -a output.log

if [ ! -d $DIR/$INSTALL_DIR ]; then # check to see if the install folder exists and if it does not it creates it
        mkdir $DIR/$INSTALL_DIR
fi

rm -rf $DIR/$INSTALL_DIR/!(save|config|templates) # remove the old version of the game
cp -r $DIR/data/ $DIR/lua/ $DIR/gfx/ $DIR/cataclysm $DIR/cataclysm-tiles $INSTALL_DIR # copies over the latest build to $INSTALL_DIR
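The `!(save|config|templates)` pattern in the install step is a bash extglob negation (which is why the script enables `shopt -s extglob`): it expands to everything in the directory except the listed names, so saves and configs survive the wipe. A quick self-contained demo:

```shell
shopt -s extglob
dir=$(mktemp -d)
touch "$dir"/save "$dir"/config "$dir"/cataclysm
# !(save|config) matches every name in the directory except save and config
matches=$(cd "$dir" && echo !(save|config))
echo "$matches"   # prints: cataclysm
```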

Don't judge too hard. It's the first real script I've made in a LONG time.


Some more scripts I’ve made because why not.

BTRFS snapshots

Quick and dirty script I made to automatically take BTRFS snapshots of my root and /home partitions. I run this daily from a cron job:


date=$(date +"%Y-%m-%d--%H:%M:%S")

echo -e "\e[33mCreating snapshots\033[0m"
btrfs subvol snapshot / $snapshot_dir/root/$date
btrfs subvol snapshot /home $snapshot_dir/home/$date

echo -e "\e[33m$snapshot_dir/root\033[0m"
btrfs subvol list -o $snapshot_dir/root
echo -e "\e[33m$snapshot_dir/home\033[0m"
btrfs subvol list -o $snapshot_dir/home
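For reference, running it daily from cron just needs an entry in root's crontab; the path and time below are assumptions, not from the original post:

```
# run the snapshot script every day at 03:00 (script path is hypothetical)
0 3 * * * /usr/local/sbin/btrfs-snapshot
```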

pfSense config backup

pfSense only allows you to automatically back up your router config if you have a Gold subscription, so I made this script that logs into the admin panel, retrieves the latest config, and downloads it to my desktop using wget.


cd /tmp

wget -qO- --keep-session-cookies --save-cookies cookies.txt \
  --no-check-certificate \
  | grep "name='__csrf_magic'" | sed 's/.*value="\(.*\)".*/\1/' > csrf.txt

wget -qO- --keep-session-cookies --load-cookies cookies.txt \
  --save-cookies cookies.txt --no-check-certificate \
  --post-data "login=Login&usernamefld=admin&passwordfld=$password&__csrf_magic=$(cat csrf.txt)" \
  | grep "name='__csrf_magic'" | sed 's/.*value="\(.*\)".*/\1/' > csrf2.txt

wget --keep-session-cookies --load-cookies cookies.txt --no-check-certificate \
  --post-data "download=download&donotbackuprrd=yes&__csrf_magic=$(head -n 1 csrf2.txt)" \
  -O config-router-`date +%Y%m%d%H%M%S`.xml

mv config-router* $backup

Mount NFS partition

A very long time ago I had issues on Ubuntu automatically mounting my NFS share from my server at bootup using fstab, so I made this script to either run on startup from my rc.local or manually if I needed to mount it after the system had started.



#!/bin/bash

if [[ $EUID -ne 0 ]]; then
    echo "This script must be run as root"
    exit 1
fi

case $1 in
        mount)
            if mountpoint -q $mount_point; then
                echo "$mount_point is active"
            else
                timeout 5 mount $address:/mnt/storage $mount_point
            fi
            ;;
        umount)
            if mountpoint -q $mount_point; then
                umount -f -l $mount_point
            else
                echo "$mount_point is not mounted"
            fi
            ;;
        *)
            echo "Usage $0 {mount|umount}"
            ;;
esac


I got tired of manually compiling and packaging my LineageOS builds, so I made this script to automate it. Nothing too fancy, but it gets the job done :smiley:


CODENAME="h850" # device codename
CERTS_DIR="$HOME/android-certs" # certs directory for signing package
PACKAGE_DIR="$HOME/lineage-package" # package install directory
RELEASE_TOOLS="build/tools/releasetools" # android release tools
ENVFILE="build/envsetup.sh" # environment file
DATE=$(date +"%Y-%m-%d--%H:%M:%S")

source $ENVFILE # setup environment
export USE_CCACHE=1 # enable ccache
export CCACHE_COMPRESS=1 # compress ccache

breakfast $CODENAME # prepare environment for target device
rm -rf out/dist/lineage_$CODENAME* # remove old target files
time mka target-files-package dist # compile and package target files

# generate signed target files
./$RELEASE_TOOLS/sign_target_files_apks -o -d $CERTS_DIR \
	out/dist/lineage_${CODENAME}-target_files-*.zip \
	signed-target_files.zip

# generate install package
./$RELEASE_TOOLS/ota_from_target_files -k $CERTS_DIR/releasekey \
	--block --backup=true \
	signed-target_files.zip \
	signed-ota_update-$DATE.zip

mv signed-target_files.zip $PACKAGE_DIR/signed-target_files-$DATE.zip # move signed target files to package directory
mv signed-ota_update-* $PACKAGE_DIR # move install package to package directory


When using the script above I noticed that my compile times were significantly longer than expected, as I'm a dummy and I forgot to add the ccache variables. I've edited the script to include them.


Very simple script I made that pops up an i3 message bar after a specified amount of time.



#!/bin/bash

TIME=$1
MESSAGE=$2

if [[ -z $1 ]]; then TIME=1s; fi
if [[ -z $2 ]]; then MESSAGE="Your timer is ready"; fi

sleep $TIME && exec i3-nagbar -t warning -m "$MESSAGE"

This allows me to set a timer when it's too late to otherwise set it on my phone, and no more will I burn pizza :smiley:

I can also be extra lazy and automate it further by setting an i3 hotkey for an instant pizza notification:

bindsym $mod+F10 exec ~/bin/timer 15m "Your Pizza is ready"

(The Lazy) #8

I made a neat little Python program that will check for errors. It's mostly due to the fact that I'm too lazy to open up a terminal to find any errors myself.

Right now it only checks for errors in systemd and disk usage, but when I think of more things I'd like checked, I'll add them in too.

Added in support for checking entropy and whether or not there is anything in the trash.
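The Python itself isn't posted, but the checks described can be sketched in a few lines of shell; everything below (commands, thresholds, trash path) is my assumption of what such a checker looks like, not the original program:

```shell
#!/bin/bash
# Hedged sketch of the checks described above; thresholds and paths are assumptions.

# failed systemd units
failed=$(systemctl --failed --no-legend 2>/dev/null | wc -l)
echo "failed systemd units: $failed"

# filesystems above 90% usage
df -P | awk 'NR > 1 && int($5) > 90 { print "low space on " $6 " (" $5 ")" }'

# available entropy
entropy=$(cat /proc/sys/kernel/random/entropy_avail 2>/dev/null || echo 0)
echo "available entropy: $entropy"

# anything in the trash?
trash="$HOME/.local/share/Trash/files"
if [ -d "$trash" ] && [ -n "$(ls -A "$trash" 2>/dev/null)" ]; then
    echo "trash is not empty"
fi
```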


When installing Fedora 28 I forgot to back up my LineageOS container, so I've made a build that automates the creation of a new container, along with a custom init script that sets up the build environment as much as possible. It does a couple of things:

  1. Installs all the required packages needed to compile LineageOS
  2. Creates a user with a pre-defined .bashrc and .profile for the build environment
  3. Downloads and installs the Android repo and platform-tools
  4. Creates and syncs the repository, ready to build LineageOS

I also included my previous docker builds for Flarum and TiddlyWiki as well, because why not.
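As an illustration of the init steps listed above, here is a hedged sketch; the distro tooling, package set, user name, and branch are all assumptions rather than the actual script. The `run` helper prints each step, and clearing `DRYRUN` would actually execute them:

```shell
#!/bin/bash
# Hedged sketch of a container init script; package names, user, and branch
# are assumptions, not the original script.
run() { echo "+ $*"; [ -n "$DRYRUN" ] || "$@"; }
DRYRUN=1   # print the steps instead of executing them

# 1. packages needed to compile LineageOS
run dnf install -y git curl bison flex zip unzip java-1.8.0-openjdk-devel

# 2. unprivileged build user
run useradd -m builder

# 3. the repo tool
run curl -o /usr/local/bin/repo https://storage.googleapis.com/git-repo-downloads/repo
run chmod a+x /usr/local/bin/repo

# 4. initialise and sync the source tree
run su - builder -c 'repo init -u https://github.com/LineageOS/android.git -b lineage-15.1 && repo sync'
```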


Small script I made that rotates the display and touchscreen input device on my laptop so that they are in the correct orientation. The accelerometer doesn't auto-rotate in i3 like it does in GNOME, so I've set up a mode that calls this script and uses the arrow keys to rotate it into whatever orientation I want :smiley:


#!/bin/bash

OUTPUT="eDP-1" # laptop display
DEV="ELAN0732:00 04F3:2B45" # touchscreen input device
xrandr --output $OUTPUT --rotate $1 # rotate display

case $1 in
        left)
            xinput set-prop "$DEV" "Coordinate Transformation Matrix" 0 -1 1 1 0 0 0 0 1
            ;;
        right)
            xinput set-prop "$DEV" "Coordinate Transformation Matrix" 0 1 0 -1 0 1 0 0 1
            ;;
        normal)
            xinput set-prop "$DEV" "Coordinate Transformation Matrix" 1 0 0 0 1 0 0 0 1
            ;;
        inverted)
            xinput set-prop "$DEV" "Coordinate Transformation Matrix" -1 0 1 0 -1 1 0 0 1
            ;;
        *)
            echo "Usage: $0 {left|right|normal|inverted}"
            ;;
esac
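Those nine numbers form a 3x3 homogeneous transform applied to normalized (0 to 1) touch coordinates. The left (90° counter-clockwise) case, for example, maps a touch at (x, y) to (1 - y, x):

```latex
\begin{pmatrix} 0 & -1 & 1 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
=
\begin{pmatrix} 1 - y \\ x \\ 1 \end{pmatrix}
```

The identity matrix is the normal orientation, and the right and inverted matrices follow the same pattern.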


Simple Python script I made to change the brightness of my displays using xrandr in increments of 0.25, depending on which workspace is active (i.e. which monitor has focus).

#!/usr/bin/env python3

import os, sys, subprocess

def main():
    cmd = "xrandr --verbose | grep {} -A 5 | grep Brightness | cut -f 2 -d ' '".format(display())
    brightness = float(subprocess.getoutput(cmd))

    if sys.argv[1] == "+" and brightness < 1:
        brightness += .25
    elif sys.argv[1] == "-" and brightness > 0:
        brightness -= .25
    os.system("xrandr --output {} --brightness {}".format(display(), brightness))

def display():
    cmd = "i3-msg -t get_workspaces | jq '.[] | select(.focused==true).name'"
    cmd = subprocess.getoutput(cmd).replace("\"", "")

    if cmd == "8: ":
        return "DP-1"
    return "DP-2"

if __name__ == "__main__":
    if len(sys.argv) <= 1:
        print("Usage: {} [+|-]".format(sys.argv[0]))
        sys.exit(1)
    main()

and in i3 I just set the bindings as:

bindsym $mod+Shift+F12 exec ~/bin/brightness.py +
bindsym $mod+Shift+F11 exec ~/bin/brightness.py -

so I can now change the brightness levels of my displays on the fly without having to use their horrible OSDs to do it :smiley:


One of the main issues with the script above is that it relies on me never changing the workspace on my second monitor. While this doesn't happen often, in the event that it does it'll change the brightness of my primary display even though the second has focus, so to fix this I made a couple of improvements.

First I decided to get the location of my mouse and, using that, extrapolate which display has focus:

xdotool getmouselocation --shell | head -n -3 | sed 's/[^0-9]*//g'

At first I was going to try and calculate from both the x and y which display has focus, until I realized that all that really matters is whether the x of the mouse is below the width of my left-most display, since the resolutions are combined into Screen 0, so I could just be lazy and do a check such as:

if x < 2560:
    return "DP-1"
return "DP-2"

with 2560 being the width of my display and x the mouse coordinate on that axis.

Here is the new script with the changes:

#!/usr/bin/env python3

import os, sys, subprocess

def main():
    cmd = "xrandr --verbose | grep {} -A 5 | grep Brightness | cut -f 2 -d ' '".format(display())
    brightness = float(subprocess.getoutput(cmd))

    if sys.argv[1] == "+" and brightness < 1:
        brightness += .25
    elif sys.argv[1] == "-" and brightness > 0:
        brightness -= .25
    os.system("xrandr --output {} --brightness {}".format(display(), brightness))

def display():
    cmd = "xdotool getmouselocation --shell | head -n -3 | sed 's/[^0-9]*//g'"
    cmd = subprocess.getoutput(cmd)

    if int(cmd) < 2560:
        return "DP-1"
    return "DP-2"

if __name__ == "__main__":
    if len(sys.argv) <= 1:
        print("Usage: {} [+|-]".format(sys.argv[0]))
        sys.exit(1)
    main()

I plan to improve it further by pulling the display ID from xrandr based on the coordinates so nothing is hard-coded, but that is for another time. This works well enough for my needs for now, and it's a vast improvement over my previous version, with the added benefit that it no longer requires i3.
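For what it's worth, that improvement boils down to parsing the `WIDTHxHEIGHT+X+Y` geometry xrandr reports per output and finding the one whose horizontal range contains the mouse x. A hedged sketch, with sample geometry lines standing in for live `xrandr` output (the output names and geometries are assumptions):

```shell
#!/bin/bash
# Hedged sketch: map a mouse x coordinate to the output whose geometry contains it.
# The two geometry lines are sample data standing in for parsed xrandr output.
x=3000
match=""
while read -r name geom; do
    w=${geom%%x*}        # width before the 'x'
    rest=${geom#*+}      # "XOFF+YOFF"
    xoff=${rest%%+*}     # x offset of this output
    if [ "$x" -ge "$xoff" ] && [ "$x" -lt $((xoff + w)) ]; then
        match=$name
    fi
done <<EOF
DP-1 2560x1440+0+0
DP-2 1920x1080+2560+0
EOF
echo "$match"   # DP-2 for x=3000
```

In the real script the here-document would be fed from xrandr's connected-output lines instead of sample data.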


Kinda simple one, but mother wanted me to print 34 PDF files, and when I asked Windows to print them from the file manager it told me I had to open each one and print them individually, so I transferred them over to Linux and made this small shell script to do it.


for f in *.pdf; do
    cat "$f" | lp -o scaling=100 -o ColorModel=KGray
done

Probably could've found a way to do it on Windows too, but I'm far less familiar with batch scripting, and this way still saves me a lot of time and effort.


@tsk It's a very rough draft but this should do it. It grabs the user list based on the tier level/group you specify inside the api section and checks if each user is inside the group; if not, it will send a PUT API request to the forum and assign the group to that user. The script is in two parts: the first handles the API requests (api.py).

import requests, json

Api_Key = "" # your discourse api key
groupName = "NSFW" # name of the group
tier_level = 3
url = "https://forum.0cd.xyz"

def _url(path):
    return url + path

def get_group():
    return requests.get(_url('/g/{}.json').format(groupName))

def get_group_members():
    return requests.get(_url('/g/{}/members.json').format(groupName))

def get_tl_members():
    return requests.get(_url('/g/trust_level_{}/members.json').format(tier_level))

def add_users(users, gid):
    headers = {'Api-Key': Api_Key, 'content-type':'application/json'}
    return requests.put(_url("/g/{}/members.json").format(gid), data=json.dumps({'usernames': ','.join(users)}), headers=headers)

and the second is the main logic of the program:

#!/usr/bin/env python3

import sys, requests, json, api

def main():
    try:
        resp = api.get_tl_members()
        gid = api.get_group()
        if resp.status_code != 200 or gid.status_code != 200:
            raise ApiError('Cannot fetch data: tl: {} group: {}'.format(resp.status_code, gid.status_code))
        existing = group()  # fetch the current group members once
        member = []
        for users in resp.json()['members']:
            if users['username'] not in existing:
                member.append(users['username'])
        api.add_users(member, gid.json()['group']['id'])
    except (ApiError, requests.exceptions.ConnectionError) as e:
        sys.exit(e)

def group():
    try:
        resp = api.get_group_members()
        if resp.status_code != 200:
            raise ApiError('Cannot fetch data: {}'.format(resp.status_code))
        member = []
        for members in resp.json()['members']:
            member.append(members['username'])
        return member
    except (ApiError, requests.exceptions.ConnectionError) as e:
        sys.exit(e)

class ApiError(Exception): pass

if __name__ == "__main__":
    main()


It should work mostly fine, except there's an issue in that the proper way to handle the request is broken in Discourse, so I'm brute-forcing it by iterating over the users and sending a separate API request for each user. This has a downside in that Discourse has a request limit, so adding lots of users at once will kill the API requests. I haven't done extensive testing, because the only way to do that is in production, but the tested code outside of the main loop works fine.

Fixed most of the issues above. It now makes a single API request with a list of usernames, so there's no chance of hitting the request limit, and it's a lot more efficient. The old way was to add the users one by one from their page on the admin panel, but now it pushes a list using the proper Add user call in the groups API.

(Legend) #15

Massive thanks mate


No problem, it was fun to make. If you end up using it and have any issues, let me know.


Had to do another one of these large prints again and got complained at because they needed to be in reverse order and double-sided, so I fixed it up and included a watch on the printer queue. Thankfully zsh makes it super easy to reverse the order of the file list using the On glob qualifier.


for f in *.pdf(On); do
    cat "$f" | lp -o scaling=100 -o ColorModel=KGray -o sides=two-sided-long-edge
done

watch -n1 lpstat