sudo apt install handbrake
sudo apt install handbrake-cli
snap-store --quit && sudo snap refresh snap-store
you might need to do:
sudo killall snap-store
sudo snap refresh snap-store
conky.config = {
-- CONKY SETTINGS
background = false,
update_interval = 1,
total_run_times = 0,
cpu_avg_samples = 2,
net_avg_samples = 2,
override_utf8_locale = true,
double_buffer = true,
no_buffers = true,
text_buffer_size = 32768,
-- CONKY: WINDOW SPECIFICATIONS
own_window = true,
own_window_argb_value = 192,
own_window_argb_visual = true,
own_window_class = 'conky-semi',
own_window_colour = '#0b0d18',
own_window_hints = 'undecorated,below,sticky,skip_taskbar,skip_pager',
-- own_window_transparent = true,
own_window_transparent = true,
own_window_type = 'normal',
-- own_window_type = 'desktop',
border_inner_margin = 6,
border_width = 5,
minimum_height = 1000,
minimum_width = 280,
maximum_width = 280,
alignment = 'top_right',
gap_x = -1910,
gap_y = 40,
-- GRAPHICS SETTINGS
draw_shades = false,
draw_outline = true,
draw_borders = false,
draw_graph_borders = true,
-- draw_graph_borders = false,
-- TEXT SETTINGS
use_xft = true,
font = 'Montserrat Regular:size=10',
xftalpha = 0.2,
uppercase = false,
-- Pad percentages to this many decimals (0 = no padding)
pad_percents = 2,
-- COLOUR SETTINGS
default_color = 'AAAAAA',
default_shade_color = '333333',
-- default_outline_color = '111111',
default_outline_color = '010101',
color1 = '999999',
color2 = 'CCCCCC',
color3 = 'EEEEEE',
color4 = '444444',
color5 = '555555',
color6 = '666666',
color7 = '3b71a1',
};
conky.text = [[
${alignc}${font Arial:size=20}${color #0040FF}Raspberry Pi OS ${color #0040FF}
${alignc}${color #0040FF}Hostname: ${color #0040FF}${nodename}
${color6}${hr 2}${color}
${alignc}${color1}${font Montserrat Bold:size=20}${time %H:%M:%S}${font}${color}${alignc}
${voffset -9}
${alignc}${font Montserrat Light:size=10}${time %A %d %B %Y}${font}${alignc}
${color6}${hr 2}${color}
${voffset -6}
${font Arial:bold:size=9}${color #00AAFF}System ${color #0000AA}${hr 2}
${color #00AAFF}${font Montserrat Light:size=9}Hostname:${alignr}$font${color #505050}$nodename
${color #00AAFF}${font Montserrat Light:size=9}Uptime:${alignr}$font${color #505050}$uptime_short
${color #00AAFF}${font Montserrat Light:size=9}Kernel:${alignr}$font${color #505050}$kernel ${voffset 5}
${color #00AAFF}${font Montserrat Light:size=9}Distro:${alignr}$font${color #505050}${exec lsb_release -a 2>/dev/null | grep -i description | cut -s -f2}
${color #00AAFF}${font Montserrat Light:size=9}Frequency:${alignr}$font${color #505050}${freq_g cpu0}GHz
${color #00AAFF}${font Montserrat Light:size=9}Uptime:${alignr}$font${color #505050}${uptime}
${font Arial:bold:size=9}${color #00AAFF}CPU ${color #0000AA}${hr 2}
${voffset -15}
${font Montserrat Light:size=9}${color1}CPU TEMP:${color}${font} ${font :size=10}${alignr}${hwmon 1 temp 1}°${color4}C${color}${voffset 5}
${voffset -15}
$font${color #505050}CPU1 ${color #AAAAAA}${cpu cpu1}%${color #505050} ${cpubar cpu1}
CPU2 ${color #AAAAAA}${cpu cpu2}%${color #505050} ${cpubar cpu2}
CPU3 ${color #AAAAAA}${cpu cpu3}%${color #505050} ${cpubar cpu3}
CPU4 ${color #AAAAAA}${cpu cpu4}%${color #505050} ${cpubar cpu4}
${voffset -8}
${font Arial:bold:size=9}${color #00AAFF}Network ${color #0000AA}${hr 2}
${voffset -15}
${font Montserrat Light:size=10}${color1}IP ADDRESS:${color}${font} ${alignr}${addr eth0}
${voffset 5}${font Montserrat Light:size=10}${font}Download ${alignr}${downspeedf eth0}k/s (${totaldown eth0})
${color3}${downspeedgraph eth0 50,280 ADFF2F 32CD32 -t}
${font Montserrat Light:size=10}${font}Upload ${alignr}${upspeedf eth0}k/s (${totalup eth0})
${color3}${upspeedgraph eth0 50,280 FF0000 8B0000 -t}
${font Arial:bold:size=9}${color #00AAFF}HDD ${color #0000AA}${hr 2}
${color1}Used: ${color3}${fs_used /}${color1}${goto 175}Free:${goto 225}${color3}${fs_free /}
${color2}${fs_bar /}
${font Montserrat Light:size=8}${color1}DISK I/O:${color}${font} ${alignr}$diskio
${voffset 2}${font Montserrat Light:size=8}${color1}READ: ${color}${font} ${goto 80}${color4}${diskiograph_read 15,210 ADFF2F 32CD32 750}${color}
${voffset 2}${font Montserrat Light:size=8}${color1}WRITE:${color}${font} ${goto 80}${color4}${diskiograph_write 15,210 FF0000 8B0000 750}${color}
${font Arial:bold:size=9}${color #00AAFF}Memory ${color #0000AA}${hr 2}
$font${color DimGray}RAM $alignc $mem / $memmax $alignr ${color 188f18}$memperc%
${color 188f18}$membar${voffset 1}
${font Arial:bold:size=9}${color #00AAFF}Processes ${color #0000AA}${hr 2}
${voffset -15}
${font :size=10}${font Montserrat Light:size=8}${color1}TOTAL:${color}${font} ${alignr}${processes}
${voffset -10}
${font Montserrat Light:size=9}${color1}APP NAME: ${goto 160}MEMORY: ${goto 245}CPU: ${color}${font}
${voffset -16}
${font Montserrat Light:size=9}${color1}${top_mem name 1} ${color}${font} ${goto 160}${top mem 1} % ${goto 235}${top cpu 1} %
${font Montserrat Light:size=9}${color1}${top_mem name 2} ${color}${font} ${goto 160}${top mem 2} % ${goto 235}${top cpu 2} %
${font Montserrat Light:size=9}${color1}${top_mem name 3} ${color}${font} ${goto 160}${top mem 3} % ${goto 235}${top cpu 3} %
${font Montserrat Light:size=9}${color1}${top_mem name 4} ${color}${font} ${goto 160}${top mem 4} % ${goto 235}${top cpu 4} %
${font Montserrat Light:size=9}${color1}${top_mem name 5} ${color}${font} ${goto 160}${top mem 5} % ${goto 235}${top cpu 5} %
]];
spaces to underscore:
find . -name "* *" -type d | rename -v -f 's/ /_/g'
find . -name "* *" -type f | rename -v -f 's/ /_/g'
change all to lowercase:
find . -depth -exec rename -v -f 's/(.*)\/([^\/]*)/$1\/\L$2/' {} \;
fix "-_" examples:
find . -name "*-_*" -type f | rename -v -f 's/-_/_/g'
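The substitution logic in these rename calls can be previewed in pure bash before touching any files (names below are hypothetical; the Perl rename utility also accepts -n for a dry run):

```shell
# Preview the space-to-underscore substitution (bash 4+)
name="My Cool Album"
echo "${name// /_}"    # -> My_Cool_Album

# Preview the lowercasing applied by the second command
base="PhotoDump"
echo "${base,,}"       # -> photodump
```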
cd to the music folder
yt-dlp --extract-audio --audio-format mp3 (youtube link)
To find out about more options:
yt-dlp --help
sudo systemctl restart systemd-resolved
optional:
install nslookup and dig:
sudo apt install -y dnsutils
test:
nslookup google.com
dig google.com
Ubuntu Software aka Snap Store opens up essentially blank.
Close the Snap Store application
Open the terminal (Ctrl+Alt+T)
Type the command:
ps -ef | grep snap-store | grep -v grep
…to check if the snap-store process is still running.
If you get anything in the output, snap-store is still running. The output may look like:
username 24225 1871 2 21:41 ? 00:00:23 /snap/snap-store/542/usr/bin/snap-store
In that case, type the command:
kill (number)
…where number is the first number shown in the line above, right after the username (in this case 24225).
Repeat step 3 to confirm that snap-store is not running anymore.
Type the command:
snap-store
…and watch the output for "access denied" messages.
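The PID can also be pulled out automatically instead of read by eye; a sketch against the sample line above (the bracketed grep pattern keeps grep from matching its own process on a live system):

```shell
# Extract the PID (second field) from the sample ps -ef line above
line="username 24225 1871 2 21:41 ? 00:00:23 /snap/snap-store/542/usr/bin/snap-store"
pid=$(printf '%s\n' "$line" | awk '{print $2}')
echo "$pid"    # -> 24225

# Against a live system, the one-liner would look like:
#   kill "$(ps -ef | grep '[s]nap-store' | awk '{print $2}')"
```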
#########
On Ubuntu 20.04 LTS the app called "Ubuntu Software" is in reality snap-store (the terminal command running it is "snap-store", not "gnome-software"). Ubuntu 20.04 does not seem to come with gnome-software installed by default. For me, closing the app, then typing
killall snap-store
in a terminal and then re-launching the app solved the problem without a reboot.
The "Files" file manager (Nautilus) is not working on an Ubuntu system. Files either stops working or won't load at all.
try:
sudo apt-get install --reinstall nautilus-data ## this one worked in 22.04
ps awx | grep nautilus
sudo kill (process ID)
sudo apt-get remove nautilus
sudo apt-get autoremove
sudo apt-get clean
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install nautilus
rsync -Pavuc <source> <target>
To see all the fonts on a system:
fc-list |sort
Use Font Manager to import and organize fonts. Once you’ve installed them with FM you can delete the downloads.
Installed fonts are found in:
~/.local/share/fonts/
/usr/share/fonts/
/usr/share/texmf/
ebook-convert gets installed with calibre
in epub folder:
for f in ./*.epub; do ebook-convert "$f" "${f%.epub}.pdf";done
Upload the image you want to use to your site’s media collection.
In your site’s WP editor, go to Appearance > Customize
Choose Site Identity
Select Site Icon down near the bottom of the sidebar (depending on your theme, version, etc.)
Choose your image. You may have to crop it down a bit.
Publish/Update and you’re done.
egrep -c '(vmx|svm)' /proc/cpuinfo  # to confirm virtualization is enabled in BIOS; you need anything greater than zero returned
sudo apt update
sudo apt install -y qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils
sudo apt install -y qemu-kvm
qemu-system-x86_64 --version  ## verify install
sudo apt install -y virtinst libvirt-daemon
sudo systemctl enable --now libvirtd  # enable service
sudo apt install -y virt-manager  ## install the GUI
reboot
When transferring from an old machine:
copy over the qcow files to the appropriate path
restore backups of folder /etc/libvirt/qemu
make sure the paths in the xml files are correct
fix/import the xml files with:
virsh define rhel901.xml
download the latest version from https://wordpress.org/download/releases/
select the tar.gz version
tar -xzvf wordpress-6.2.2.tar.gz -C .
upload the wordpress folder to your website(s)
download Canadian English from
https://en-ca.wordpress.org/download/#download-install
After downloading comic book or anime images as a mix of .webp and .jpg, run the following:
find . -name "* *" -type f | rename -v -f 's/ /_/g' && for f in *.webp; do dwebp "$f" -o "$f.jpg"; done && rename -v 's/\.webp\.jpg/\.jpg/' * && rm *.webp && sudo chown -R jeff:users * && sudo chmod -R 777 *
This will change any spaces to underscores, convert any .webp to .webp.jpg, rename those to just .jpg, and set permissions and ownership.
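The .webp.jpg-to-.jpg rename step can also be done with plain bash parameter expansion, independent of the rename utility (the filename below is hypothetical):

```shell
f="page_01.webp.jpg"
echo "${f%.webp.jpg}.jpg"    # -> page_01.jpg

# Or have dwebp write the final name directly, skipping the rename step:
#   for f in *.webp; do dwebp "$f" -o "${f%.webp}.jpg"; done
```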
Use the compress option in GNOME Files to compress the JPGs into a .zip, then rename the .zip to .cbz
or:
zip bookname.cbz *.jpg
In this example, we'll set up a 16 GB encrypted partition on a separate disk.
Note that this will wipe everything on your partition.
As root, use fdisk to determine which partition you want to use:
# fdisk -l
You could also use lsblk to show your installed drives:
# lsblk
In our case, we’re going to encrypt /dev/sde1, a 16GB USB stick.
First thing you need to do is to come up with a passphrase that you can remember.
Next, we’ll start the encryption process.
In our case, we want:
# cryptsetup -v luksFormat --type=luks2 /dev/sde1
Pay attention to the prompts.
You can check the man page for more options.
Now let’s open the encrypted volume, and give it a name for mapping the partition:
# cryptsetup open /dev/sde1 mypart
Next, give it a filesystem, using ext4:
# mkfs.ext4 /dev/mapper/mypart
Now create a mount point:
# mkdir /mnt/testenc
To mount the encrypted volume, you can use cryptsetup together with the mount command:
# cryptsetup --type luks open /dev/sde1 mypart
# mount -t ext4 /dev/mapper/mypart /mnt/testenc
To close the partition, do the following:
# umount /mnt/testenc
# cryptsetup close mypart
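To have the volume unlocked and mounted automatically at boot (prompting for the passphrase), entries along these lines can be added; this is a sketch, and the UUID is a placeholder for whatever blkid reports for /dev/sde1:

```
# /etc/crypttab  (mapping name, source device, key file, options)
mypart  UUID=<uuid-from-blkid>  none  luks

# /etc/fstab
/dev/mapper/mypart  /mnt/testenc  ext4  defaults  0  2
```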
Raspberry Pi OS occasionally creates weird folders under /root with incomprehensible names:
drwx------ 3 root root 4.0K Jul 7 2022 ''$'\033''[?25h'$'\033''[?7h'/
You can’t easily delete the folder with rm -rf.
First, you need to find the inode to the folder:
# ls -ihalF
total 80K
15 drwx------ 11 root root 4.0K Apr 18 10:42 ./
2 drwxr-xr-x 18 root root 4.0K Jul 21 08:34 ../
524545 drwx------ 3 root root 4.0K Jul 7 2022 ''$'\033''[?25h'$'\033''[?7h'/
So in this case the inode is 524545
Use ‘find’ to get rid of it:
find . -maxdepth 1 -type d -inum 524545 -exec rm -rf {} \;
Folder is now deleted
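The same find -inum trick can be rehearsed safely on a throwaway directory first (the path below is hypothetical; stat -c %i is the GNU coreutils way to print an inode number):

```shell
# Create a scratch directory, find its inode, then delete it by inode
mkdir -p /tmp/badname_demo
ino=$(stat -c %i /tmp/badname_demo)
find /tmp -maxdepth 1 -type d -inum "$ino" -exec rm -rf {} \; 2>/dev/null
[ -d /tmp/badname_demo ] || echo "deleted"    # -> deleted
```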
Just sharing a story of some hacking I’ve been doing over the last few weeks…
Some of you may know, one of my NAS’s (a D-link DNS-322) died a few weeks back.. I was able to save much of the data, but not all. It was a very old box, so I’m not too upset that it died. What data I could salvage I farmed out to my other systems, but now I’m running out of disk space in several places.
As it happens, I have an old Windows Home Server (an Acer Aspire H341) with 4 hot-swappable bays collecting dust. I haven’t used it since I discovered Linux many years ago. The problem with the WHS is that it’s headless, i.e. no video.
So.. I took an old HDD and bought a USB>SATA converter. Originally, I tried to just hook up a live USB and the HDD to a laptop, but the Ubuntu installer insisted on wiping out the EFI partition on the laptop’s HDD. There is apparently some trick to get around that, but what I saw looked pretty scary, so I avoided it.
Next, I tried to use the Raspi disk imager app to load Ubuntu Server onto the HDD. The disk imager is actually a pretty slick app, but in this case wouldn’t work. I think it doesn’t like Atom CPUs..
Next, I remembered that in the garage I have a disk-less desktop that a friend from work asked me to take to the recycle centre (and I will someday!) I took the HDD and plugged it into the desktop via USB, added the live USB stick, wired the desktop up to one of my monitors and hooked up one of those tiny Bluetooth keyboards.. I could then load Ubuntu on to the HDD. Next step was to take said HDD over to the WHS and fire it up from there. The lights did what lights usually do, but for the life of me I couldn’t get the thing to talk to the network. I actually did a wipe and load at one point just to prove myself wrong, but I still couldn’t see the WHS on the network.
Eventually, I got the idea that maybe the built-in NIC (on the Acer) was buggerski. So I bought a USB<>Ethernet adapter and tried to get connectivity that way. Still no dice. I thought maybe the adapter doesn’t get magically picked up by the OS, so I brought the adapter and the HDD back to the desktop computer, found that the adapter was seen, but disabled. So using Netplan, (which I still don’t like), I configured it to talk to the adapter as the primary NIC (learned a lesson about having the gateway listed twice as well). I then ran the whole lot back to the WHS, booted it up and whammo, I can now SSH into it.
Sweet zombie servers, Batman! The WHS lives again!
Now to reorganize all the cables and crap that I scattered all over the place…
The only drawback, I think, is that the WHS only has USB 2, so I'm not going to see gigabit speeds. That'll hurt during the mass file copying, but I can probably live with it for regular duty work. And since there are four bays, I can use LVM to make one big pile for backups and whatnot.
Reboot the system using either hard boot or soft boot.
Interrupt the boot process when the grub kernel choice comes up by using the down arrow key.
Go back to the top line (or other kernel if you prefer) and hit ‘e’ to temporarily edit that kernel’s boot parameters.
Find the line that starts with ‘linux’ (usually the fourth line) and go to the end of the line with the ‘end’ key.
Add the following to the end of the line:
init=/bin/sh
Then hit Ctrl-x to proceed. This should bring you to a blank screen with only ‘sh-5.1#’ (or whatever kernel version)
Enter the following lines:
/usr/sbin/load_policy -i
mount -o remount,rw /
passwd root (then enter your password twice)
/sbin/reboot -f
After the system reboots, test your new root password. If you boot to grub, there may be a different user showing in the login screen; be sure to switch to root before you log in.
Some of the widgets/plugins in the Raspi’s top Panel cause system instability. You’ll have to try to get a terminal to open, or else try to ssh in.
You can delete the user’s lxpanel config file, then reboot. This will recreate the default Panel, which you can now break again…
sudo rm -r ~/.config/lxpanel
From here you can either restart the X session with:
startx
or else reboot:
sudo reboot
A new default Panel will be generated.
This is a new security feature in Network Manager.
Disable the MAC address randomization by creating the file
/etc/NetworkManager/conf.d/100-disable-wifi-mac-randomization.conf
with the content:
[connection]
wifi.mac-address-randomization=1
[device]
wifi.scan-rand-mac-address=no
or:
mv /etc/NetworkManager/conf.d/100-disable-wifi-mac-randomization.conf /etc/NetworkManager/conf.d/100-disable-wifi-mac-randomization.conf.bak
echo "[connection]" >/etc/NetworkManager/conf.d/100-disable-wifi-mac-randomization.conf
echo "wifi.mac-address-randomization=1" >>/etc/NetworkManager/conf.d/100-disable-wifi-mac-randomization.conf
echo -e "\n[device]" >>/etc/NetworkManager/conf.d/100-disable-wifi-mac-randomization.conf
echo "wifi.scan-rand-mac-address=no" >>/etc/NetworkManager/conf.d/100-disable-wifi-mac-randomization.conf
Afterwards you may need to update your device’s MAC in your DHCP server.
Then reboot.
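The echo sequence above can be written as a single heredoc instead; a sketch shown against a scratch path so it can be tried without root (point it at the real /etc/NetworkManager/conf.d path when applying it for real):

```shell
conf=/tmp/100-disable-wifi-mac-randomization.conf   # scratch path for testing
tee "$conf" >/dev/null <<'EOF'
[connection]
wifi.mac-address-randomization=1

[device]
wifi.scan-rand-mac-address=no
EOF
grep -c '=' "$conf"    # -> 2
```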
Command | |
ansible-playbook file.yaml | Run an Ansible playbook called file.yaml |
Authentication options | |
--user, -u <username> | Log in as username |
--private-key, --key-file <key> | Log in using SSH key (usually in ~/.ssh) |
--ssh-extra-args | Pass extra command options to SSH |
--vault-id <id> | Use vault identity ID |
--vault-password-file <key> | Use vault password file key |
--ask-vault-pass | Prompt for a vault password |
--become | Escalate privileges |
--ask-become-pass | Prompt for a password for become |
--become-method | Escalate privilege using a specific method |
ansible-doc --type foo --list | List choices for become, connection, and other Ansible options |
Control options | |
--syntax-check | Verify syntax of a playbook, but do not run it |
--list-hosts | Show hosts listed in a playbook |
--list-tasks | Show tasks defined in a playbook |
--start-at-task <task_name> | Run playbook starting at task name |
--check | Run the playbook but don't make changes |
--diff | Show diffs for what changes are made |
--module-path | Prepend colon-separated path to default path |
--connection <method> | Connect over method |
Playbook and YAML | |
Parameter: value | A YAML mapping entry is a key and a value |
- foo | A YAML sequence entry is an itemized list |
- bar | |
- baz | |
Distro: | A mapping entry can contain a sequence |
- Fedora | |
- RHEL | |
- Debian | |
- Slackware | |
OS: | Sequence items can contain mappings |
- Linux: Fedora | |
- BSD: NetBSD | |
Playbook structure | |
--- | YAML files start with three dashes |
- name: "My play" | Use the name mapping to name your play |
hosts: all | Indent, and define which hosts the play runs on. List target hosts in /etc/ansible/hosts |
tasks: | Open a tasks mapping, which will contain a sequence |
- name: "My task" | Give the task a name with the name mapping |
some_module: | Import a module as a new mapping containing a sequence of parameters. Find required and optional parameters in the module's documentation. |
path: '/example/' | Parameters are usually mappings using the command option as the key and an argument as the value. |
- name: "My other task" | A play may contain more than one task |
other_module: | A task usually imports a module |
foo: 'bar' | |
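Assembled, the structure in the table above gives a playbook like this (the file and debug modules and the paths are illustrative, not from the original notes):

```yaml
---
- name: "My play"
  hosts: all
  tasks:
    - name: "My task"
      ansible.builtin.file:
        path: '/example/'
        state: directory

    - name: "My other task"
      ansible.builtin.debug:
        msg: 'bar'
```

Run it with ansible-playbook file.yaml, optionally passing --syntax-check first to validate it.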
Basics | |
grep [pattern] FILE | |
grep '^[AE].*o' f.txt | Find a string starting with A or E and ending in o |
grep -f pat.txt f.txt | Scan f.txt, using contents of pat.txt as regex |
grep -i Gnu f.txt | Find "gnu" in f.txt, ignoring capitalization |
grep -v gnu f.txt | Find all lines not containing "gnu" (invert match) |
grep -w 'a.*o' f.txt | Find whole word matches only, ignoring substrings |
grep -x 'a.*o' f.txt | Find whole line matches only, as in ^(a.*o)$ |
Output | |
-c | Print only the number of lines containing a match |
--colo[u]r | Display matches in color |
-l | Print the names of files with matches |
-L | Print the names of files searched that contained no matches |
-o | Print only the matched part of a line |
-s | Suppress errors (such as non-existent or unreadable files) |
-A n | Print n number of lines after a matching line |
-B n | Print n number of lines before a matching line |
-C n | Print n number of lines before and after a matching line |
Output prefixes | |
-b | Print the byte offset of the match within the input file |
-H | Print the filename containing a match |
-h | Do not print the filename containing a match |
-n | Print the line number of each match |
-T | Print an initial Tab before matches so that output is neatly aligned |
File and directory selection | |
-a | Process a binary file as if it were text |
-D <skip|read> | Skip or read a FIFO, device, or socket |
-d <skip|read|recurse> | Skip, read, or recurse through a directory |
--exclude '*.sh' | Do not search any file with the .sh suffix |
--exclude-from FILE | Skip any file listed in FILE |
--exclude-dir *foo | Skip any directory ending in foo |
-r | When a directory is encountered, search files in it |
-R | Search directories and follow symlinks |
Variants | |
-G | Use basic regex (this is the default) |
-E | Extended regex |
-F | Interpret the search pattern as a fixed string, not regex |
-P | Use Perl regex (PCRE) |
Regular expression | |
. | Any single character |
? | Match preceding item zero or one time |
* | Match preceding item zero or more times |
+ | Match preceding item one or more times |
{2} | Match preceding item two times |
{3,} | Match preceding item three or more times |
{,4} | Match preceding item at most four times |
{1,5} | Match preceding item at least once, but no more than five times |
[AB] | Match A or B |
[3-9] | Match all digits 3 to 9 |
^ | Start of a line |
$ | End of a line |
\s | Space |
[:alnum:] | Alphanumeric character |
[:alpha:] | Alphabetic character |
[:digit:] | Digits 0 through 9 |
[:punct:] | Punctuation |
[:space:] | Space |
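A few of the flags above, exercised against a small sample file (the contents are made up for illustration):

```shell
printf 'GNU grep\nwe love gnu\nall systems go\n' > /tmp/f.txt
grep -c  gnu /tmp/f.txt            # -> 1  (count matching lines, case-sensitive)
grep -ci gnu /tmp/f.txt            # -> 2  (ignore case)
grep -vc gnu /tmp/f.txt            # -> 2  (count lines NOT matching)
grep -oi 'g.u' /tmp/f.txt          # -> GNU and gnu (print only the matched parts)
grep -E '^[Aa].*o$' /tmp/f.txt     # -> all systems go (extended regex with anchors)
```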
Logical Volume Manager (LVM)
This is a quick and dirty cheat sheet on LVM under Linux. I have highlighted many of the common attributes for each command; however, this is not an exhaustive list, so make sure you look up the command.
With the pvs, vgs and lvs commands, the more -v flags you add, the more verbose the output, for example pvs -vvvvv.
Directory and Files | |
Directories and Files | ## Directories /etc/lvm – default lvm directory location /etc/lvm/backup – where the automatic backups go /etc/lvm/cache – persistent filter cache /etc/lvm/archive – where automatic archives go after a volume group change /var/lock/lvm – lock files to prevent metadata corruption # Files /etc/lvm/lvm.conf – main lvm configuration file $HOME/.lvm – lvm history |
Tools | |
diagnostic | lvmdump lvmdump -d dmsetup [info|ls|status] Note: by default the lvmdump command creates a tar ball |
Physical Volumes | |
display | pvdisplay -v pvs -v pvs -a pvs --segments (see the disk segments used) pvs attributes are: 1. (a)llocatable 2. e(x)ported |
scanning | pvscan -v Note: scans for disks for non-LVM and LVM disks |
adding | pvcreate /dev/sdb1 ## Create physical volume with specific UUID, used to recover volume groups (see miscellaneous section) pvcreate --uuid <UUID> /dev/sdb1 Common Attributes that you may want to use: -M2 create a LVM2 physical volume |
removing | pvremove /dev/sdb1 |
checking | pvck -v /dev/sdb1 Note: check the consistency of the LVM metadata |
change physical attributes | ## do not allow allocation of extents on this drive; however, the partition must be in a vg otherwise you get an error pvchange -x n /dev/sdb1 Common Attributes that you may want to use: --addtag add a tag -x allowed to allocate extents -u change the uuid |
moving | pvmove -v /dev/sdb2 /dev/sdb3 Note: moves any used extents from this volume to another volume, in readiness to remove that volume. However, you cannot use this on mirrored volumes; you must convert back to non-mirror using "lvconvert -m 0" |
Volume Groups | |
display | vgdisplay -v vgs -v vgs -a -o +devices vgs flags: #PV – number of physical devices #LV – number of configured volumes vgs attributes are: 1. permissions (r)|(w) 2. resi(z)eable 3. e(x)ported 4. (p)artial 5. allocation policy – (c)ontiguous, c(l)ing, (n)ormal, (a)nywhere, (i)nherited 6. (c)luster |
scanning | vgscan -v |
creating | vgcreate VolData00 /dev/sdb1 /dev/sdb2 /dev/sdb3 vgcreate VolData00 /dev/sdb[123] ## Use 32MB extent size vgcreate VolData00 -s 32 /dev/sdb1 Common Attributes that you may want to use: -l maximum logical volumes -p maximum physical volumes -s physical extent size (default is 4MB) -A autobackup |
extending | vgextend VolData00 /dev/sdb3 |
reducing | vgreduce VolData00 /dev/sdb3 vgreduce --removemissing --force VolData00 |
removing | vgremove VolData00 Common Attributes that you may want to use: -f force the removal of any logical volumes |
checking | vgck VolData00 Note: check the consistency of the LVM metadata |
change volume attributes | vgchange -a n VolData00 Common Attributes that you may want to use: -a control availability of volumes within the group -l maximum logical volumes -p maximum physical volumes -s physical extent size (default is 4MB) -x resizable yes or no (see VG status in vxdisplay) |
renaming | vgrename VolData00 Data_Vol_01 note: the volume group must not have any active logical volumes |
converting metadata type | vgconvert -M2 VolData00 Note: vgconvert allows you to convert from one type of metadata format to another for example from LVM1 to LVM2, LVM2 offers bigger capacity, clustering and mirroring |
merging | # the old volume group will be merged into the new volume group vgmerge New_Vol_Group Old_Vol_Group Note: you must unmount any filesystems and deactivate the vg that is being merged ("vgchange -a n"), then you can activate it again afterwards ("vgchange -a y"), then perform a vgscan; don't forget to backup the configuration |
splitting | vgsplit Old_Vol_Group New_Vol_Group [physical volumes] [-n logical volume name] |
importing | vgimport VolData00 Common Attributes that you may want to use: -a import all exported volume groups |
exporting | ## to see if a volume has already been exported use "vgs" and look at the third attribute, which should be an x vgexport VolData00 Common Attributes that you may want to use: -a export all inactive volume groups |
backing up | ## Backup to default location (/etc/lvm/backup) vgcfgbackup VolData00 ## Backup to specific location vgcfgbackup -f /var/backup/VolData00_bkup VolData00 ## Backup all volume groups to a specific location (notice the %s) vgcfgbackup -f /var/backup/vg_backups_%s Note: the backups are written in plain text and are by default located in /etc/lvm/backup |
restoring | vgcfgrestore -f /var/backup/VolData00_bkup VolData00 Common Attributes that you may want to use: -l list backups of file -f backup file -M metadata type 1 or 2 |
cloning | vgimportclone /dev/sdb1 Note: used to import and rename duplicated volume group |
special files | vgmknodes VolData00 Note: recreates volume group directory and logical volume special files in /dev |
Logical Volumes | |
display | lvdisplay -v lvdisplay --maps (display mirror volumes) lvs -v lvs -a -o +devices ## lvs commands for mirror volumes lvs -a -o +devices lvs -a -o +seg_pe_ranges --segments ## Stripe size lvs -v --segments lvs -a -o +stripes,stripesize ## use complex command lvs -a -o +devices,stripes,stripesize,seg_pe_ranges --segments lvs attributes are: 1. volume type: (m)irrored, (M)irrored without initial sync, (o)rigin, (p)vmove, (s)napshot, invalid (S)napshot, (v)irtual, mirror (i)mage, mirror (I)mage out-of-sync, under (c)onversion 2. permissions: (w)rite, (r)ead-only 3. allocation policy: (c)ontiguous, c(l)ing, (n)ormal, (a)nywhere, (i)nherited 4. fixed (m)inor 5. state: (a)ctive, (s)uspended, (I)nvalid snapshot, invalid (S)uspended snapshot, mapped (d)evice present without tables, mapped device present with (i)nactive table 6. device (o)pen (mounted in other words) |
scanning | lvscan -v lvmdiskscan |
creating | ## plain old volume lvcreate -L 10M VolData00 ## plain old volume but use extents, use 10 4MB extents (if extent size is 4MB) lvcreate -l 10 VolData00 ## plain old volume but with a specific name web01 lvcreate -L 10M -n web01 VolData00 ## plain old volume but on a specific disk lvcreate -L 10M VolData00 /dev/sdb1 ## a striped volume called lvol1 (note: -i is the number of stripes, -I the stripe size), can use -l (extents) instead of -L lvcreate -i 3 -L 24M -n lvol1 vg01 ## Mirrored volume lvcreate -L 10M -m1 -n data01 vg01 ## Mirrored volume without a mirror log file lvcreate -L 10M -m1 --mirrorlog core -n data01 vg01 Common Attributes that you may want to use: -L size of the volume [kKmMgGtT] -l number of extents -C contiguous [y|n] -i stripes -I stripe size -m mirrors --mirrorlog -n volume name |
extending | lvextend -L 20M /dev/VolData00/vol01 Common Attributes that you may want to use: -L size of the volume [kKmMgGtT] -l number of extents -C contiguous [y|n] -i stripes -I stripe size Note: you can extend an ext2/ext3 filesystem using the "resize2fs" or "fsadm" command fsadm resize /dev/VolData01/data01 resize2fs -p /dev/mapper/VolData01-data01 [size] The -p option displays bars of progress while extending the filesystem |
reducing/resizing | lvreduce -L 5M /dev/VolData00/vol01 lvresize -L 5M /dev/VolData00/vol01 Note: rounding will occur when extending and reducing volumes to the next extent (4MB by default); you can use resize2fs or fsadm to shrink the filesystem fsadm resize /dev/VolData01/data01 [size] resize2fs -p /dev/mapper/VolData01-data01 [size] |
removing | lvremove /dev/VolData00/vol01 |
adding a mirror to a non-mirrored volume | lvconvert -m1 --mirrorlog core /dev/VolData00/vol01 /dev/sdb2 Note: you can also use the above command to remove an unwanted log |
removing a mirror from a mirrored volume | lvconvert -m0 /dev/VolData00/vol01 /dev/sdb2 Note: the disk in the command is the one you want to remove |
Mirror a volume that has stripes | lvconvert --stripes 3 -m1 --mirrorlog core /dev/VolData00/data01 /dev/sdd1 /dev/sde1 /dev/sdf1 |
change volume attributes | lvchange -a n /dev/VolData00/vol01 Common Attributes that you may want to use: -a availability -C contiguous [y|n] |
renaming | lvrename /dev/VolData00/vol_old /dev/VolData00/vol_new |
snapshotting | lvcreate --size 100M --snapshot --name snap /dev/vg01/data01 |
Miscellaneous | |
Simulating a disk failure | dd if=/dev/zero of=/dev/sdb2 count=10 |
repairing a failed mirror with no LVM corruption | ## check volume; presume /dev/sdb2 has failed lvs -a -o +devices ## remove the failed disk from the volume (if not already done so); this will convert the volume into a non-mirrored volume vgreduce --removemissing --force VolData00 ## replace the disk physically, remember to partition it with type 8e fdisk /dev/sdb …….. ## add new disk to LVM pvcreate /dev/sdb2 ## add the disk back into the volume group vgextend VolData00 /dev/sdb2 ## mirror up the volume lvconvert -m1 --mirrorlog core /dev/VolData00/vol02 /dev/sdb2 |
corrupt LVM metadata without replacing drive | ## attempt to bring the volume group online vgchange -a y VolData00 ## Restore the LVM configuration vgcfgrestore VolData00 ## attempt to bring the volume group online vgchange -a y VolData00 ## file system check e2fsck /dev/VolData00/data01 |
corrupt LVM metadata but replacing the faulty disk | ## attempt to bring the volume group online, but you get UUID conflict errors; make a note of the UUID number vgchange -a y VolData00 vgchange -a n VolData00 ## sometimes it may only be a logical volume problem lvchange -a y /dev/VolData00/web02 lvchange -a n /dev/VolData00/web02 ## replace the disk physically, remember to partition it with type 8e fdisk /dev/sdb …….. ## after replacing the faulty drive the disk must have the previous UUID number, or you can get it from the /etc/lvm directory pvcreate --uuid <previous UUID number taken from above command> /dev/sdb2 ## Restore the LVM configuration vgcfgrestore VolData00 ## attempt to bring the volume group or logical volume online vgchange -a y VolData00 lvchange -a y /dev/VolData00/web02 ## file system check e2fsck /dev/VolData00/data01 Note: if you have backed up the volume group configuration you can obtain the UUID number from the backup file, by default located in /etc/lvm/backup, or by running "pvs -v" |
The Linux ip command is similar to ifconfig, but more powerful and intended as a replacement for it. With ip you have the advantage of performing several network administration tasks with only one command.
In this tutorial, we are going to discuss the 15 most common uses for the 'ip' command, so let's get going.
If you run the ip link show command, it will list all available network interfaces on your server.
$ ip link show
Example output:
$ ip link show
1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s3: mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:64:ab:7a brd ff:ff:ff:ff:ff:ff
To get in-depth information about your network interfaces, such as the IP address and MAC address, use the following command:
$ ip addr show
The output will display the currently assigned IP configuration for all network interfaces.
$ ip addr show
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:64:ab:7a brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.8/24 brd 192.168.100.255 scope global enp0s3
       valid_lft forever preferred_lft forever
    inet 192.168.100.20/24 scope global secondary enp0s3:1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe64:ab7a/64 scope link
       valid_lft forever preferred_lft forever
The "up" flag with an interface name (enp0s3) enables a network interface. For example, the following command activates the enp0s3 network interface.
$ sudo ip link set enp0s3 up
The "down" flag with an interface name disables a network interface. For example, the following command deactivates the enp0s3 network interface.
$ sudo ip link set enp0s3 down
To assign an IP address to an interface, we will use:
$ sudo ip addr add 192.168.100.4/255.255.255.0 dev enp0s3
We can also set the broadcast address of an interface with the ip command. By default no broadcast address is set, so the command to set one is:
$ sudo ip addr add broadcast 192.168.100.255 dev enp0s3
We can also set the standard broadcast address along with the IP address by using the following command:
$ sudo ip addr add 192.168.100.4/24 brd + dev enp0s3
As shown in the above example, we can also use 'brd' in place of 'broadcast' to set the broadcast address.
The following command will remove an assigned IP address from the given interface:
$ sudo ip addr del 192.168.100.4/24 dev enp0s3
To add an alias, i.e. assign more than one IP address to an interface, execute the command below:
$ sudo ip addr add 192.168.100.20/24 dev enp0s3 label enp0s3:1
Example outputs:
$ ip addr show enp0s3
2: enp0s3: mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:64:ab:7a brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.8/24 brd 192.168.100.255 scope global enp0s3
       valid_lft forever preferred_lft forever
    inet 192.168.100.20/24 scope global secondary enp0s3:1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe64:ab7a/64 scope link
       valid_lft forever preferred_lft forever
Checking routing information shows us the route a packet will take to reach the destination. To check the network routing information, execute the following command:
$ ip route show
Example output:
$ ip route show
default via 192.168.100.1 dev enp0s3 proto static
192.168.100.0/24 dev enp0s3 proto kernel scope link src 192.168.100.8
Suppose you have an IP address and need to know which route packets will take to reach it. You can use the route option as follows:
$ ip route get 192.168.100.4
To change the default route, the ip command can be used as follows:
$ sudo ip route add default via 192.168.100.150
Now all network packets will travel via 192.168.100.150 instead of the old default route. To route a specific destination through a given gateway on a particular interface, execute:
$ sudo ip route add 10.0.0.2 via 192.168.100.150 dev enp0s3
To remove assigned static route, simply type the following command.
$ sudo ip route del 10.0.0.2
A default gateway can be specified globally or in an interface-specific config file; this matters when more than one NIC is present in the system. You can add a default gateway on the fly as shown in the command below.
sudo ip route add default via 192.168.100.1
The ip command can also be used to show the statistics of the various network interfaces. To do this you can use the ip command with the option -s and then specify the network device.
$ ip -s link
When you need information about a particular network interface, add the option ls followed by the name of the interface. Using the option -s more than once gives you more detail about that interface. This can be very useful when troubleshooting network connectivity errors.
ip -s -s link ls enp0s3
Address Resolution Protocol (ARP) is used to translate an IP address to its corresponding physical address, commonly known as the MAC address. With the ip command you can view the MAC addresses of the devices connected to your LAN by using the option neigh or neighbour.
$ ip neigh
It is also possible to view netlink messages with the ip command. The monitor option allows you to see the state of your network devices. For instance, a computer on your LAN could be categorized as REACHABLE or STALE depending on its status. The command can be used as follows:
$ ip monitor all
Example output:
$ ip monitor all
[NEIGH]192.168.100.4 dev enp0s3 lladdr c8:f7:33:de:30:d7 REACHABLE
[NEIGH]192.168.100.4 dev enp0s3 lladdr c8:f7:33:de:30:d7 STALE
[NEIGH]192.168.100.4 dev enp0s3 lladdr c8:f7:33:de:30:d7 PROBE
[NEIGH]192.168.100.4 dev enp0s3 lladdr c8:f7:33:de:30:d7 REACHABLE
If you want to find an option that is not listed in the above examples, consult the manual.
$ man ip
The ip command is a must-have tool for network administrators and all Linux users alike. It is time to move on from ifconfig, especially when you are writing scripts.
add an alias:
alias df='df -x"squashfs"'
ssh to nas, elevate to root
cd /volume1/video/
find . -name "@eaDir" -type d ## to see how many there are
find . -name "@eaDir" -type d -exec /bin/rm -rf {} + ## remove the folders
enable copy/paste to/from host:
sudo apt install spice-vdagent
/etc/wgetrc: default location of the global startup file.
.wgetrc: user startup file.
# How to Download a Website Using wget
wget -r www.dlsite.com
#This downloads the pages recursively up to a maximum of 5 levels deep.
#Five levels deep might not be enough to get everything from the site. You can use the -l switch to set the number of levels you wish to go to as follows:
wget -r -l10 www.dlsite.com
#If you want infinite recursion you can use the following:
wget -r -l inf www.dlsite.com
# How to Download Certain File Types
wget -A "*.mp3" -r
#The reverse of this is to ignore certain files. Perhaps you don't want to download executables. In this case, you would use the following syntax:
wget -R "*.exe" -r
#Other Parameters
-b, --background Go to background immediately after startup. If no output file is specified via -o, output is redirected to wget-log.
-o logfile, --output-file=logfile Log all messages to logfile. The messages are normally reported to standard error.
-a logfile, --append-output=logfile Append to logfile. This option is the same as -o, only it appends to logfile instead of overwriting the old log file. If logfile does not exist, a new file is created.
-q, --quiet Turn off wget's output.
-v, --verbose Turn on verbose output, with all the available data. The default output is verbose.
-nv, --non-verbose Non-verbose output. Turn off verbose without being completely quiet (use -q for that), which means that error messages and basic information still get printed.
-i file, --input-file=file Read URLs from a local or external file. If "-" is specified as file, URLs are read from the standard input. (Use "./-" to read from a file literally named "-".)
-F, --force-html When input is read from a file, force it to be treated as an HTML file. This enables you to retrieve relative links from existing HTML files on your local disk, by adding to HTML, or using the --base command-line option.
-t number, --tries=number Set number of retries to number. Specify 0 or inf for infinite retrying. The default is to retry 20 times, with the exception of fatal errors like "connection refused'' or "not found'' (404), which are not retried.
-O file, --output-document=file The documents will not be written to the appropriate files, but all will be concatenated together and written to file.
-c, --continue Continue getting a partially-downloaded file. This option is useful when you want to finish up a download started by a previous instance of wget, or by another program. For instance: wget -c ftp://dlsite/filename
--progress=type Select the progress indicator you want to use. Legal indicators are "dot" and "bar".
-N, --timestamping Turn on time stamping. Output file will have timestamp matching remote copy; if file already exists locally, and remote file is not newer, no download will occur.
--no-use-server-timestamps Don't set the local file's timestamp by the one on the server.
-S, --server-response Print the headers sent by HTTP servers and responses sent by FTP servers.
--spider When invoked with this option, wget will behave as a web spider, which means that it will not download the pages, just check that they are there. For example, you can use wget to check your bookmarks: wget --spider --force-html -i bookmarks.html
-T seconds, --timeout=seconds Set the network timeout to seconds seconds. This option is equivalent to specifying --dns-timeout, --connect-timeout, and --read-timeout, all at the same time.
--dns-timeout=seconds Set the DNS lookup timeout to seconds seconds. DNS lookups that don't complete within the specified time will fail. By default, there is no timeout on DNS lookups, other than that implemented by system libraries.
--connect-timeout=seconds Set the connect timeout to seconds seconds. TCP connections that take longer to establish will be aborted. By default, there is no connect timeout, other than that implemented by system libraries.
--read-timeout=seconds Set the read (and write) timeout to seconds seconds. Reads that take longer will fail. The default value for read timeout is 900 seconds.
--limit-rate=amount Limit the download speed to amount bytes per second. The amount may be expressed in bytes, kilobytes (with the k suffix), or megabytes (with the m suffix). For example, --limit-rate=20k will limit the retrieval rate to 20 KB/s. This option is useful when, for whatever reason, you don't want wget to consume the entire available bandwidth.
-w seconds, --wait=seconds Wait the specified number of seconds between the retrievals. Use of this option is recommended, as it lightens the server load by making the requests less frequent. Instead of in seconds, the time can be specified in minutes using the m suffix, in hours using h suffix, or in days using d suffix.
--waitretry=seconds If you don't want wget to wait between every retrieval, but only between retries of failed downloads, you can use this option. wget will use linear backoff, waiting 1 second after the first failure on a given file, then waiting 2 seconds after the second failure on that file, up to the maximum number of seconds you specify. Therefore, a value of 10 will actually make wget wait up to (1 + 2 + ... + 10) = 55 seconds per file. By default, wget will assume a value of 10 seconds.
--random-wait Some websites may perform log analysis to identify retrieval programs such as wget by looking for statistically significant similarities in the time between requests. This option causes the time between requests to vary between 0 and 2*wait seconds, where wait was specified using the --wait option, to mask wget's presence from such analysis.
--no-dns-cache Turn off caching of DNS lookups. Normally, wget remembers the addresses it looked up from DNS so it doesn't have to repeatedly contact the DNS server for the same (typically small) set of addresses it retrieves. This cache exists in memory only; a new wget run will contact DNS again.
--retry-connrefused Consider "connection refused" a transient error and try again. Normally wget gives up on a URL when it is unable to connect to the site because failure to connect is taken as a sign that the server is not running at all and that retries would not help. This option is for mirroring unreliable sites whose servers tend to disappear for short periods of time.
--user=user, --password=password Specify the username user and password for both FTP and HTTP file retrieval. These parameters can be overridden using the --ftp-user and --ftp-password options for FTP connections and the --http-user and --http-password options for HTTP connections.
--ask-password Prompt for a password for each connection established. Cannot be specified when --password is being used, because they are mutually exclusive.
--unlink Force wget to unlink file instead of clobbering existing file. This option is useful for downloading to the directory with hardlinks.
-nd, --no-directories Do not create a hierarchy of directories when retrieving recursively. With this option turned on, all files will get saved to the current directory, without clobbering (if a name shows up more than once, the file names will get extensions .n).
-x, --force-directories The opposite of -nd; create a hierarchy of directories, even if one would not have been created otherwise. For example, wget -x http://fly.srk.fer.hr/robots.txt will save the downloaded file to fly.srk.fer.hr/robots.txt.
-nH, --no-host-directories Disable generation of host-prefixed directories. By default, invoking wget with -r http://dlsite/ will create a structure of directories beginning with dlsite/. This option disables such behaviour.
--protocol-directories Use the protocol name as a directory component of local file names. For example, with this option, wget -r http://host will save to http/host/... rather than just to host/....
--cut-dirs=number Ignore number directory components. This option is useful for getting a fine-grained control over the directory where recursive retrieval will be saved.
--http-user=user, --http-passwd=password Specify the username user and password on an HTTP server. According to the challenge, wget will encode them using either the "basic" (insecure) or the "digest" authentication scheme.
--ignore-length Unfortunately, some HTTP servers (CGI programs, to be more precise) send out bogus "Content-Length" headers, which makes wget start to bray like a stuck pig, as it thinks not all the document was retrieved. You can spot this syndrome if wget retries getting the same document again and again, each time claiming that the (otherwise normal) connection has closed on the very same byte. With this option, wget ignores the "Content-Length" header, as if it never existed.
--private-key=file Read the private key from file. This option allows you to provide the private key in a file separate from the certificate.
--private-key-type=type Specify the type of the private key. Accepted values are PEM (the default) and DER.
-r, --recursive Turn on recursive retrieving.
-l depth, --level=depth Specify recursion maximum depth level depth. The default maximum depth is 5.
-K, --backup-converted When converting a file, backup the original version with an .orig suffix. Affects the behavior of -N.
-m, --mirror Turn on options suitable for mirroring. This option turns on recursion and time-stamping, sets infinite recursion depth and keeps FTP directory listings. It is currently equivalent to -r -N -l inf -nr.
-p, --page-requisites This option causes wget to download all the files that are necessary to properly display a given HTML page, including such things as inlined images, sounds, and referenced stylesheets. Ordinarily, when downloading a single HTML page, any requisite documents that may be needed to display it properly are not downloaded. Using -r together with -l can help, but since wget does not ordinarily distinguish between external and inlined documents, one is generally left with "leaf documents" that are missing their requisites.
-A acclist, --accept acclist; -R rejlist, --reject rejlist Specify comma-separated lists of file name suffixes or patterns to accept or reject. Note that if any of the wildcard characters, *, ?, [ or ], appear in an element of acclist or rejlist, it will be treated as a pattern, rather than a suffix.
-D domain-list, --domains=domain-list Set domains to be followed. domain-list is a comma-separated list of domains. Note that it does not turn on -H.
--exclude-domains domain-list Specify the domains that are not to be followed.
--follow-ftp Follow FTP links from HTML documents. Without this option, wget will ignore all the FTP links.
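Putting a few of these options together, a polite recursive mirror of the example site used above might look like this (the wait and rate-limit values are arbitrary choices, not defaults):

```shell
# Mirror the site (-m = -r -N -l inf), fetch page requisites,
# wait 2s between requests with randomized jitter, and cap bandwidth
wget -m -p -w 2 --random-wait --limit-rate=200k www.dlsite.com
```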
How to add Swap File
Follow these steps to add 1GB of swap to your server. If you want to add 2GB instead of 1 GB, replace 1G with 2G.
Create a file that will be used for swap:
sudo fallocate -l 1G /swapfile
If fallocate is not installed or if you get an error message saying fallocate failed: Operation not supported then you can use the following command to create the swap file:
sudo dd if=/dev/zero of=/swapfile bs=1024 count=1048576
Only the root user should be able to write and read the swap file. To set the correct permissions type:
sudo chmod 600 /swapfile
Use the mkswap utility to set up the file as Linux swap area:
sudo mkswap /swapfile
Enable the swap with the following command:
sudo swapon /swapfile
To make the change permanent, open the /etc/fstab file and append the following line:
/swapfile swap swap defaults 0 0
To verify that the swap is active, use either the swapon or the free command as shown below:
sudo swapon --show
NAME      TYPE SIZE   USED   PRIO
/swapfile file 1024M  507.4M -1
sudo free -h
              total   used   free   shared  buff/cache  available
Mem:           488M   158M    83M     2.3M        246M       217M
Swap:          1.0G   506M   517M
How to adjust the swappiness value
Swappiness is a Linux kernel property that defines how often the system will use the swap space. Swappiness can have a value between 0 and 100. A low value makes the kernel try to avoid swapping whenever possible, while a higher value makes the kernel use the swap space more aggressively.
The default swappiness value is 60. You can check the current swappiness value by typing the following command:
cat /proc/sys/vm/swappiness
Output:60
While the swappiness value of 60 is OK for most Linux systems, for production servers, you may need to set a lower value.
For example, to set the swappiness value to 10, you would run:
sudo sysctl vm.swappiness=10
To make this parameter persistent across reboots append the following line to the /etc/sysctl.conf file:
vm.swappiness=10
The optimal swappiness value depends on your system workload and how the memory is being used. You should adjust this parameter in small increments to find an optimal value.
How to remove Swap File
If for any reason you want to deactivate and remove the swap file, follow these steps:
First, deactivate the swap by typing:
sudo swapoff -v /swapfile
Remove the swap file entry /swapfile swap swap defaults 0 0 from the /etc/fstab file.
Finally, delete the actual swapfile file using the rm command:
sudo rm /swapfile
install exiftool using:
sudo apt install libimage-exiftool-perl
run with:
exiftool '-filename<%f_$imagesize.%e' (images folder)
background yes
use_xft yes
xftfont 123:size=8
xftalpha 0.1
update_interval 0.5
total_run_times 0
own_window yes
own_window_type normal
own_window_transparent yes
own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager
double_buffer yes
minimum_size 250 5
maximum_width 400
draw_shades no
draw_outline no
draw_borders no
draw_graph_borders no
default_color gray
default_shade_color red
default_outline_color green
alignment top_right
gap_x 10
gap_y 10
no_buffers yes
uppercase no
cpu_avg_samples 2
net_avg_samples 1
override_utf8_locale no
use_spacer right
TEXT
${font Arial:size=20}${color Tan1}Raspberry Pi OS
${color orange}Hostname: ${color}${nodename}
${voffset -90}
${color DimGray}
${font}
${font Arial:bold:size=10}${color Tan1}SYSTEM ${color DarkSlateGray} ${hr 2}
$font${color DimGray}$sysname $kernel $alignr $machine
Frequency $alignr${freq_g cpu0}Ghz
Uptime $alignr${uptime}
File System $alignr${fs_type}
${font Arial:bold:size=10}${color Tan1}CPU ${color DarkSlateGray}${hr 2}
$font${color DimGray}Temp: $alignr ${exec /opt/vc/bin/vcgencmd measure_temp | cut -c6-9} C
$font${color DimGray}CPU1 ${cpu cpu1}% ${cpubar cpu1}
CPU2 ${cpu cpu2}% ${cpubar cpu2}
CPU3 ${cpu cpu3}% ${cpubar cpu3}
CPU4 ${cpu cpu4}% ${cpubar cpu4}
${cpugraph DimGray DarkSlateGray} $color
${font Arial:bold:size=10}${color Tan1}MEMORY ${color DarkSlateGray}${hr 2}
$font${color DimGray}MEM $alignc $mem / $memmax $alignr $memperc%
$membar
$font${color DimGray}SWAP $alignc $swap / $swapmax $alignr $swapperc%
$swapbar
${font Arial:bold:size=10}${color Tan1}HDD ${color DarkSlateGray}${hr 2}
$font${color DimGray}/home $alignc ${fs_used /home} / ${fs_size /home} $alignr ${fs_free_perc /home}%
${fs_bar /home}
${font Arial:bold:size=10}${color Tan1}TOP PROCESSES ${color DarkSlateGray}${hr 2}
${color DimGray}$font${top_mem name 2}${alignr}${top mem 2} %
$font${top_mem name 3}${alignr}${top mem 3} %
$font${top_mem name 4}${alignr}${top mem 4} %
$font${top_mem name 5}${alignr}${top mem 5} %
${font Arial:bold:size=10}${color Tan2}NETWORK ${color DarkSlateGray}${hr 2}
$font${color DimGray}IP on eth0 $alignr ${addr eth0}
Down $alignr ${downspeed eth0} kb/s
Up $alignr ${upspeed eth0} kb/s
Downloaded: $alignr ${totaldown eth0}
Uploaded: $alignr ${totalup eth0}
Newer versions of Ubuntu Desktop have removed the capability to set the background to a solid colour. You may run the following command in a terminal window to accomplish this task:
To clear the current image setting:
> gsettings set org.gnome.desktop.background picture-uri ""
To set the primary colour (the first colour in a gradient or the solid colour):
> gsettings set org.gnome.desktop.background primary-color '#000000'
To set the secondary colour (the second colour in a gradient, ignored for solid):
> gsettings set org.gnome.desktop.background secondary-color '#000000'
To set the background type ('solid' for one color, 'horizontal' or 'vertical' for gradient):
> gsettings set org.gnome.desktop.background color-shading-type 'solid'
To enable Touchpad right-click on Ubuntu 18.04 running on a Lenovo ThinkPad, you'll need to install gnome-tweak-tool, which is used to tweak and customize the GNOME Desktop Environment.
$ sudo apt install gnome-tweak-tool
Once launched, navigate to “Keyboard & Mouse”
Under “Mouse Click Emulation”, Select “Area”
Close Tweaks tool and you should have Right-click on your Lenovo Thinkpad laptop working.
To start a systemd service, executing the instructions in the service's unit file, use the start command:
Since systemd knows to look for *.service files for service management commands, the command could just as easily be typed like this:
To stop a currently running service, you can use the stop command instead:
To restart a running service, you can use the restart command:
If the application in question is able to reload its configuration files (without restarting), you can issue the reload command to initiate that process:
If you are unsure whether the service has the functionality to reload its configuration, you can issue the reload-or-restart command. This will reload the configuration in-place if available. Otherwise, it will restart the service so the new configuration is picked up:
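Sketched out with a placeholder unit name (application.service stands in for whatever service you are managing), the lifecycle commands above look like this:

```shell
# Start the service (the .service suffix is optional)
sudo systemctl start application.service
sudo systemctl start application

# Stop the service
sudo systemctl stop application.service

# Restart the service
sudo systemctl restart application.service

# Reload configuration without restarting, if the service supports it
sudo systemctl reload application.service

# Reload if possible, otherwise restart
sudo systemctl reload-or-restart application.service
```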
To start a service at boot, use the enable command:
This will create a symbolic link from the system's copy of the service file (usually in /lib/systemd/system or /etc/systemd/system) into the location on disk where systemd looks for autostart files (usually /etc/systemd/system/some_target.target.wants).
To disable the service from starting automatically, you can type:
This will remove the symbolic link.
To enable and start a service in one command, use:
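With the same placeholder unit name, enabling and disabling look like this:

```shell
# Start at boot
sudo systemctl enable application.service

# Do not start at boot (removes the symbolic link)
sudo systemctl disable application.service

# Enable and start in one step
sudo systemctl enable --now application.service
```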
To check the status of a service on your system, you can use the status command:
There are also methods for checking for specific states. For instance, to check whether a unit is currently active (running), you can use the is-active command:
This will return the current unit state, which is usually active or inactive. The exit code will be "0" if it is active, making the result simpler to parse in shell scripts.
To see if the unit is enabled, you can use the is-enabled command:
This will output whether the service is enabled or disabled and will again set the exit code to "0" or "1" depending on the answer to the command question.
A third check is whether the unit is in a failed state. This indicates that there was a problem starting the unit in question:
This will return active if it is running properly or failed if an error occurred. If the unit was intentionally stopped, it may return unknown or inactive. An exit status of "0" indicates that a failure occurred and an exit status of "1" indicates any other status.
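The status checks described above, again with a placeholder unit name:

```shell
systemctl status application.service      # detailed state and recent log lines
systemctl is-active application.service   # prints active/inactive; exit 0 if active
systemctl is-enabled application.service  # prints enabled/disabled
systemctl is-failed application.service   # prints failed if the unit failed to start
```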
The commands so far have been useful for managing single services, but they are not very helpful for exploring the current state of the system. There are a number of systemctl commands that provide this information.
To see a list of all of the active units that systemd knows about, we can use the list-units command:
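The command itself:

```shell
systemctl list-units
```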
This will show you a list of all of the units that systemd currently has active on the system. The output will look something like this:
Output
UNIT LOAD ACTIVE SUB DESCRIPTION
atd.service loaded active running ATD daemon
avahi-daemon.service loaded active running Avahi mDNS/DNS-SD Stack
dbus.service loaded active running D-Bus System Message Bus
dcron.service loaded active running Periodic Command Scheduler
dkms.service loaded active exited Dynamic Kernel Modules System
getty@tty1.service loaded active running Getty on tty1
. . .
The output has the following columns:
UNIT: the systemd unit name
LOAD: whether the unit's configuration has been parsed by systemd. The configuration of loaded units is kept in memory.
ACTIVE: a summary state about whether the unit is active
SUB: a lower-level state with more detail about the unit
DESCRIPTION: a short textual description of the unit
Since the list-units command shows only active units by default, all of the entries above will show loaded in the LOAD column and active in the ACTIVE column. This display is actually the default behavior of systemctl when called without additional commands, so you will see the same thing if you call systemctl with no arguments:
We can tell systemctl to output different information by adding additional flags. For instance, to see all of the units that systemd has loaded (or attempted to load), regardless of whether they are currently active, you can use the --all flag, like this:
This will show any unit that systemd loaded or attempted to load, regardless of its current state on the system. Some units become inactive after running, and some units that systemd attempted to load may have not been found on disk.
You can use other flags to filter these results. For example, we can use the --state= flag to indicate the LOAD, ACTIVE, or SUB states that we wish to see. You will have to keep the --all flag so that systemctl allows non-active units to be displayed:
Another common filter is the --type= filter. We can tell systemctl to only display units of the type we are interested in. For example, to see only active service units, we can use:
The list-units command only displays units that systemd has attempted to parse and load into memory. Since systemd will only read units that it thinks it needs, this will not necessarily include all of the available units on the system. To see every available unit file within the systemd paths, including those that systemd has not attempted to load, you can use the list-unit-files command instead:
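The listing variants described above can be sketched as:

```shell
# All loaded units, including inactive ones
systemctl list-units --all

# Filter by a specific state
systemctl list-units --all --state=inactive

# Only service-type units
systemctl list-units --type=service

# Every unit file installed in the systemd paths, loaded or not
systemctl list-unit-files
```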
Units are representations of resources that systemd knows about. Since systemd has not necessarily read all of the unit definitions in this view, it only presents information about the files themselves. The output has two columns: the unit file and the state.
Output
UNIT FILE STATE
proc-sys-fs-binfmt_misc.automount static
dev-hugepages.mount static
dev-mqueue.mount static
proc-fs-nfsd.mount static
proc-sys-fs-binfmt_misc.mount static
sys-fs-fuse-connections.mount static
sys-kernel-config.mount static
sys-kernel-debug.mount static
tmp.mount static
var-lib-nfs-rpc_pipefs.mount static
org.cups.cupsd.path enabled
. . .
The state will usually be enabled, disabled, static, or masked. In this context, static means that the unit file does not contain an install section, which is used to enable a unit. As such, these units cannot be enabled. Usually, this means that the unit performs a one-off action or is used only as a dependency of another unit and should not be run by itself.
We will cover what masked means momentarily.
So far, we have been working with services and displaying information about the units and unit files that systemd knows about. However, we can find out more specific information about units using some additional commands.
To display the unit file that systemd has loaded into its system, you can use the cat command (this was added in systemd version 209). For instance, to see the unit file of the atd scheduling daemon, we could type:
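For the atd unit mentioned above, the command is:

```shell
systemctl cat atd.service
```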
Output
[Unit]
Description=ATD daemon
[Service]
Type=forking
ExecStart=/usr/bin/atd
[Install]
WantedBy=multi-user.target
The output is the unit file as known to the currently running systemd process. This can be important if you have modified unit files recently or if you are overriding certain options in a unit file fragment (we will cover this later).
To see a unit's dependency tree, you can use the list-dependencies command:
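Using sshd.service, as in the sample output that follows:

```shell
systemctl list-dependencies sshd.service
```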
This will display a hierarchy mapping the dependencies that must be dealt with in order to start the unit in question. Dependencies, in this context, include those units that are either required by or wanted by the units above it.
Output
sshd.service
├─system.slice
└─basic.target
├─microcode.service
├─rhel-autorelabel-mark.service
├─rhel-autorelabel.service
├─rhel-configure.service
├─rhel-dmesg.service
├─rhel-loadmodules.service
├─paths.target
├─slices.target
. . .
The recursive dependencies are only displayed for .target units, which indicate system states. To recursively list all dependencies, include the --all flag.
To show reverse dependencies (units that depend on the specified unit), you can add the --reverse flag to the command. Other useful flags are --before and --after, which can be used to show units that depend on the specified unit starting before and after themselves, respectively.
To see the low-level properties of a unit, you can use the show command. This will display a list of properties that are set for the specified unit using a key=value format:
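For the same sshd.service example:

```shell
systemctl show sshd.service
```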
Output
Id=sshd.service
Names=sshd.service
Requires=basic.target
Wants=system.slice
WantedBy=multi-user.target
Conflicts=shutdown.target
Before=shutdown.target multi-user.target
After=syslog.target network.target auditd.service systemd-journald.socket basic.target system.slice
Description=OpenSSH server daemon
. . .
If you want to display a single property, you can pass the -p flag with the property name. For instance, to see the conflicts that the sshd.service unit has, you can type:
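That single-property query looks like:

```shell
systemctl show sshd.service -p Conflicts
```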
Output
Conflicts=shutdown.target
We saw in the service management section how to stop or disable a service, but systemd also has the ability to mark a unit as completely unstartable, automatically or manually, by linking it to /dev/null. This is called masking the unit, and is possible with the mask command:
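For the Nginx unit used in the example output below:

```shell
sudo systemctl mask nginx.service
```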
This will prevent the Nginx service from being started, automatically or manually, for as long as it is masked.
If you check list-unit-files, you will see the service is now listed as masked:
Output
. . .
kmod-static-nodes.service static
ldconfig.service static
mandb.service static
messagebus.service static
nginx.service masked
quotaon.service static
rc-local.service static
rdisc.service disabled
rescue.service static
. . .
If you attempt to start the service, you will see a message like this:
Output
Failed to start nginx.service: Unit nginx.service is masked.
To unmask a unit, making it available for use again, use the unmask command:
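Again with the nginx example:

```shell
sudo systemctl unmask nginx.service
```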
This will return the unit to its previous state, allowing it to be started or enabled.
While the specific format for unit files is outside of the scope of this tutorial, systemctl provides built-in mechanisms for editing and modifying unit files if you need to make adjustments. This functionality was added in systemd version 218.
The edit command, by default, will open a unit file snippet for the unit in question:
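For the nginx example used in this section:

```shell
sudo systemctl edit nginx.service
```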
This will be a blank file that can be used to override or add directives to the unit definition. A directory will be created within the /etc/systemd/system directory which contains the name of the unit with .d appended. For instance, for nginx.service, a directory called nginx.service.d will be created.
Within this directory, a snippet will be created called override.conf. When the unit is loaded, systemd will, in memory, merge the override snippet with the full unit file. The snippet's directives will take precedence over those found in the original unit file.
If you wish to edit the full unit file instead of creating a snippet, you can pass the --full flag:
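Continuing with the nginx example:

```shell
sudo systemctl edit --full nginx.service
```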
This will load the current unit file into the editor, where it can be modified. When the editor exits, the changed file will be written to /etc/systemd/system, which will take precedence over the system's unit definition (usually found somewhere in /lib/systemd/system).
To remove any additions you have made, either delete the unit's .d configuration directory or the modified service file from /etc/systemd/system. For instance, to remove a snippet, we could type:
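For the nginx.service snippet directory described above, that would be:

```shell
sudo rm -r /etc/systemd/system/nginx.service.d
```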
To remove a full modified unit file, we would type:
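Sticking with the same example unit:

```shell
sudo rm /etc/systemd/system/nginx.service
```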
After deleting the file or directory, you should reload the systemd
process so that it no longer attempts to reference these files and reverts back to using the system copies. You can do this by typing:
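The reload in question is:

```shell
sudo systemctl daemon-reload
```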
Targets are special unit files that describe a system state or synchronization point. Like other units, the files that define targets can be identified by their suffix, which in this case is .target. Targets do not do much themselves, but are instead used to group other units together.
This can be used in order to bring the system to certain states, much like other init systems use runlevels. They are used as a reference for when certain functions are available, allowing you to specify the desired state instead of the individual units needed to produce that state.
For instance, there is a swap.target that is used to indicate that swap is ready for use. Units that are part of this process can sync with this target by indicating in their configuration that they are WantedBy= or RequiredBy= the swap.target. Units that require swap to be available can specify this condition using the Wants=, Requires=, and After= specifications to indicate the nature of their relationship.
The systemd process has a default target that it uses when booting the system. Satisfying the cascade of dependencies from that single target will bring the system into the desired state. To find the default target for your system, type:
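The query looks like this:

```shell
systemctl get-default
```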
Output
multi-user.target
If you wish to set a different default target, you can use the set-default command. For instance, if you have a graphical desktop installed and you wish for the system to boot into that by default, you can change your default target accordingly:
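For the graphical desktop case:

```shell
sudo systemctl set-default graphical.target
```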
You can get a list of the available targets on your system by typing:
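A sketch of the listing command:

```shell
systemctl list-unit-files --type=target
```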
Unlike runlevels, multiple targets can be active at one time. An active target indicates that systemd has attempted to start all of the units tied to the target and has not tried to tear them down again. To see all of the active targets, type:
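That listing would be:

```shell
systemctl list-units --type=target
```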
It is possible to start all of the units associated with a target and stop all units that are not part of the dependency tree. The command that we need to do this is called, appropriately, isolate. This is similar to changing the runlevel in other init systems.
For instance, if you are operating in a graphical environment with graphical.target active, you can shut down the graphical system and put the system into a multi-user command line state by isolating the multi-user.target. Since graphical.target depends on multi-user.target but not the other way around, all of the graphical units will be stopped.
You may wish to take a look at the dependencies of the target you are isolating before performing this procedure to ensure that you are not stopping vital services:
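Using the multi-user example from above:

```shell
systemctl list-dependencies multi-user.target
```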
When you are satisfied with the units that will be kept alive, you can isolate the target by typing:
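The isolate command itself:

```shell
sudo systemctl isolate multi-user.target
```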
There are targets defined for important events like powering off or rebooting. However, systemctl also has some shortcuts that add a bit of additional functionality.
For instance, to put the system into rescue (single-user) mode, you can just use the rescue command instead of isolate rescue.target:
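Concretely:

```shell
sudo systemctl rescue
```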
This will provide the additional functionality of alerting all logged in users about the event.
To halt the system, you can use the halt command:
To initiate a full shutdown, you can use the poweroff command:
A restart can be started with the reboot command:
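The three shortcuts in full:

```shell
sudo systemctl halt
sudo systemctl poweroff
sudo systemctl reboot
```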
These all alert logged-in users that the event is occurring, something that merely running or isolating the target will not do. Note that most machines will link the shorter, more conventional commands for these operations so that they work properly with systemd.
For example, to reboot the system, you can usually type:
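The conventional shortcut:

```shell
sudo reboot
```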
To create a new Tmux session and attach to it, run the following command from the Terminal:
$ tmux
Or,
$ tmux new
To detach from the current Tmux session, press Ctrl+b and then d. You don't press both shortcuts at the same time: first press "Ctrl+b", then press "d".
Once you’re detached from a session, you will see an output something like below.
[detached (from session 0)]
If you use multiple sessions, you might get confused about which programs are running in which session. In such cases, you can just create named sessions.
For example, if you want to perform some web-server-related activities in a session, create the Tmux session with a custom name, for example "webserver" (or any name of your choice).
$ tmux new -s webserver
To view the list of open Tmux sessions, run:
$ tmux ls
You can attach to the last created session by running this command:
$ tmux attach
Or,
$ tmux a
If you want to attach to any specific named session, for example “myoffice”, run:
$ tmux attach -t myoffice
Or, for short:
$ tmux a -t myoffice
When you're done and no longer require a Tmux session, you can kill it at any time with the command:
$ tmux kill-session -t myoffice
To kill the session while attached, press Ctrl+b and x. Hit "y" to kill the session.
You can verify if the session is closed with the tmux ls command.
To Kill Tmux server along with all Tmux sessions, run:
$ tmux kill-server
Be careful! This will terminate all Tmux sessions even if there are any running jobs inside the sessions without any warning.
When there are no running Tmux sessions, you will see the following output:
$ tmux ls
no server running on /tmp/tmux-1000/default
Tmux has an option to split a single Tmux session window into multiple smaller windows called Tmux panes.
This way we can run different programs on each pane and interact with all of them simultaneously.
Each pane can be resized, moved and closed without affecting the other panes. We can split a Tmux window either horizontally or vertically or both at once.
To split a pane horizontally, press Ctrl+b and " (double quotation mark).
To split a pane vertically, press Ctrl+b and %.
We can also split a pane horizontally and vertically at the same time. Take a look at the following screenshot.
First, I did a horizontal split by pressing Ctrl+b ", and then split the lower pane vertically by pressing Ctrl+b %.
As you see in the above screenshot, I am running three different programs on each pane.
To switch between panes, press Ctrl+b and Arrow keys (Left, Right, Up, Down).
In the previous example, we ran three different commands, one in each pane. However, it is also possible to send the same command to all panes at once.
To do so, press Ctrl+b, type the following command, and hit ENTER:
:setw synchronize-panes
Now type any command on any pane. You will see that the same command is reflected on all panes.
To swap panes, press Ctrl+b and o.
Press Ctrl+b and q to show pane numbers.
To kill a pane, simply type exit and press the ENTER key. Alternatively, press Ctrl+b and x. You will see a confirmation message. Just press "y" to close the pane.
Now, I am going to zoom the left pane, which has the top command running. To do so, I switch to the left pane and press Ctrl+b followed by z (Ctrl+b is the tmux prefix) to zoom the pane. It is that simple!
To autostart Tmux session when connecting via SSH, edit your remote system’s ~/.bash_profile file:
$ nano ~/.bash_profile
If the file is not available, just create it.
And add the following lines in it:
if [ -z "$TMUX" ]; then
    tmux attach -t default || tmux new -s default
fi
Save and close the file. Log out and log back into the remote system. You will land in a new Tmux session named 'default'.
Now, start any process inside the Tmux session and detach from it by pressing Ctrl+b followed by d. The process will keep running inside the Tmux session even after the network connection is dropped. Once the network connection is back, SSH into the remote system and you will see that the job is still running!
Toggling Status Line
Turn it off:
$ tmux set-option status off
And, turn it on:
$ tmux set-option status on
You can also bind a key (t in this example) to toggle the status line on and off, as follows:
$ tmux bind-key t set-option status
Open a terminal and use the following command:
# tmux
There's a difference between Session vs Window vs Pane (split) in tmux.
Create a new window: [ctrl + b] + c
List all windows: [ctrl + b] + w
Rename the current window: [ctrl + b] + ,
Create a named session: tmux new -s sessionname
Rename the current session: [ctrl + b] + $
Detach from the current session: [ctrl + b] + d
List sessions: tmux ls
Attach to a session: tmux attach -t session_name_or_number
Split a pane Horizontal (Left/Right): [ctrl + b] + %
Split a pane Vertical (Up/Down): [ctrl + b] + "
Move between panes: [ctrl + b] + arrow keys
Zoom the current pane: [ctrl + b] + z
Kill the current pane: [ctrl + b] + x
Resize panes
This assumes that you’ve hit ctrl + b and : to get to the command prompt
Here are some additional tmux pane resizing examples:
:resize-pane -D (Resizes the current pane down)
:resize-pane -U (Resizes the current pane upward)
:resize-pane -L (Resizes the current pane left)
:resize-pane -R (Resizes the current pane right)
:resize-pane -D 10 (Resizes the current pane down by 10 cells)
:resize-pane -U 10 (Resizes the current pane upward by 10 cells)
:resize-pane -L 10 (Resizes the current pane left by 10 cells)
:resize-pane -R 10 (Resizes the current pane right by 10 cells)
tmux kill-session -t session_name_or_number
tmux kill-server
Command | Explanation |
tmux kill-session -t name | Kill the mentioned session |
tmux kill-session -a | Kill all sessions except the last used session |
tmux kill-server | Kill all tmux sessions |
tmux attach -t name | Attach/enter the mentioned session |
[Ctrl +b] + d | Detach/exit the current session |
[Ctrl +b] + $ | Name or rename current session |
[Ctrl +b] + c | Create new window |
[Ctrl +b] + , | Name or rename current window |
[Ctrl +b] + w | List all windows |
[Ctrl +b] + & | Kill the current window |
[Ctrl +b] + n/p/N | Move to next/previous/Nth window |
[Ctrl +b] + % | Create horizontal pane |
[Ctrl +b] + " | Create vertical pane |
[Ctrl +b] + space key | Toggle the layout of current pane |
[Ctrl +b] + z | Zoom into the current pane |
[Ctrl +b] + x | Kill the current pane |
If you’re using Ansible >2.2.0, you can set the ansible_python_interpreter configuration option to /usr/bin/python3:
ansible my_ubuntu_host -m ping -e 'ansible_python_interpreter=/usr/bin/python3'
or in your inventory file:
[ubuntu_hosts]
hostname1
hostname2
[ubuntu_hosts:vars]
ansible_python_interpreter=/usr/bin/python3
Using the find command
find ~/bin -iname filename
or:
find ~/bin -iname "filen*"
Moving large number of files
find /source/directory -mindepth 1 -maxdepth 1 -name '*' -print0 | xargs -0 mv -t /target/directory
using find with rename recursively
find . -depth -exec rename -v 's/texttoremove/newtext/' {} +
you may have to run it twice if the subfolders get renamed in the process
or:
find . -type f -iname "Ghost*" -exec rename -v 's/S02\ E/S02E/g' {} \;
move files from subfolders to parent folder
Move to target folder and execute:
find . -mindepth 2 -type f -print -exec mv {} . \;
Delete empty directories
find . -empty -type d -delete
find . -empty -type f -delete
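As a quick sanity check of the two -empty one-liners above, here is a disposable sketch (the directory and file names are made up for the demo):

```shell
# Build a throwaway tree: one non-empty file, one empty file, one empty dir.
demo=$(mktemp -d)
mkdir -p "$demo/full" "$demo/hollow"
echo "content" > "$demo/full/data.txt"
touch "$demo/empty.txt"

find "$demo" -empty -type f -delete   # removes empty.txt only
find "$demo" -empty -type d -delete   # removes hollow/ only

ls "$demo"   # only "full" remains
```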
Command
find /tmp -name "foo.txt"  Find a file called foo.txt in /tmp
find /tmp -iname "foo.txt"  Find a file (case insensitive) called foo.txt in /tmp
find /tmp -name "foo*"  Find a file starting with the substring foo
find /tmp -regex ".*f.*t"  Find by regex pattern (the regex must match the full path)
Time
-mtime -7  Modified within the last 7 days
-mtime +1 -mtime -7  Modified more than 1 day ago, but no more than 7
-daystart  Start from today rather than from 24 hours ago
Recursion
-maxdepth 2  Go no more than 2 subdirectories deep during search
-mindepth 4  Ignore results that are less than 4 subdirectories deep
-mount, -xdev  Don't search directories contained on another filesystem
File type
find ~ -type d -iname "foo"  Find a directory in ~ called foo
d  directory
l  symlink
p  named pipe (FIFO)
f  regular file
s  socket
b  block (buffered) special
c  character (unbuffered) special
D  Door (GNU find on Solaris)
Other attributes
-uid 1000  User ID is 1000
-user tux  User name is tux
-writable, -readable  File is writable, readable
-perm u=rwx, -perm 700  Permissions are exactly 700
-perm -u+w,g+w, -perm -220  User and group both have write permission
-perm /a+w, -perm /222  At least one write permission bit is set
-size +5M  File is larger than 5 MB
-true  Always true
Actions
-exec grep foo {} \;  Execute grep on each file found
-ok sed 's/foo/bar/g' {} \;  Prompt user to execute sed on each file found
-execdir chmod 700 {} \;  Run chmod (in the subdirectory of the result) on each file found
-fprint out.txt  Write results to out.txt, newline-separated
-fprint0 out.txt  Write results to out.txt with no newlines (NUL-separated)
-ls  Print results in ls -dils format
-fls output.txt  Write results, in ls -dils format, to output.txt
-prune  Don't descend into subdirectories
-quit  Quit (usually used after other actions)
Find multiple files in Linux
The find command is used in various ways. One thing you don't want to do as a system administrator is work harder than you need to. Instead of running the same command to search for one file over and over, you can use the find command to locate multiple files at the same time.
sudo find /home -type f -name file.txt
This one-liner can be broken down. I find it best almost to read it as a sentence:
- search the /home directory
- for a regular file (-type f) rather than a directory (-type d)
- named file.txt (-name file.txt)
Find large files in Linux
You can also use find
to discover large files in Linux. Finding large files has proven helpful to me in the long run. find
can help to locate large files quickly, such as backups and ISO files.
sudo find / -type f -size +500000k -exec ls -lh {} \;
This one-liner can be broken down:
- search the / directory
- for a regular file (-type f)
- larger than 500000k
- and run ls -lh on each file found
Find specific file types in Linux
Another good method is to locate file extensions using the find
command. I find this helpful, as it has shown me ways of finding specific files with only a specific keyword. In this case, the example below is looking for files that only contain a specific extension:
sudo find / -type f \( -name "*.sh" -o -name "*.txt" \)
To dissect this:
- search the / directory
- for a regular file (-type f) rather than a directory (-type d)
- whose name ends in .sh or .txt
Find modified files in Linux
The last example shows how to find files changed more than 50 days ago (-ctime +50) and remove them. This can be helpful when you need to clear out stale files for a security reason or if there are unwanted users on the network accessing other files.
sudo find / -type f -ctime +50 -exec rm -f {} \;
The command above shows:
- search the / directory
- for a regular file (-type f)
- changed more than 50 days ago (-ctime +50)
- and run rm -f on each file found
This can help remove those malicious files all in one go. You just have to make sure that the files you select are the files you want to remove. One way to check is to run the command without the -exec section to see the files that come up in the output. If there are a large number of files, redirect the output into a file:
find / -type f -ctime +50 > files.txt
The content can be reviewed and verified before the deletion runs, so you don't end up with a one-liner that removes the /etc folder. Not ideal.
check out: find . -criteria etc -print0 | xargs -0 command (might be safer than -exec stuff)
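A minimal sketch of why the NUL-delimited form is safer: filenames with spaces survive the pipe intact. The directories here are temporary ones made up for the demo:

```shell
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/plain.log" "$src/with space.log"

# NUL-delimited handoff: "with space.log" is passed to mv as a single
# argument, where a plain `find | xargs mv` would split it on the space.
find "$src" -mindepth 1 -maxdepth 1 -print0 | xargs -0 mv -t "$dst"

ls "$dst"
```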
List Symlinks
sudo find / -type l
Finding files by size
find -size +1G -ls 2>/dev/null
The +1G means "larger than a gigabyte".
Finding files by inode number
find -inum 919674 -ls 2>/dev/null
919674 is the inode number.
Finding files with a specific file owner or group
find /home -user bob -name "*.png" -ls
find /tmp -group admins -ls
Finding files with no owners or groups
find /tmp -nouser -ls
Finding files by last update time
find /home/bob -mtime -1
Finding files by when permissions were last changed
find . -ctime -1 -ls
Finding files based on last access times
find -name "*.pdf" -atime -2
Finding files based on their age relative to another file
find . -newer dig1 -ls
Finding files by type
b block (buffered) special
c character (unbuffered) special
d directory
p named pipe (FIFO)
f regular file
l symbolic link
s socket
find . -type l -ls
Limiting how deeply find should look
The -mindepth and -maxdepth options control how deeply into the file system the search will look (from the current location or starting point).
find -maxdepth 3 -name "*loop"
Finding files only if empty
find . -maxdepth 2 -empty -type f -ls
Finding files by permissions
find -perm 777 -type f -ls
Using find to help you get rid of files
find . -name filename -exec rm {} \;
The {} represents the name of each of the files located by the search criteria.
replace -exec with -ok if you want it to ask for a confirmation before it removes any file.
find . -name runme -ok rm -rf {} \;
sudo vim /etc/pulse/default.pa
and comment out the line "load-module module-switch-on-port-available"
Download mkvtoolnix from:
https://mkvtoolnix.download/downloads.html
You can use mkvmerge (from mkvtoolnix) to inspect the file to see what the ID of the subtitles data will be:
mkvmerge -i input.mkv
This will list the tracks, for example like this:
File 'input.mkv': container: Matroska
Track ID 1: video (V_MPEG4/ISO/AVC)
Track ID 2: audio (A_AAC)
Track ID 3: subtitles (S_TEXT/UTF8)
Based on the ID of the track, call the following command, where TID is the track ID you identified above. You can use any srt filename that you want.
mkvextract tracks input.mkv TID:output.srt
So, in our case, that would have been:
mkvextract tracks input.mkv 3:subs.srt
Use ‘lsblk’ to determine which drive is the SD card.
To copy an iso or img file to the SD card, use:
sudo dd bs=4M if=2021-10-30-raspios-bullseye-armhf-full.img of=/dev/sdd status=progress oflag=sync
To backup an SD card to a file, use:
sudo dd bs=4M if=/dev/sdd of=sd_backup.iso status=progress oflag=sync
After a long stretch of not cleaning up the posts, I really cleaned them up.. Be ready for all new stuff!
The DNS forwarding seems to have been implemented now.
Thanks to Jason Eckert for his help in getting this transfer completed.
Now to start uploading articles….
Just put in the order to forward the DNS from the old host to the new one, at 2015-Mar-03 11:03…
We’re moving to a new host.. keep your fingers crossed!