I’ve been setting up a new Proxmox server and messing around with VMs, and wanted to know what kind of useful commands I’m missing out on. Bonus points for a little explainer.

journalctl | grep -C 10 'foo' was useful for me when I needed to troubleshoot some fstab mount fuckery on boot. It pipes journalctl output (boot logs) into grep to find ‘foo’, and prints 10 lines of context before and after each match.
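
Spelled out, that looks something like this (adding -b to limit output to the current boot is my own tweak, not part of the original one-liner):

    journalctl -b | grep -C 10 'foo'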

  • roran@sh.itjust.works · 27 days ago
    cd `pwd`
    

    for when you want to stay in a dir that gets deleted and recreated.

    cat /proc/<pid>/exe > program
    cat /proc/<pid>/fd/<fd> > file
    

    to undelete a still-running program, or files that are deleted but still open in a running process.
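
    To find the numbers to plug in, something like this works (my_program is a placeholder; deleted-but-open files show up marked "(deleted)" in /proc):

    # find the PID of the still-running program
    pgrep -f my_program

    # list its open file descriptors; deleted targets are marked "(deleted)"
    ls -l /proc/<pid>/fd | grep deleted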

  • demonsword@lemmy.world · 27 days ago

    Something that really improved my life was learning to properly use find, grep, xargs and sed (there’s a small combined example after the two ‘hacks’ below). Besides that, there are these two little ‘hacks’ that are really handy at times…

    1- find out which process is using some local port (ss being the modern netstat replacement):

    $ ss -ltnp 'sport = :<port-number>'

    2- find out which process is consuming your bandwidth:

    $ sudo nethogs
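
    And the promised find/grep/xargs/sed example, just as a sketch (the pattern and file glob are placeholders):

    # replace OLD with NEW in every .conf file under the current directory
    $ find . -type f -name '*.conf' -print0 | xargs -0 sed -i 's/OLD/NEW/g'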

    • Ephera@lemmy.ml · 27 days ago

      I always just do ss -ltnp | grep <port-number>, which filters well enough for my purposes and is a bit easier to remember…

    • eli@lemmy.world · 26 days ago

      You can do “ss -aepni” and that will dump literally everything ss can get its hands on.

      Also, ss can’t find everything; it does have some limitations. I believe ss can only see what the kernel can see (the host’s own connections), but tcpdump can see the actual network flow at the network layer: incoming, outgoing, hex(?) data in transit, etc.

      I usually try to use ss first for everything since I don’t think it requires sudo access for the majority of its functionality, and if it can’t find something then I bring out sudo tcpdump.
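
      For reference, a basic tcpdump invocation along those lines (the interface, port and packet count are placeholders to adjust):

      # capture 20 packets to/from port 443 on any interface, skip DNS/port name
      # resolution, and print headers plus a hex/ASCII dump of each packet
      sudo tcpdump -i any -nn -c 20 -X 'port 443'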

  • Oinks@lemmy.blahaj.zone · 27 days ago

    I’m not much of a one-liner collector but I like this one:

    vim +copen -q <(grep -r -n <search> .) 
    

    which searches for some string and opens all instances in vim’s quickfix list (and opens the quickfix window too). Navigate the list with :cn and :cp. Complex-ish edits are the obvious use case, but I use this for browsing logs too.

    Neovim improves on this with nvim -q - and [q/]q, and plenty of fuzzy finder plugins can do a better version using ripgrep, but this basic one works on any system that has GNU grep and vim.
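
    The Neovim variant would look something like this (assuming ripgrep is installed; --vimgrep makes rg print file:line:column:match, which the quickfix list understands):

    rg --vimgrep <search> . | nvim +copen -q -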

    Edit:

    This isn’t exactly a command, but I can’t imagine not knowing about this anymore:

    $ man grep
    /  -n       # double space before the dash!
    

    brings you directly to the documentation of the -n option. Not the useless synopsis or any other paragraph that mentions -n in passing, but the actual doc for this option (OK, very occasionally it fails due to word wrap, but assuming the option is documented, it works 99% of the time).

  • Ŝan • 𐑖ƨɤ@piefed.zip · 27 days ago

    ripgrep has mostly replaced grep for me, and I am extremely conservative about replacing core POSIX utilities - muscle memory is critical. I also tend to use fd, mainly because of its forking -x, but its advantages over find are less stark þan rg’s improvements over grep.

    nnn is really handy; I use it for everything but the most trivial renames, copies, and moves - anyþing involving more þan one file. It’s especially handy when moving files between servers because of þe built-in remote mounting.
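
    For anyone unfamiliar, fd’s -x runs a command once per result, forked in parallel; a hedged example (the extension and command are placeholders):

    # gzip every .log file found, one invocation per file
    fd -e log -x gzip {}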

    • marighost@piefed.social (OP) · 27 days ago

      Would you recommend nnn for transferring ~5 TB of media between two local servers? Seems like a weird question, but it’s something I’ll have to do soon.

      • Ŝan • 𐑖ƨɤ@piefed.zip · 27 days ago

        No. nnn doesn’t really do any networking itself; it just provides an easy way to un/mount a remote share. nnn is just a TUI file manager.

        For transferring 5TB of media, I’d acquire a 5TB USB 3.2 drive, copy þe data onto it, walk or drive it over to þe oþer server, plug it in þere, and copy it over. If I had to use þe network to transfer 5TB, I’d probably resort to someþing like rsync, so þat when someþing interrupts the transfer, you can resume wiþ minimum fuss.

        • marighost@piefed.social (OP) · 27 days ago

          I could very easily, I’ve just only used rsync a handful of times for one-off files or small directories. Thinking of using it for several TBs scares me 😅

          • ranzispa@mander.xyz · 27 days ago

            When transferring large amounts of data I’d most definitely advise using rsync. If something fails or the connection drops, everything is okay, as it’ll pick up where it left off.
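
            A hedged sketch of the kind of invocation that gives you that (the paths and host are placeholders):

            # archive mode, keep partially-transferred files, show overall progress;
            # re-running the same command resumes roughly where it stopped
            rsync -a --partial --info=progress2 /srv/media/ user@otherbox:/srv/media/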

    • emb@lemmy.world · 27 days ago

      rg and fd have been so much easier to use than the classics to me. Great replacements!

      bat is another one that I think can be worth switching to, though not as essential.

  • @marighost I don’t use Proxmox, but for various random Linux commands… I’ve got a wealth. :D In the journalctl vein:

    journalctl -xeu <service-name>

    ex: journalctl -xeu httpd

    Gives you the journal output specific to the given service; in this example, httpd.

    Also, journalctl is more than boot logs; it’s all of your logs from anything controlled by systemd: mounts, services, timers, even sockets.

    For example, on my system I have /var/home as a mount. systemctl and journalctl can give me info on it with:

    systemctl status var-home.mount
    journalctl -xeu var-home.mount

    You can see all of the mounts with:
    systemctl list-units --type=mount

    Or see all of your services with:
    systemctl list-units --type=service

    Or all of your timers with:
    systemctl list-timers

    We do a weekly show on getting into Linux terminals, commands, and tricks, and share our experience… It’s called Into the Terminal, on the Red Hat Enterprise Linux YouTube channel. I’ll send you a link if you’re interested.

  • netvor@lemmy.world · 25 days ago

    I think vipe is underrated; it takes whatever is on its stdin, shoves it in a temp file, opens your favorite text editor (from the EDITOR environment variable) and waits until you finish editing the file and close it. Then it outputs the edited text to its stdout.

    It’s useful in all kinds of pipes, but personally I use it tons of times a day in combination with xclip, in something like this:

    xclip -o -selection primary | vipe | xclip -i -selection clipboard
    

    (I actually have a bit fancier version of this pipe wrapped in a Bash function named xvxx.)

    On my setup, this takes my current text selection, opens it in vim, and lets me edit it before it sends it to the “traditional” Ctrl+C clipboard. It’s super handy for editing comments like this one.

    If you often find yourself writing complex Bash pipelines that involve generating some output and then running a set of commands per line (perhaps in a while loop), sometimes replacing the “selection part” with vipe can be easier than coming up with the right filter.

    find_or_ls_or_grep_something | vipe | while read -r foo; do some_action "$foo"; done
    

    And if you are really confident with Bash, you can go even a step further and do:

    find_or_ls_or_grep_something | vipe | bash
    

    and just create a large, dumb, one-off script, manually curating exactly what gets done. Remember that editing large lists in vim can be made much easier by using vim’s ability to invoke unix filter commands (those greps and uniqs and seds et al.) on the buffer, and/or the block editing mode using Ctrl+V (that last method goes really well with column -t).
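
    For illustration, a few of those buffer-filter commands typed at vim’s : prompt (the grep pattern is a placeholder):

    " sort the whole buffer and drop duplicate lines
    :%!sort -u
    " keep only lines matching a pattern
    :%!grep 'pattern'
    " align whitespace-separated columns, which pairs well with Ctrl+V block edits
    :%!column -t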

  • Auster@thebrainbin.org · 27 days ago

    From a file I’ve kept since I started using Linux nearly 5 years ago:

    Display the RAM usage:
    watch -n 5 free -m
    Useful if you open way too much stuff and/or you’re running on budget processing power, and don’t want your computer freezing for 3 hours.
    Also useful if you use KDE’s Konsole integrated into the Dolphin file manager and you must, for some reason, not close the Dolphin window. You’d just need to open Dolphin’s integrated Konsole (F4), run the command and, without closing it, press F4 again to hide the Konsole.

    Terminal-based file browser that sorts by total size:
    ncdu
    why is the cache folder 50 GB big?

    Mass-check MD5 hashes for all files in the path, including subfolders:
    find . -type f \( -not -name "md5sum.txt" \) -exec md5sum '{}' \; > md5sum.txt
    Change md5sum (and optionally the output file’s name) for your favorite/needed hash calculator command.

    For mounting ISOs and similar formats:
    sudo mount -o loop path/to/iso/file/YOUR_ISO_FILE.ISO /mnt/iso

    And unmounting the file:
    sudo umount /mnt/iso
    Beware there’s no N in the umount command

    For creating an ISO from a mounted disc:
    dd if=/dev/cdrom of=image_name.iso

    And for a folder and its files and subfolders:
    mkisofs -o /path/to/output/disc.iso /path/from/input/folder

    Compress and split files:
    7z a -v100m output_base_file.7z input_file_or_folder

    Changes the capslock key into shiftlock on Linux Mint (not tested in other distros):
    setxkbmap -option caps:shiftlock
    Was useful when the shift key from a previous computer broke and I didn’t have a spare keyboard.

    If you want to run Japanese programs on Wine, you can use:
    LC_ALL=ja_JP wine /path/to/the/executable.exe
    There are other options, but this is the one that worked best for me, so I kinda forgot to take note of the others.

    List all files in a given path and its subfolders:
    find path_to_check -type f
    Tip: add > output.txt or >> output.txt if you’d rather have the list in a TXT file.

    Running a program in Wine in a virtual desktop:
    wine explorer /desktop=session_name,screen_size /path/to/the/executable.exe

    E.g.:
    wine explorer /desktop=MyDesktop,1920x1080 Game.exe

    Useful if you don’t want to use the whole screen, if there are integration issues between Linux, Wine and the program, or if the program itself has issues when alt-tabbing or similar (looking at you, 2000’s Windows games).

    Download package installers along with all their dependencies:
    apt download package_name
    Asks for the sudo password even when not running as sudo. Thankfully, downloaded files come with normal user permissions. It also comes with an installation script, but if you want to run it offline, iirc you need to change apt install in the script to dpkg -i.

    If you use a program you’d rather not connect to the internet but without killing the whole system’s connection, try:
    firejail --net=none the_command_you_want_to_run

    Or if you want to run an appimage:
    firejail --net=none --appimage the_command_you_want_to_run

    If you want to make aliases (similar to commands on Windows’ PATH) and your system uses bash, edit the file $HOME/.bashrc (e.g. with Nano) and make the system pick up the updated file by either logging out and in, or running . ~/.bashrc
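
    A minimal sketch of what that looks like (the alias names and commands are just examples):

    # in ~/.bashrc
    alias ll='ls -alF'
    alias update='sudo apt update && sudo apt upgrade'

    # then reload it in the current shell
    . ~/.bashrc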

    Python/Pip have some nifty tools, like Cutlet (outputs Japanese text as Romaji), gogrepoc (for downloading stuff from your account using GOG’s API), itch-dl (same as gogrepoc but for Itch.io), etc. If you lack the coding skills and don’t mind using LLMs, you could even ask one to make some simpler Python scripts (key word though: simpler).
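
    Assuming a tool is published on PyPI under that name (worth checking each project’s README, since some may need to be installed from their repos instead), installation looks something like:

    # install into your user site-packages rather than system-wide
    pip install --user cutlet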

    If you want to run a video whose codec isn’t supported by your system (e.g. Raspbian, which only supports H.264 up to 1080p):
    ffmpeg -i input_video.mkv -map 0 -c:v libx264 -preset medium -crf 23 -vf scale=1920:1080 -c:a copy -c:s copy output_video.mkv

  • kittenroar@beehaw.org · 27 days ago

    systemd-run lets you run a command under some resource limits, e.g.

    systemd-run --scope -p MemoryLimit=1000M -p CPUQuota=20% ./heavyduty.sh
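
    On a cgroup v2 system (most current distros), the equivalent property is MemoryMax rather than the legacy MemoryLimit, so the same idea would be spelled:

    systemd-run --scope -p MemoryMax=1000M -p CPUQuota=20% ./heavyduty.sh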
    
    • kittenroar@beehaw.org · 27 days ago

      ulimit can also be used to define limits, but per user/shell session rather than per process. This could protect you against, e.g., a fork bomb.
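
      A small illustration (the numbers are arbitrary; -u caps how many processes the user can spawn, which is what blunts a fork bomb):

      # cap this shell and its children at 2000 processes
      ulimit -u 2000
      # and at roughly 4 GB of virtual memory (value is in kilobytes)
      ulimit -v 4194304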

  • jim3692@discuss.online · 26 days ago

    docker run --rm -it --privileged --pid=host debian:12 nsenter -a -t1 "$(which bash)"

    If your user is in the docker group, and you are not running rootless Docker, this command opens a bash shell as root.

    How it works:

    • docker run --rm -it creates a temporary container and attaches it to the running terminal
    • --privileged disables some of the container’s protections
    • --pid=host attaches the container to the host’s PID namespace, allowing it to access all running processes
    • debian:12 uses the Debian 12 image
    • nsenter -a -t1 enters all the namespaces of the process with PID 1, which is the host’s init since we use --pid=host
    • "$(which bash)" finds the path of the host’s bash and runs it inside the namespaces (plain bash may not work on NixOS hosts)

  • HiddenLayer555@lemmy.ml · 27 days ago

    parallel, easy multithreading right in the command line

    inotifywait, for seeing what files are being accessed/modified

    tail -F, for a live feed of a log file
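
    A hedged example of the kind of thing parallel makes easy (the glob and job count are placeholders):

    # compress every .log file, running at most 4 jobs at a time
    find . -name '*.log' | parallel -j4 gzip {}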

    • ranzispa@mander.xyz · 27 days ago

      “should be the number of hardware threads available on the system by default”

      No, not at all. That is a terrible default. I do work a lot on number churning and sometimes I have to test stuff on my own machine. Generally I tend to use a safe number such as 10, or if I need to do something very heavy I’ll go to 1 less than the actual number of cores on the machine. I’ve been burned too many times by starting a calculation and then my machine stalls as that code is eating all CPU and all you can do is switch it off.