I know Firefox has the very useful “Copy Clean Link” option in the context menu, but I would like a similar feature for copying links from any other software, like Spotify for example. So I am looking for some software that hooks into the clipboard pipeline and cleans any URL that gets added. I tried googling for something like it, but was completely unsuccessful. Does anyone have a clue how I might go about achieving this?

Thanks in advance :)

Edit: I found out about Klipper’s actions, which can run a command when a string matching a regex is added to the clipboard buffer. I am not sure how to use this properly though, so any help is appreciated!

  • utopiah@lemmy.ml · 4 hours ago

    Looks like https://old.reddit.com/r/kde/comments/d3m0fz/how_to_open_links_in_mpv_with_klipper/ is a good starting point, i.e.:

    • Open menu in system tray.
    • Right click on Clipboard => Configure Clipboard.
    • Go to Actions Configuration => Add Action.

    then… try it! :D I’m just discovering this too, but it seems like the right way.

    That said, I’d be cautious and limit it to just the cases you actually need, e.g. Spotify links, at least at first, because I imagine one can get into hairy edge cases quickly.
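
    For what it’s worth, a single Spotify-only action could look something like this. It’s only a sketch: it assumes Klipper substitutes the matched clipboard text for %s and that wl-clipboard is installed, and the regex and command are examples rather than tested config.

    # Match (regexp):  https?://open\.spotify\.com/\S+
    # Command:         sh -c 'wl-copy "${1%%\?*}"' _ %s
    #
    # ${1%%\?*} drops everything from the first "?" onward (e.g. the
    # ?si=... share parameter Spotify appends) and wl-copy puts the
    # result back on the Wayland clipboard.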

    Keep us posted!

  • JubilantJaguar@lemmy.world · 1 day ago

    You never define “clean”.

    To strip excess URL parameters (i.e. those beginning with “&”, which are almost certainly junk) when the clipboard buffer contains a URL and only a URL (Wayland only):

    # Grab the clipboard, collapse whitespace, and extract the URL.
    if url=$(wl-paste --no-newline | awk '$1=$1' ORS=' ' | grep -Eo 'https?://[^ ]+') ; then
      # Drop everything from the first "&" onward and copy the result back.
      wl-copy "${url%%\&*}"
    fi
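
    So for a (hypothetical) clipboard entry like https://example.com/article?id=42&utm_source=foo, this copies back https://example.com/article?id=42: the first parameter survives and everything from the first “&” onward is dropped.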
    
    • enkers@sh.itjust.works · 1 day ago

      Edit: Oh, OP basically already said the same thing.

      I think it really depends on the website and even where you are on the website. For example, if you’re on YT, the watch?v=<b64_id> is probably not something you want to throw away. If you’re on a news site like imaginarynews.com/.../the-article-title/?tracking-garbage=<...> then you probably do. It’s just a matter of having “sane” defaults that work as most people would expect.

      • JubilantJaguar@lemmy.world · 24 hours ago

        Sure, but my script only gets rid of the second and later parameters, i.e. the ones introduced with & rather than ?. Personally I don’t think I’ve ever seen a single site where an & param is critical. These days there are few where the ? matters either, but yes, YT is a holdout.

        • ivn@jlai.lu · 23 hours ago

          There are plenty of sites that use more than one parameter. It’s true that a lot of sites now use the history API instead of URL parameters, but you can still find plenty, and you have no guarantee about the parameter order. Any site with a search page that has a few options will probably use URL parameters instead of the history API. They’re easier to parse and will end up being shorter most of the time.
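
          For instance, a hypothetical search URL like https://example.com/search?q=klipper&sort=new&page=2 needs all three parameters, in whatever order, to reproduce the same results.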

    • silly goose meekah@lemmy.worldOP · 1 day ago

      Fair enough, I hadn’t given that too much thought myself until now. After playing around with Firefox’s URL cleaning, I realized there are some parameters I want to keep. So, by “clean” I mean removing all unnecessary parameters from the URL.

      For example, https://youtu.be/jNQXAC9IVRw would stay as it is, but https://www.youtube.com/watch?v=jNQXAC9IVRw keeps its parameter, because it is necessary.

      I guess replicating the logic for deciding which parameters to keep is not trivial, so the easiest solution is probably just manually pasting links into Firefox and copying them cleanly from there. Thanks for providing some code, though!
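
      For reference, a rough sketch of that per-site whitelist idea in shell, assuming Wayland’s wl-clipboard tools; the host patterns and kept parameters below are only examples:

      # Keep only whitelisted query parameters, depending on the host.
      url=$(wl-paste --no-newline)
      base=${url%%\?*}                      # everything before the first "?"
      query=${url#"$base"}; query=${query#\?}

      case $url in
        *youtube.com/watch*|*youtu.be/*) keep='v|t|list|index' ;;
        *)                               keep='' ;;   # default: strip all parameters
      esac

      if [ -n "$keep" ] && [ -n "$query" ]; then
        # Split on "&", keep only whitelisted names, and glue them back together.
        kept=$(printf '%s\n' "$query" | tr '&' '\n' | grep -E "^($keep)=" | paste -sd '&' -)
      else
        kept=''
      fi

      if [ -n "$kept" ]; then wl-copy "$base?$kept"; else wl-copy "$base"; fi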

    • traches@sh.itjust.works · 1 day ago

      Query parameters are junk? They have tons of legitimate uses; they’re one of the better places to keep state.

      • silly goose meekah@lemmy.worldOP · 14 hours ago

        As a web dev… URL parameters are definitely not the place to keep state… We’re not in the ’00s anymore. They do have legit uses, but we have JS localStorage nowadays.

        • traches@sh.itjust.works · 12 hours ago

          They have pretty different use cases. localStorage is for when you want persistence across page loads, not necessarily specific to any particular page but specific to a browser. An example would be storing user-selected light or dark mode.

          Query parameters are specific to a page/URL and you get a lot of things for free when you use them:

          • back/forward navigation
          • bookmarking
          • copy-paste to share
          • page level caching
          • access on both server and client

          Query parameters are good for things like searches, filters, sorting, etc.

          • silly goose meekah@lemmy.worldOP · 12 hours ago

            I disagree. I definitely prefer REST APIs that use the path for searches, filters, and sorting. You get most if not all of the benefits of query parameters, and if done correctly it is just as readable.
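
            Roughly, the two styles being compared (hypothetical endpoints):

            query string:  GET /articles?author=meekah&sort=newest
            path-based:    GET /articles/author/meekah/sort/newest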

  • enkers@sh.itjust.works · 1 day ago

    That’d be cool. Whenever I’m sharing a YT link, I’m always a bit suspicious of what info the youtu.be URL is hiding, so I paste it into a browser to get a clean URL.

    Maybe this is silly, but it’d be cool to do that automatically.

    • ivn@jlai.lu · 23 hours ago

      Well, for YouTube it’s quite easy; there are only 4 useful parameters that I can think of: the video id v, the playlist id list and index if it’s a playlist, and the time t if you’re linking to a specific time in the video. Everything else can be removed. Here’s what uBlock Origin with the AdGuard URL Tracking filter list uses:

      ! Youtube
      $removeparam=embeds_referring_euri,domain=youtubekids.com|youtube-nocookie.com|youtube.com
      $removeparam=embeds_referring_origin,domain=youtubekids.com|youtube-nocookie.com|youtube.com
      $removeparam=source_ve_path,domain=youtubekids.com|youtube-nocookie.com|youtube.com
      ||youtube.com^$removeparam=pp
      
    • Flagstaff@programming.dev · 1 day ago

      It rarely ever does anything in my experience.

      Anyway, I built a URL-cleaning script in AutoHotkey, but that’s Windows-only; I, too, am on the hunt for a Linux equivalent. Maybe this could be done in SikuliX or Espanso, via a Python script, but I suck at Python so far.
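
      On Wayland, one possible building block is wl-clipboard’s watch mode, which re-runs a command every time the clipboard changes. A sketch (clean-url.sh is a hypothetical script like the ones earlier in the thread, reading the URL from stdin and calling wl-copy):

      # Run the cleaner on every clipboard change; the new contents are
      # piped to the command's stdin. If the cleaner calls wl-copy, it
      # should leave already-clean URLs untouched to avoid re-triggering
      # itself in a loop.
      wl-paste --watch ./clean-url.sh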

      • cmnybo@discuss.tchncs.de · 1 day ago

        It’s never worked for me either. The ClearURLs addon has a function to copy a clean URL and that works great though. It’s open source, so maybe someone could turn its cleaning function into a program that could be used for the clipboard.