Downloading multiple PDFs in a batch can be a time-consuming job depending on your browser, so the easiest way to do this properly and platform-independently is simply to write a small Python script 😃🐍

With requests you can access the website's HTML content, and with BeautifulSoup in combination with wget you can download the PDFs in a few seconds. The script looks as follows:

import os
import shlex
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

# Define the website to download PDFs from
url = 'website to download pdfs'

# Get the website content
r = requests.get(url)

# Create a BeautifulSoup object from the response text
soup = BeautifulSoup(r.text, 'html.parser')

# Loop through all elements of the website with the tag a
for link in soup.find_all('a'):
    href = link.get('href')
    # Download the PDF if the hyperlink is not None and points to a .pdf file
    if href is not None and '.pdf' in href:
        # Resolve relative links against the page URL and download with wget
        pdf_url = urljoin(url, href)
        os.system('wget ' + shlex.quote(pdf_url))
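
If wget is not installed on your system (for example on a default Windows setup), a minimal sketch of a pure-Python alternative is to save each PDF directly with requests instead of shelling out; the filename handling below is just one possible assumption:

import os
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

url = 'website to download pdfs'
r = requests.get(url)
soup = BeautifulSoup(r.text, 'html.parser')

for link in soup.find_all('a'):
    href = link.get('href')
    if href is not None and '.pdf' in href:
        pdf_url = urljoin(url, href)
        # Derive a local filename from the last part of the URL path (assumed naming scheme)
        filename = os.path.basename(urlparse(pdf_url).path) or 'download.pdf'
        # Stream the PDF to disk instead of calling wget
        response = requests.get(pdf_url, stream=True)
        with open(filename, 'wb') as f:
            for chunk in response.iter_content(chunk_size=8192):
                f.write(chunk)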

You can also find a link to the GitHub Gist here.