Windows Batch: How to use GET return values to append only valid software download links to a text file?

  batch-file, curl, http-status-codes, wget, windows

My problem is the following:
I scripted a workaround to automatise the download of program files containing version numbers in its file names like programv1.55.50.zip (http or https sites). The workaround is coming from linear rising version numbers (predecessor is version 1.55.49, actual is 1.55.50 and successor "maybe" 1.55.51) I used excel to build rows 1.55.50 to 1.57.99 and joined them with the download link. As result I get per example:

https://linktofile/programv1.55.49.zip, https://linktofile/programv1.55.50.zip .. https://linktofile/programv1.57.99.zip.

All links I pasted into a linkstoprogram.txt file, one link per line, for use with wget's -i switch.

Using

wget --timestamping --referer foo --recursive --no-parent --input-file=C:\Path\linkstoprogram.txt --show-progress --append-output=C:\Path\%Timestamp%_program.log

on all those links, I get the two existing files programv1.55.49.zip and the current programv1.55.50.zip downloaded, plus massive error logs for all the other, non-available "dummy" files. That's why I want to separate the logs of these queries. The next step sorts the files by date, deletes all files except the newest one, and copies it:

FOR /F "skip=1 eol=: delims=" %%G IN ('dir /b /o-d *.zip') DO SDelete64 -p 3 -r -nobanner "%%G"

copy programv*.zip program.zip

My question, for use in a Windows batch file (I also use the newest CoreUtils for Windows):

How can I examine the returned HTTP status codes, checking the values 200, 302, 403, and 404 for pass or fail?

If all status codes pass, append the complete download link to a text file created within this procedure; if one or more status codes fail, do nothing.

wget -q --spider address 

seems to return a distinct errorlevel only for status code 200; all other errorlevels seem to be 0.
I found some references on this topic but did not get any further:

Here the answer by @DrFloyd5, here the answer by @Jim Davis, and here the answer by @Stuart Siegler (for Windows).

These Bash scripts (especially the answer by @jm666) come close to what I want, but I cannot reproduce them in batch, and I need a text file with the passing download links as the result, not the error codes for the links!

#!/bin/bash
while read LINE; do
  curl -o /dev/null --silent --head --write-out '%{http_code}\n' "$LINE"
done < url-list.txt

and

dosomething() {
        code="$1"; url="$2"
        case "$code" in
                200) echo "OK for $url";;
                302) echo "redir for $url";;
                404) echo "notfound for $url";;
                *) echo "other $code for $url";;
        esac
}

#MAIN program
while read url
do
        uri=($(echo "$url" | sed 's~http://\([^/][^/]*\)\(.*\)~\1 \2~'))
        HOST=${uri[0]:=localhost}
        FILE=${uri[1]:=/}
        exec {SOCKET}<>/dev/tcp/$HOST/80
        echo -ne "GET $FILE HTTP/1.1\nHost: $HOST\n\n" >&${SOCKET}
        res=($(<&${SOCKET} sed '/^.$/,$d' | grep '^HTTP'))
        dosomething ${res[1]} "$url"
done << EOF
http://stackoverflow.com
http://stackoverflow.com/some/bad/url
EOF
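
Translated to Windows batch, a sketch of the status check could look like the following. This assumes curl (for Windows) is on the PATH; linkstoprogram.txt is my input file from above, and validlinks.txt is a hypothetical name for the output file. curl's --write-out '%{http_code}' prints only the status code once --output discards the headers, which is what the inner FOR /F captures:

```batch
@echo off
setlocal EnableDelayedExpansion
REM For every URL in linkstoprogram.txt, issue a HEAD request via curl and
REM capture the bare HTTP status code that --write-out prints.
for /f "usebackq delims=" %%U in ("linkstoprogram.txt") do (
    set "code=000"
    for /f %%C in ('curl --output NUL --silent --head --write-out "%%{http_code}" "%%U"') do set "code=%%C"
    REM Treat 200 and 302 as pass; 403, 404 and everything else as fail.
    if "!code!"=="200" (echo %%U>>"validlinks.txt") else if "!code!"=="302" (echo %%U>>"validlinks.txt")
)
```

Note this relies on curl rather than wget, because curl can report the exact status code to the script, while wget's errorlevel cannot distinguish 302 from 404; when curl cannot reach the host at all, --write-out yields 000, which correctly counts as fail here.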

OR

Does a solution exist that uses wget's -A switch, so that I would not need the workaround at all?

I am interested in both solutions!
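
For the second approach: as far as I understand wget's documentation, --accept/-A only filters file names while following links during a recursive crawl, so it can only replace the guessed link list if the server publishes an index or directory listing that links to the zip files (the host name below is the placeholder from my example, not a real site):

```batch
REM Sketch: recursively crawl the download directory and keep only the
REM versioned zip files; this fails if the server offers no listing page,
REM because -A cannot discover URLs that are not linked anywhere.
wget --recursive --no-parent --accept "programv*.zip" https://linktofile/
```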

Source: Windows Questions
