Scraping OK to seed the torrents automatically?


Recommended Posts

I would love to seed the modpack (even the WOT one, though I don't play WOT). I've got a server and uncapped gigabit fiber, so it'd be great, but I don't want to download the torrent file after every update and set it to seed manually. If it were at all possible to have an RSS feed for the torrent file, or magnet links in the RSS feed, that would make it much easier.

 

I sort of jumped the gun; normally I'd ask first, but I wanted a proof of concept before asking, and so... yeah. I made a Python script that, rather than using an RSS feed to get the torrent files, simply scrapes the webpage. I've set it to do that only every 4 hours; if that's too often (or if any scraping is too often), please let me know and I'll change it. It seems to work perfectly, though I haven't seen a version change yet. The idea: download the torrent and seed it; once there's a new version, stop seeding the old one and delete it, then download and seed the new version, and so on.

 

I'm sure I can't be the only one here who might want to do this, but I don't want to put it into the wild without your consent, and the more people who run it, the more scraping of the website, which isn't great. If a lot of people were interested, I could set up an RSS feed populated via scraping, so the site only needs to be scraped by one bot while everyone else uses the feed. Originally I'd planned to scrape, publish an RSS feed, and seed with a separate application, but merging it all into a single Python script works fine without the RSS feed, so adding one at this point would be trivial.
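The "one bot scrapes, everyone else polls the feed" idea above is straightforward to sketch: wrap the scraped torrent link in a tiny RSS 2.0 document with an `<enclosure>` item, which most torrent clients can auto-download from. The channel title and field values here are made-up placeholders:

```python
# Hypothetical sketch: build a minimal RSS 2.0 feed from one scraped
# torrent link, so other seeders poll the feed instead of the site.
from xml.sax.saxutils import escape

def build_rss(title: str, torrent_url: str, pub_date: str) -> str:
    """Return an RSS 2.0 document with a single torrent enclosure item."""
    item = (
        "<item>"
        f"<title>{escape(title)}</title>"
        f'<enclosure url="{escape(torrent_url)}" type="application/x-bittorrent"/>'
        f"<pubDate>{escape(pub_date)}</pubDate>"
        "</item>"
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<rss version="2.0"><channel>'
        "<title>Modpack torrents</title>"  # placeholder channel title
        f"{item}"
        "</channel></rss>"
    )
```

Serving the resulting string from any static host would be enough; the scraper bot just rewrites the file whenever the link changes.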

 

Is this okay? Apologies if not; I can stop immediately if you want. If you'd rather provide an RSS feed for the torrents yourself, that would resolve the issue completely. I don't want to share the script without an okay, but I'd happily send it to Aslain via PM.

  • Administrator

Thanks for your offer, but I don't need more download links; the current links are enough for downloading the modpack exe. I create the .torrent file manually. An auto-script could perhaps help, but it's probably too much effort for a download method that is very unpopular, since most people think torrents are for pirating only and are afraid of them 😛

3 hours ago, Aslain said:

Thanks for your offer, but I don't need more download links; the current links are enough for downloading the modpack exe. I create the .torrent file manually. An auto-script could perhaps help, but it's probably too much effort for a download method that is very unpopular, since most people think torrents are for pirating only and are afraid of them 😛

 

Apologies, I'm a tad confused.

 

I'm not offering to make more download links (though I did offer an RSS feed, which would be effortless; I'd put it on a "contribute" page rather than the download page, along with the code to do what I'm suggesting, though admittedly demand doesn't seem to be that high). I'm just asking if it's okay to continue what I'm doing: automatically downloading the torrent you already host so I can help seed it for as long as that version of the file is relevant, then downloading the next torrent, seeding that, and so on.

 

I'm contributing a decent amount of bandwidth to the people who do download via torrent, those who aren't scared of it, lol. It's nowhere near saturating my upload, but admittedly more than I guessed. I assume that's freeing up at least some of your bandwidth?

 

I take it you're the only initial seeder? I suppose I'm asking whether it's okay for me to keep using a Python script that scrapes the webpage every 4 hours (I can make the interval longer, say 8 hours, if you wish) to check for a new version. If there is one, it's downloaded and seeded; otherwise the current version keeps seeding until the next check.

 

Follow?

 

Edit: And to be clear, I already made it, just to prove it could be done, and sort of went ahead with it, despite this being something I'd always ask permission for before scraping a page on a site like yours. All I can do now is apologize and ask permission to continue.

Edited by Chemputer
clarification