Website Copier
Website Copier #1
This is really simple and easy to make. There are only a few problems and they will be listed at the end.

How does this work?
Basically, it fetches your target website, decodes the response, and writes it to an HTML file.

What does it not do?
  • Save images.
  • Save external CSS.
  • Write to the file if there is a special character in the page (e.g. \u253c).

The only imports needed for this script are urllib and sys. Any others are quite useless to us.

Code:
import urllib, sys

Next we check the user's Python version: if the major version is greater than 2, urllib.request is imported as urlreq; otherwise urllib2 is imported as urlreq. On the Python 2 path we also point input at raw_input, because Python 2's input() evaluates whatever is typed instead of returning it as a string.

Code:
if sys.version_info[0] > 2:
    import urllib.request as urlreq
else:
    import urllib2 as urlreq
    input = raw_input  # Python 2's input() evaluates the text; raw_input returns it as a string

Now we prompt the user for their target.

Code:
target    = input('Website: ')

Now that we've prompted the user, we need to build the full URL to request (so urllib knows which website to steal the HTML from) and the name of the HTML file we'll save to. Note that auto_path is built here as well, but it never actually gets used later.

Code:
request   = 'http://'
website   = (request + target)                                  # full URL to fetch
auto_path = target + '/'                                        # built here but never used below
path_name = (target.replace('.','') + '.html').replace('/','')  # strip dots/slashes to get a filename
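
To make the string handling concrete, here is what those variables end up holding for one hypothetical input (the hostname below is just an example):

Code:
# Hypothetical run: the user types www.example.com/index at the prompt
# target    -> 'www.example.com/index'
# website   -> 'http://www.example.com/index'
# auto_path -> 'www.example.com/index/'
# path_name -> 'wwwexamplecomindex.html'   (dots and slashes stripped)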

All we have to do now is open the website and save it to an HTML file.

The next two lines open the website and read its HTML.

Code:
open_web  = urlreq.urlopen(website)    # send the request
read      = open_web.read().decode()   # read the raw bytes and decode them to text
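
One thing worth knowing: urlopen() raises an exception if the site is unreachable or the page 404s. If you want the script to fail with a clean message instead of a traceback, you can wrap the call. This is just a sketch for the Python 3 branch (it assumes urllib.error; the Python 2 branch would catch urllib2.URLError instead):

Code:
import urllib.error

try:
    open_web = urlreq.urlopen(website)          # send the request
    read     = open_web.read().decode()         # read and decode the response
except urllib.error.URLError as e:              # HTTPError is a subclass, so this catches both
    sys.exit('Could not fetch %s: %s' % (website, e))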

It's time to save your ripped/stolen/peanut buttered website.

Code:
f = open(path_name,'w')
f.write(read)
f.close()
print('Saved.')

We're finished.

Problems and caveats:
  • Special characters in the page make the file write fail (e.g. \u253c). A possible workaround is sketched below.
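
If you want a workaround for that, the usual trick is to open the output file with an explicit UTF-8 encoding instead of relying on the default. Here is a minimal sketch using io.open (which behaves the same on Python 2 and 3); treat it as a suggestion rather than part of the script above:

Code:
import io

# Writing as UTF-8 means characters like \u253c no longer break the write
f = io.open(path_name, 'w', encoding='utf-8')
f.write(read)
f.close()
print('Saved.')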

Full code:
Code:
import urllib, sys

if sys.version_info[0] > 2:
    import urllib.request as urlreq
else:
    import urllib2 as urlreq
    input = raw_input  # Python 2's input() evaluates the text; raw_input returns it as a string

target    = input('Website: ')
request   = 'http://'
website   = (request + target)
auto_path = target + '/'
path_name = (target.replace('.','') + '.html').replace('/','')
open_web  = urlreq.urlopen(website)
read      = open_web.read().decode()

f = open(path_name,'w')
f.write(read)
f.close()
print('Saved.')
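
Save it as something like website_copier.py (the filename is up to you) and run it from a terminal. A session looks roughly like this, after which examplecom.html sits in the same directory:

Code:
$ python website_copier.py
Website: example.com
Saved.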


RE: Website Copier #2
Holy shit, what's this? Ruins actually making a good post? I must be dreaming.

Nice one Duubz, this is what I like to see from you. :)


RE: Website Copier #3
Very nice. I am pleasantly surprised.


RE: Website Copier #4
Nice program Duubz! This might come in useful when I'm using IE, but most of the time I just use Ctrl + S.


RE: Website Copier #5
(03-09-2014, 08:48 AM)Aurora Wrote: Nice program Duubz! This might come in useful when I'm using IE, but most of the time I just use Ctrl + S.

What demon would crawl up your ass to make you use IE? o.O


RE: Website Copier #6
(03-09-2014, 08:49 AM)Duubz Wrote: What demon would crawl up your ass to make you use IE? o.O

School. Dem assholes.


RE: Website Copier #7
(03-09-2014, 08:51 AM)Aurora Wrote: School. Dem assholes.

You have my prayers of survival.


RE: Website Copier #8
(03-09-2014, 08:53 AM)Duubz Wrote: You have my prayers of survival.

Don't you worry. I have Firefox portable, Tor, and Chrome Portable on my USB.
