How I work to get such good results.

Last updated on September 21st, 2024

These days, when sites like Fiverr are full of dubious offers for link lists or promises to blast 2 million links, I want to share how I actually work. This should give you a realistic idea of how many links can be built at once on the Internet, and a benchmark for evaluating other offers.

I gained access to crawler data in December 2023, and since then this data has been my main way of acquiring new posting places. The full dataset is over 2.4 billion web pages stored in one hundred one-gigabyte files.
To read this data, I wrote a script that analyzes it. I run it on 20 VPS servers coordinated through a Redis database, which also immediately solves the proxy problem, since each server works from its own IP address. A full pass over the data takes the scripts 4-7 days.
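To give an idea of how such a setup can look, here is a minimal sketch of a Redis-backed work queue in Python. It is not my production script: the host name, the queue key "crawl:files" and the process_file() function are placeholder assumptions, but the pattern is the same — one machine pushes the data files onto a Redis list, and each VPS pops file names from that list and processes them.

```python
# Minimal sketch of a Redis-backed work queue (not the production script).
# Assumptions: a reachable Redis host, a list key named "crawl:files",
# and a placeholder process_file() standing in for the real analyzer.
import redis

r = redis.Redis(host="redis.example.com", port=6379, decode_responses=True)

def enqueue_files(paths):
    """Run once on the coordinator: push every data file onto the queue."""
    for path in paths:
        r.lpush("crawl:files", path)

def process_file(path):
    """Placeholder for the real work: open the file and scan its pages."""
    print(f"processing {path}")

def worker():
    """Run on each VPS: pop file names until the queue is empty."""
    while True:
        item = r.brpop("crawl:files", timeout=30)  # blocks for up to 30 s
        if item is None:       # queue drained, stop this worker
            break
        _key, path = item
        process_file(path)

if __name__ == "__main__":
    worker()
```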

The scripts read the content of individual pages, looking for ones with a comment form. Recently I also added other searches, such as redirect links, forums, and pages with a Contact Form 7 form. More search types can be added in the future, and I am counting on your suggestions. If you need a specific search, don't hesitate to contact me at: shop(at)hitman.agency.
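As an illustration, this is roughly what such a page check can look like. It is only a sketch under my own assumptions about the markers: WordPress comment forms normally post to wp-comments-post.php, and Contact Form 7 marks its forms with a wpcf7 class; the real scripts may look for different or additional signals, and the redirect pattern below is purely an example.

```python
# Rough sketch of classifying a page by markers in its HTML
# (assumed markers, not the exact rules used by the real scripts).
import re

COMMENT_FORM = re.compile(r"wp-comments-post\.php", re.I)   # WordPress comment form target
CONTACT_FORM7 = re.compile(r"wpcf7", re.I)                  # Contact Form 7 class/ID prefix
REDIRECT_HINT = re.compile(r"[?&](?:url|redirect|goto)=https?", re.I)  # assumed redirect pattern

def classify_page(url: str, html: str) -> list[str]:
    """Return the list of link types this page appears to support."""
    found = []
    if COMMENT_FORM.search(html):
        found.append("comment")
    if CONTACT_FORM7.search(html):
        found.append("contact-form-7")
    if REDIRECT_HINT.search(url):
        found.append("redirect")
    return found
```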

I use Debian 12 and, as you can see, counting all the links takes 1.34 minutes.

To explain: the “wc -l” command counts the number of lines in a file, and the files contain:
zap-links-all.txt – all analyzed links: 2,798,044,377 links
zap-comm-all.txt – all links with a comment form: 155,661,266 links
zap-cont-all.txt – all links with a Contact Form 7 form: 35,565,825 links
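If you want to reproduce such counts without wc, a simple Python equivalent looks like this (the file name is just one from the listing above; streaming 200+ GB this way will of course be slower than wc -l):

```python
# Count lines (links) in a large text file by streaming it in binary chunks.
def count_lines(path: str, chunk_size: int = 1 << 20) -> int:
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            total += chunk.count(b"\n")
    return total

print(count_lines("zap-comm-all.txt"))
```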

It’s true that not all of these links are auto-approve links, but they all have comment forms. Now the most difficult part of my job begins: I have to try posting to all of these links and see which comments get auto-approved. I needed a strategy, because posting to 155 million links one by one does not seem like a good idea. This is where Scrapebox came to my rescue with its ability to filter duplicate domains. Using it, I split the large file of 150 million links into files with a single occurrence of each domain, and create three such files. Then I post to all of these links, combine the resulting files of verified links, truncate them to the root domain, and search the master file for all links belonging to the selected domains. After all this, I am left with only about 8 million links to check. In the end I get a list of about 270-330 thousand links where it is possible to post auto-approved comments.
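For illustration, here is a minimal Python sketch of the dedupe-by-domain step, i.e. roughly what Scrapebox’s duplicate-domain filter does when I build the three single-occurrence files. The input file name and the limit of three URLs per domain come from the description above; everything else is an assumption, and at 155 million links the in-memory domain counter would of course need more care.

```python
# Sketch: split a big URL list into up to 3 files, each containing at most
# one URL per domain (emulating a "remove duplicate domains" filter).
from urllib.parse import urlparse
from collections import defaultdict

MAX_FILES = 3  # three single-occurrence files, as described above

def split_by_domain(in_path: str, out_prefix: str = "pass") -> None:
    seen = defaultdict(int)  # how many URLs of each domain we already kept
    outs = [open(f"{out_prefix}-{i + 1}.txt", "w") for i in range(MAX_FILES)]
    try:
        with open(in_path) as src:
            for line in src:
                url = line.strip()
                if not url:
                    continue
                domain = urlparse(url).netloc.lower()
                n = seen[domain]
                if n < MAX_FILES:  # at most one URL per domain per output file
                    outs[n].write(url + "\n")
                    seen[domain] = n + 1
    finally:
        for f in outs:
            f.close()

# Usage: split_by_domain("zap-comm-all.txt")
```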

The file with all links is 211 GB and the one with comment links is 13 GB 🙂

From the roughly 270k links obtained this way, I create sets of 20k, 45k, 75k and 150k links, which can be purchased in my store. As you can see, I still have plenty of spare places to post, from which I can top up the lists if they run out.

I currently use GSA Search Engine Ranker for posting. Even though it is a very old 32-bit program with a few bugs, it is comfortable enough for me at the moment. Initially I do not use a captcha breaker, because checking all the links already takes quite a long time, so in their original form the lists I create do not require one. However, I save all the information about the captcha-protected pages and will check them at a later date.

GSA SER bugs and Scrapebox bugs are a topic for another article. But for now I want to ask you: have you ever seen Scrapebox with 150 million links loaded?

www.hitman.agency

Equipment I used in this project:

  1. processor: AMD Ryzen 9 7950X
     4.5 GHz, 16 cores, 32 threads
  2. memory: 64 GB
  3. disk: NVMe, 12,600 MB/s
  4. connection: 1 Gb/s