Monday, 30 September 2013

Web Scraper Shortcode WordPress Plugin Review

This short post reviews the WordPress plugin called Web Scraper Shortcode, which lets you retrieve a portion of a web page, or a whole page, and insert it directly into a post. The plugin can be used to pull fresh data or images from other web pages into your WordPress-driven site without visiting them manually. You can find more scraping plugins and software here.

To install it in WordPress go to Plugins -> Add New.
Usage

The plugin scrapes the page content and, if parameters are specified, applies them to the scraped result. To use the plugin, just insert the

[web-scraper ]

shortcode into the HTML view of the WordPress page where you want to display the excerpts of a page or the whole page. The parameters are as follows:

    url – the URL of the page to scrape (self explanatory)
    element – the DOM navigation notation for the element, similar to XPath.
    limit – the maximum number of elements to be scraped and inserted if the element notation points to several of them (like elements of the same class).

The plugin uses DOM (Document Object Model) notation, where consecutive DOM nodes are written as node1.node2; for example: element = ‘div.img’. Scraping a specific element is done through the ‘#’ notation. For example, if you want to scrape several ‘div’ elements of the class ‘red’ (<div class=’red’>…</div>), you need to specify the element attribute this way: element = ‘div#red’.
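
Putting the parameters together, a complete shortcode might look like this (the values are purely illustrative, and the exact attribute quoting may vary):

[web-scraper url='http://example.com/page.html' element='div#red' limit='3']
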
How to find DOM notation?

But for inexperienced users, how is it possible to find the DOM notation of the desired element(s) on a web page? Web Developer Tools are a handy means for this. I would refer you to this paragraph on how to invoke Web Developer Tools in the browser (Google Chrome) and select a single page element to inspect it. As you select it with the ‘loupe’ tool, you’ll see a blue box on the bottom line with the element’s DOM notation.


The plugin content

As someone who works with web scraping, I was curious about the means the plugin uses for scraping. Looking at the plugin code, it turned out that the plugin fetches a web page through the ‘simple_html_dom’ class:

    require_once('simple_html_dom.php');
    $html = file_get_html($url);
    // the code then iterates over the elements matched by the 'element' parameter,
    // up to the set limit, roughly: foreach ($html->find($element) as $el) { ... }

Pitfalls

    Be careful if you put two or more [web-scraper] shortcodes on your site, since downloading the other pages will drastically slow the page load speed. Even if you only want a small element, the PHP engine first downloads the whole remote page and then iterates over its elements.
    You need to remember that many pictures on the web are referenced by relative (shortened) URLs. When such an image gets extracted it may show up as a broken image, since the URL is relative and the plugin does not resolve it against the page’s base URL (a sketch of that resolution step follows this list).
    The error “Fatal error: Call to a member function find() on a non-object …” will occur if you put this shortcode in a text-overloaded post.
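
To illustrate what resolving such a URL involves, here is a minimal Python sketch of the general idea (the plugin itself is PHP, and the URLs below are made up):

    from urllib.parse import urljoin

    base_url = "http://example.com/articles/post.html"  # the page the image was scraped from
    relative_src = "../images/photo.jpg"                # the src attribute as it appears in the HTML
    absolute_src = urljoin(base_url, relative_src)      # -> http://example.com/images/photo.jpg
    print(absolute_src)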

Summary

I’d recommend this plugin for short posts that need to embed elements from other pages. Its usefulness is limited, though.



Source: http://extract-web-data.com/web-scraper-shortcode-wordpress-plugin-review/

Sunday, 29 September 2013

Microsys A1 Website Scraper Review

The A1 scraper by Microsys is a program that is mainly used to scrape websites and extract data in large quantities for later use in web services. The scraper extracts text, URLs, etc., using multiple regexes and saves the output to a CSV file. This tool can be compared with other web harvesting and web scraping services.
How it works
This scraper program works as follows:
Scan mode

    Go to the ScanWebsite tab and enter the site’s URL into the Path subtab.
    Press the ‘Start scan’ button to have the crawler find text, links and other data on the website and cache them.

Important: URLs that you scrape data from have to pass both the analysis filters and the output filters. These filters are defined in the Analysis filters and Output filters subtabs respectively, and they must be set at the website analysis stage (scan mode).
Extract mode

    Go to the Scraper Options tab.
    Enter the Regex(es) into the Regex input area.
    Define the name and path of the output CSV file.
    The scraper automatically finds and extracts the data according to Regex patterns.

The result will be stored in one CSV file for all the given URLs.

It is worth mentioning that the whole set of regular expressions is run against every scraped page.
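
To give a rough idea of what regex-based extraction to CSV amounts to, here is a small Python sketch of the same concept (this is not A1’s internal code; the URLs and the pattern are illustrative assumptions):

    import csv
    import re
    import urllib.request

    # illustrative inputs: two pages and one pattern (page headings)
    urls = ["http://example.com/page1.html", "http://example.com/page2.html"]
    pattern = re.compile(r"<h1[^>]*>(.*?)</h1>", re.S)

    with open("output.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["url", "match"])
        for url in urls:
            html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
            # every regular expression is run against every scraped page
            for match in pattern.findall(html):
                writer.writerow([url, match.strip()])
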
Some more scraper features

Using the scraper as a website crawler also affords:

    URL filtering.
    Adjustment of the speed of crawling according to service needs rather than server load.

If you need to extract data from a complex website, just disable Easy mode by toggling the corresponding button. A1 Scraper’s full tutorial is available here.
Conclusion

The A1 Scraper is good for mass gathering of URLs, text, etc., with multiple conditions set. However, this scraping tool is designed to use only regular expressions, which can greatly increase parsing time.



Source: http://extract-web-data.com/microsys-a1-website-scraper-review/

Friday, 27 September 2013

Visual Web Ripper: Using External Input Data Sources

Sometimes it is necessary to use external data sources to provide parameters for the scraping process. For example, you have a database with a bunch of ASINs and you need to scrape all product information for each one of them. As far as Visual Web Ripper is concerned, an input data source can be used to provide a list of input values to a data extraction project. A data extraction project will be run once for each row of input values.

An input data source is normally used in one of these scenarios:

    To provide a list of input values for a web form
    To provide a list of start URLs
    To provide input values for Fixed Value elements
    To provide input values for scripts

Visual Web Ripper supports the following input data sources:

    SQL Server Database
    MySQL Database
    OleDB Database
    CSV File
    Script (A script can be used to provide data from almost any data source)

To see it in action, you can download a sample project that uses an input CSV file with Amazon ASIN codes to generate Amazon start URLs and extract some product data. Place both the project file and the input CSV file in the default Visual Web Ripper project folder (My Documents\Visual Web Ripper\Projects).
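
For the curious, the ASIN-to-start-URL step boils down to something like the following Python sketch (Visual Web Ripper does this for you internally; the file name and column name are assumptions for illustration):

    import csv

    # assume asins.csv has a single column named "asin" (illustrative file/column names)
    with open("asins.csv", newline="") as f:
        start_urls = ["http://www.amazon.com/gp/product/" + row["asin"]
                      for row in csv.DictReader(f)]

    # the data extraction project is then run once per generated start URL
    for url in start_urls:
        print(url)
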

For further information, please look at the manual topic explaining how to use an input data source to generate start URLs.


Source: http://extract-web-data.com/visual-web-ripper-using-external-input-data-sources/

Thursday, 26 September 2013

Scraping Amazon.com with Screen Scraper

Let’s look at how to use Screen Scraper to scrape Amazon products given a list of ASINs in an external database.

Screen Scraper is designed to be interoperable with all sorts of databases and web languages. There is even a data manager that allows one to make a connection to a database (MySQL, Amazon RDS, MS SQL, MariaDB, PostgreSQL, etc.), and then the scripting in screen-scraper is agnostic to the type of database.

Let’s go through a sample scrape project so you can see it at work. I don’t know how well you know Screen Scraper, but I assume you have it installed, and a MySQL database you can use. You need to:

    Make sure screen-scraper is not running as workbench or server
    Put the Amazon (Scraping Session).sss file in the “screen-scraper enterprise edition/import” directory.
    Put the mysql-connector-java-5.1.22-bin.jar file in the “screen-scraper enterprise edition/lib/ext” directory.
    Create a MySQL database for the scrape to use, and import the amazon.sql file.
    Put the amazon.db.config file in the “screen-scraper enterprise edition/input” directory and edit it to contain proper settings to connect to your database.
    Start the screen scraper workbench

Since this is a very simple scrape, you just want to run it in the workbench (most of the time you want to run scrapes in server mode). Start the workbench, and you will see the Amazon scrape in there, and you can just click the “play” button.

Note that a breakpoint comes up for each item. It would be easy to save the scraped details to a database table or file if you want. Also notice in the database that the “id_status” value changes as each item is scraped.
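
As a bare-bones illustration of saving scraped details to MySQL and flipping the status flag, here is a sketch in standalone Python (not the sample project’s actual scripts; the product table, column names and status value are assumptions):

    import mysql.connector  # pip install mysql-connector-python

    conn = mysql.connector.connect(host="localhost", user="scraper",
                                   password="secret", database="amazon")
    cur = conn.cursor()

    def save_product(asin, title, price):
        # store the scraped fields (hypothetical "product" table)
        cur.execute("INSERT INTO product (asin, title, price) VALUES (%s, %s, %s)",
                    (asin, title, price))
        # mark this ASIN as scraped (assumes a non-zero id_status means "scraped")
        cur.execute("UPDATE asin SET id_status = 1 WHERE asin = %s", (asin,))
        conn.commit()

    save_product("B000000000", "Example product", "19.99")
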

When the scrape is run, it looks in the database for products marked “not scraped”, so when you want to re-run the scrapes, you need to:

UPDATE asin
SET `id_status` = 0

Happy scraping!

P.S. We thank Jason Bellows from Ekiwi, LLC for such a great tutorial.


Source: http://extract-web-data.com/scraping-amazon-com-with-screen-scraper/

Using External Input Data in Off-the-shelf Web Scrapers

There is a question I’ve wanted to shed some light on for a long time: “What if I need to scrape several URLs based on data in some external database?“

For example, recently one of our visitors asked a very good question (thanks, Ed):

    “I have a large list of amazon.com asin. I would like to scrape 10 or so fields for each asin. Is there any web scraping software available that can read each asin from a database and form the destination url to be scraped like http://www.amazon.com/gp/product/{asin} and scrape the data?”

This question prompted me to investigate the matter. I contacted several web scraper developers, and they kindly provided detailed answers, which I have summarized below:
Visual Web Ripper

An input data source can be used to provide a list of input values to a data extraction project. A data extraction project will be run once for each row of input values. You can find additional information here.
Web Content Extractor

You can use the -at"filename" command line option to add new URLs from a TXT or CSV file:

    WCExtractor.exe projectfile -at"filename" -s

projectfile – the file name of the project (*.wcepr) to open.
filename – the file name of the CSV or TXT file that contains URLs separated by newlines.
-s – starts the extraction process.

You can find some options and examples here.
Mozenda

Since Mozenda is cloud-based, the external data needs to be uploaded to the user’s Mozenda account. That data can then easily be used as part of the data extraction process. You can construct URLs, search for strings that match your inputs, or carry several data fields through from an input collection and add data to them as part of your output. The easiest way to get input data from an external source is to use the API to populate data into a Mozenda collection (in the user’s account). You can also input data in the Mozenda web console by importing a .csv file, or import one through our agent building tool.

Once the data is loaded into the cloud, you simply initiate building a Mozenda web agent and refer to that Data list. By using the Load page action and the variable from the inputs, you can construct a URL like http://www.amazon.com/gp/product/%asin%.
Helium Scraper

Here is a video showing how to do this with Helium Scraper:


The video shows how to use the input data as URLs and as search terms. There are many other ways you could use this data, way too many to fit in a video. Also, if you know SQL, you could run a query to get the data directly from an external MS Access database like
SELECT * FROM [MyTable] IN "C:\MyDatabase.mdb"

Note that the database needs to be a “.mdb” file.
WebSundew Data Extractor

Basically, WebSundew allows using input data from external data sources. This may be a CSV file, an Excel file or a database (MySQL, MSSQL, etc.). Here you can see how to do this in the case of an external file, but you can do it with a database in a similar way (you just need to write an SQL script that returns the necessary data).

In addition to passing URLs from external sources, you can pass other input parameters as well (input fields, for example).
Screen Scraper

Screen Scraper is really designed to be interoperable with all sorts of databases. We have composed a separate article where you can find a tutorial and a sample project about scraping Amazon products based on a list of their ASINs.


Source: http://extract-web-data.com/using-external-input-data-in-off-the-shelf-web-scrapers/

Tuesday, 24 September 2013

Selenium IDE and Web Scraping

Selenium is a browser automation framework that includes an IDE, a Remote Control server and bindings in various flavors including Java, .Net, Ruby, Python and others. In this post we touch on the basic structure of the framework and its application to web scraping.
What is Selenium IDE


Selenium IDE is an integrated development environment for Selenium scripts. It is implemented as a Firefox plugin, and it allows browser interactions to be recorded and then edited. This works well for composing and debugging software tests. The Selenium Remote Control is a server, set up for a particular environment, that lets custom scripts control the browsers it launches. Selenium deploys on Windows, Linux, and Mac OS X. You can read here how the various Selenium components are supported by the major browsers.
What Selenium does and how it applies to Web Scraping

Basically, Selenium automates browsers, and this ability can certainly be applied to web scraping. Since browsers (and Selenium) support JavaScript, jQuery and other techniques for working with dynamic content, why not use this mix to your benefit in web scraping, rather than trying to catch Ajax events with plain code? A second reason for this kind of scrape automation is browser-fashion data access (though today this is emulated by most libraries).

Yes, Selenium automates browsers, but how do you control Selenium from a custom script in order to automate a browser for web scraping? There are Selenium libraries (bindings) for PHP and other languages that let scripts call and use Selenium. It is possible to write Selenium clients (using these libraries) in almost any language we prefer, for example Perl, Python, Java, PHP, etc. Those libraries (the API), together with the Java-written server that invokes browsers for actions, constitute Selenium RC (Remote Control). The Remote Control automatically loads the Selenium Core into the browser in order to control it. For more details on Selenium components, refer here.
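
As a minimal illustration of such a client script, here is a sketch using the Python WebDriver bindings (the URL and CSS selector are made-up examples):

    from selenium import webdriver

    driver = webdriver.Firefox()                 # launches a real browser instance
    driver.get("http://example.com/products")    # illustrative URL

    # collect the text of elements matched by a CSS selector (illustrative selector)
    for element in driver.find_elements_by_css_selector("div.product-title"):
        print(element.text)

    driver.quit()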



A tough scrape task for a programmer

“…cURL is good, but it is very basic. I need to handle everything manually; I am creating HTTP requests by hand. This gets difficult – I need to do a lot of work to make sure that the requests that I send are exactly the same as the requests that a browser would send, both for my sake and for the website’s sake. (For my sake because I want to get the right data, and for the website’s sake because I don’t want to cause error messages or other problems on their site because I sent a bad request that messed with their web application). And if there is any important javascript, I need to imitate it with PHP. It would be a great benefit to me to be able to control a browser like Firefox with my code. It would solve all my problems regarding the emulation of a real browser… it seems that Selenium will allow me to do this…” -Ryan S

Yes, that’s what we will consider below.
Scraping with Selenium

In order to create scripts that interact with the Selenium Server (Selenium RC, Selenium Remote WebDriver) or to create a local Selenium WebDriver script, you need to use language-specific client drivers (also called Formatters; they are included in the selenium-ide-1.10.0.xpi package). The Selenium servers, drivers and bindings are available on the Selenium download page.
The basic recipe for scraping with Selenium:

    Use the Chrome or Firefox browser.
    Get Firebug or Chrome Dev Tools (Ctrl+Shift+I) in action.
    Install the requirements (Remote Control or WebDriver, libraries and others).
    Selenium IDE: record a ‘test’ run through a site, adding some assertions.
    Export it as a Python (or other language) script.
    Edit the script (loops, data extraction, db input/output).
    Run the script against the Remote Control.

The short intro slides on scraping tough websites with Python & Selenium are available here (as Google Docs slides) and here (SlideShare).
Selenium components for Firefox installation guide

For how to install the Selenium IDE in Firefox, see here, starting at slide 21. The Selenium Core and Remote Control installation instructions are there too.
Extracting dynamic content using jQuery/JavaScript with Selenium

One programmer is doing a similar thing …

1. launch a Selenium RC (remote control) server
2. load a page
3. inject the jQuery script
4. select the contents of interest using jQuery/JavaScript
5. send them back to the PHP client using JSON.

He finds it particularly easy and convenient to use jQuery for screen scraping, rather than using PHP/XPath.
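
A rough sketch of steps 2–5 using the Python WebDriver bindings instead of a PHP client (the URL, the injected jQuery version and the selection logic are illustrative assumptions):

    import json
    import time
    from selenium import webdriver

    driver = webdriver.Firefox()
    driver.get("http://example.com")                       # 2. load a page

    # 3. inject jQuery (skip this step if the page already ships its own copy)
    driver.execute_script("""
        var s = document.createElement('script');
        s.src = 'https://code.jquery.com/jquery-1.10.2.min.js';
        document.head.appendChild(s);
    """)
    time.sleep(1)  # crude wait for jQuery to load; a real script would poll for window.jQuery

    # 4./5. select the contents of interest and hand them back as JSON
    data = driver.execute_script("""
        return JSON.stringify(jQuery('a').map(function () {
            return {text: jQuery(this).text(), href: this.href};
        }).get());
    """)
    print(json.loads(data))

    driver.quit()
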
Conclusion

The Selenium IDE is a popular tool for browser automation, mostly for software testing, but web scraping techniques for tough dynamic websites can also be implemented with the IDE together with the Selenium Remote Control server. These are the basic steps:

    Record the ‘test’ browser behavior in the IDE and export it as a script in the programming language of your choice.
    The exported script runs against the Remote Control server, which drives the browser to send HTTP requests; the script then catches the Ajax-powered responses and extracts the content.

Selenium-based web scraping is an easy task for small-scale projects, but it consumes a lot of memory, since it launches a new browser instance for each request.



Source: http://extract-web-data.com/selenium-ide-and-web-scraping/

Monday, 23 September 2013

Outsource Data Mining Services to Offshore Data Entry Company

Companies in India offer complete solutions for all types of data mining services.

The data mining and web research services offered help businesses get critical information for their analysis and marketing campaigns. As this process requires professionals with good knowledge of internet or online research, customers can benefit from outsourcing their data mining, data extraction and data collection tasks to utilize these resources at a very competitive price.

In a time of recession every company is careful about cost, so companies are looking for ways to cut costs, and outsourcing is a good option for doing so. It is relevant for businesses of every size, from small firms to large organizations. Data entry is the most common type of outsourced work. To meet demands for high-quality, precise data entry, most corporate firms prefer to outsource data entry services to offshore countries like India.

In India there are a number of companies that offer high-quality data entry work at very low rates. Outsourcing data mining work is a crucial requirement for rapidly growing companies that want to focus on their core areas and keep their costs under control.

Why outsource your data entry requirements?

Easy and fast communication: Flexible communication methods are provided, and providers are ready to talk with you at a time convenient for you. Depending on the workload, a dedicated resource or a whole team will be assigned to drive the project.

Quality with a high level of accuracy: Experienced companies handling a variety of data entry projects develop dedicated quality processes to maintain the best quality of work.

Turnaround time: Fast turnaround as per project requirements to meet your project deadline; dedicated staff can work 24/7 with a high level of accuracy.

Affordable rates: Services are provided at some of the most affordable rates in the industry. To minimize cost, every aspect of the system is customized so that work is handled efficiently.

Outsourcing service providers are companies providing business process outsourcing services that specialize in data mining and data entry. They employ teams of highly skilled and efficient people with a singular focus on data processing, data mining and data entry outsourcing, catering to data entry projects of varied nature and type.

Why outsource data mining services?

360 degree Data Processing Operations
Free Pilots Before You Hire
Years of Data Entry and Processing Experience
Domain Expertise in Multiple Industries
Best Outsourcing Prices in Industry
Highly Scalable Business Infrastructure
24X7 Round The Clock Services

Expert management and teams have delivered millions of processed records to customers from the USA, Canada, the UK, other European countries and Australia.

Outsourcing companies specialize in data entry operations and guarantee the highest quality and on-time delivery at the lowest prices.

Herat Patel, CEO at 3Alpha Dataentry Services, has over 15 years of experience in providing data-related services outsourced to India.

Visit our Facebook Data Entry profile for comments & reviews.

Our services help to convert any kind of hard-copy source, and our data mining services help to collect business contacts, customer contacts, product specifications, etc., from different web sources. We promise to deliver the best quality work and help you excel in your business by letting you focus on your core business activities. Outsource data mining services to India, take advantage of outsourcing and save costs.



Source: http://ezinearticles.com/?Outsource-Data-Mining-Services-to-Offshore-Data-Entry-Company&id=4027029