Back in 2008, I wrote a PHP class that fetches an arbitrary URL, parses it, and converts it into a PHP object with different attributes for the different elements of the page. I recently updated it and sent it to a company that wanted a programming example to show I could code in PHP.
I thought someone may well find a use for it – I’ve used the class in several different web scraping applications, and I found it handy. From the readme:
This is a class I wrote back in 2008 to help me pull down and parse HTML pages. I updated it on 14/01/10 to print the results in a nicer way to the command line.

- David Craddock (firstname.lastname@example.org)

/// WHAT IT DOES

It uses CURL to pull down a page from a URL, and sorts it into a 'Page' object which has different attributes for the different HTML properties of the page structure. By default it will also print the page object's properties neatly onto the command line as part of its unit test.

/// FILES

* README.txt - this file
* page.php - the PHP class
* LIB_http.php - a lightweight external library that I used; it is just a very light wrapper around CURL's HTTP functions
* expected-result.txt - output of the unit tests on my development machine
* curl-cookie-jar.txt - this file will be created when you run page.php's unit test

/// SETUP

You will need CURL installed, PHP's DOMXPath functions available, and the PHP command line interface. It was tested with PHP5 on OS X.

/// RUNNING

Use the PHP command-line executable to run the page.php unit tests, i.e.:

$ php page.php

You should see a bunch of information printed out. You can also use:

$ php page.php > result.txt

to send the output to result.txt so you can read it at will.
Here’s an example of one of the unit tests, which fetches this blog’s front page and parses it:
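To give a feel for the approach, here is a minimal standalone sketch of the same idea, not the actual page.php code: the `Page` class name and its `$title`/`$links` properties are illustrative assumptions, and the cURL fetch is replaced by a hard-coded HTML string so the sketch runs without network access.

```php
<?php
// Sketch of the approach: take some HTML (in the real class it would be
// fetched with cURL) and sort it into a "Page" object using DOMXPath.
// Class and property names here are illustrative, not the real API.

class Page {
    public $title;
    public $links = array();

    public function __construct($html) {
        $doc = new DOMDocument();
        @$doc->loadHTML($html);          // suppress warnings on messy HTML
        $xpath = new DOMXPath($doc);

        // Pull out the <title> text, if present
        $titles = $xpath->query('//title');
        if ($titles->length > 0) {
            $this->title = trim($titles->item(0)->textContent);
        }

        // Collect every href on the page
        foreach ($xpath->query('//a[@href]') as $a) {
            $this->links[] = $a->getAttribute('href');
        }
    }
}

// Stand-in for the cURL fetch the real class performs.
$html = '<html><head><title>Front Page</title></head>'
      . '<body><a href="/about">About</a><a href="/blog">Blog</a></body></html>';

$page = new Page($html);
print_r($page);   // dump the object's properties, README-style
```

Running it prints the object's properties to the command line, much as the real unit test does for a fetched page.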
If you want to download a copy, the file is below. If you find it useful, a pingback would be appreciated.