A friend of mine pointed me to an interesting app this afternoon: Piggy Bank.
The description from the site is:
bq. Piggy Bank is an extension to the Firefox web browser that turns it into a “Semantic Web browser”
It seems to take data from various websites (via screen scraping) and create RDF documents out of it. That information is stored in a central location which can then be queried by the application. For instance, one of the pieces of data it collects is location. So it could collect the locations of all the apartments for rent from an apartment rental website, and it could also collect the locations of the terminals from a bus terminal website. Ultimately you could run a query to find apartments within a given radius of any bus terminal in your area and display the results on a Google map.
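To make the radius query concrete, here is a rough Python sketch of the kind of lookup this would enable once the locations are collected. The apartment and terminal records are made up, and the real data in Piggy Bank would be RDF items rather than plain dicts; this only illustrates the distance filter itself.

<code><pre>
import math

# Hypothetical scraped records: in Piggy Bank these would be RDF items,
# but plain dicts are enough to sketch the radius query.
apartments = [
    {"name": "Apt A", "lat": 45.501, "lon": -73.567},
    {"name": "Apt B", "lat": 45.530, "lon": -73.620},
]
bus_terminals = [
    {"name": "Central Terminal", "lat": 45.500, "lon": -73.565},
]

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    r = 6371.0  # Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def within_radius(apartments, terminals, radius_km):
    """Keep apartments that lie within radius_km of at least one terminal."""
    return [
        apt for apt in apartments
        if any(distance_km(apt["lat"], apt["lon"], t["lat"], t["lon"]) <= radius_km
               for t in terminals)
    ]

print(within_radius(apartments, bus_terminals, 1.0))
</pre></code>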
It’s a very interesting concept. I have looked at semantic web stuff before and understood the basic concepts of the underlying technology, but did not get what it could be used for. This app makes it fall into place a little more than before.
One of the things I still fail to understand about the semantic web is how to go about using the data that is published. No two publishers would model their RDF/OWL the same way, just as two developers would come up with slightly different database structures. Each stores the same data, but in a different way.
Take the apartment example: two different property management companies each publish an RDF document on the web.
Property Management Site 1 comes up with a model like this:
<code><pre>
Apartment:
Address:
line1
line2
city
…
Specs:
square_footage
number_of_bedrooms
…
</pre></code>
And Property Management Site 2 comes up with a model like this:
<code><pre>
Apartment:
addressLine1
addressLine2
city
…
squareFootage
numberOfBedrooms
…
</pre></code>
They both have exactly the same data, just modeled differently. One is more normalized than the other, and the attributes/elements of the documents are named slightly differently.
Perhaps I am missing something, but it seems that if you wanted a central storage location for semantic web data, you would have to do a lot of mapping between one document and another. That would seem to limit the number of sites you could query, because of the time and labor involved in writing the mappings.
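To make that concrete, here is a rough Python sketch of the mapping work, using made-up records that mirror the two models above and plain dicts standing in for the real RDF. Every new site would need its own translation function into whatever common shape the central store settles on.

<code><pre>
# Hypothetical records mirroring the two models above.
site1_record = {
    "Address": {"line1": "123 Main St", "line2": "", "city": "Springfield"},
    "Specs": {"square_footage": 850, "number_of_bedrooms": 2},
}

site2_record = {
    "addressLine1": "456 Oak Ave",
    "addressLine2": "",
    "city": "Shelbyville",
    "squareFootage": 900,
    "numberOfBedrooms": 1,
}

def from_site1(rec):
    """Map Property Management Site 1's nested model to a common shape."""
    return {
        "address_line1": rec["Address"]["line1"],
        "address_line2": rec["Address"]["line2"],
        "city": rec["Address"]["city"],
        "square_footage": rec["Specs"]["square_footage"],
        "bedrooms": rec["Specs"]["number_of_bedrooms"],
    }

def from_site2(rec):
    """Map Property Management Site 2's flat model to the same common shape."""
    return {
        "address_line1": rec["addressLine1"],
        "address_line2": rec["addressLine2"],
        "city": rec["city"],
        "square_footage": rec["squareFootage"],
        "bedrooms": rec["numberOfBedrooms"],
    }

apartments = [from_site1(site1_record), from_site2(site2_record)]
</pre></code>

Two sites means two mapping functions; every additional source adds another, which is exactly where the time and labor pile up.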
I really want to believe in widespread use of this technology, but I fail to see it right now.