Monday, June 22, 2009

Towards web 3.0 ???


Last week W3C opened a conference in San Jose, California, on semantic technology. Also present at the conference were a number of vendors, all hoping to commercialize products within the realm of semantics. I got curious and decided to do some research to see how far web 3.0 had really come; I put a question up on Facebook, but the reactions I got were like this: “Why are we talking about web 3.0? We haven’t even started exploiting web 2.0?”

So where are we?

We are definitely moving away from web 1.0 (connecting computers), over web 2.0 (connecting people), to what may be called web 3.0. That is, if web 3.0 means using the value of the web 2.0 technologies plus the semantic tools to find your way through all the crap and get the right and/or most likely answers to your questions. In order to do this you of course need standards, and the status of this work was a major part of last week’s conference.

W3C owns some of the core technologies within the semantic domain, and it is seen as a major turning point these days that the basic technologies have matured: RDF (Resource Description Framework) and OWL (Web Ontology Language). Based on the standards developed here, and of course on the XML standards, the semantic query language SPARQL has been developed.
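To make the relationship between these standards a little more concrete, here is a minimal sketch using the Python library rdflib (my own choice of tool, not something from the conference); the namespace, URIs and conference name are made up for illustration. It stores two RDF statements and then asks a SPARQL question about them:

    # A minimal sketch of RDF plus SPARQL using the Python library rdflib.
    # The namespace, URIs and names are invented for illustration only.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF

    EX = Namespace("http://example.org/demo/")

    g = Graph()
    # Two RDF triples (subject, predicate, object) about a conference.
    g.add((EX.SemTech2009, RDF.type, EX.Conference))
    g.add((EX.SemTech2009, EX.location, Literal("San Jose")))

    # The same graph queried with SPARQL: where does the conference take place?
    results = g.query("""
        PREFIX ex: <http://example.org/demo/>
        SELECT ?place WHERE { ex:SemTech2009 ex:location ?place . }
    """)
    for row in results:
        print(row.place)   # prints: San Jose

The point is simply that the data and the question share the same vocabulary, which is exactly what RDF and SPARQL are standardized for.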

Related, but independently maintained, standards such as XBRL are also moving ahead to help clarify definitions and meanings across systems and boundaries.

The whole concept of semantics is particularly important in a multi-language setup like the European Community, and the EU has long promoted the use of semantics as a core technology to provide interoperability across the boundaries of Europe. One of the first pan-European areas where principles of the semantic web are being defined and tested is eProcurement. The PEPPOL Conference in January in Copenhagen kicked off the project, in which a number of IT companies and user representatives participate. The Danish Ministry of Finance and IBM Denmark are both partners here. A general overview of the PEPPOL project can be found here: http://www.peppolinfrastructure.com/20090129ConnectingtoPEPPOL.pdf

PEPPOL is still a development project, although the demonstration phase will begin pretty soon.

But as the range of globally available solutions presented in San Jose in June showed, we may now be on the brink of a real breakthrough, with a multitude of commercial applications becoming available.

Ivan Herman is responsible for W3C’s semantic web programme. He gave a lengthy interview describing his view of the status. He stated it like this:

“Web 3.0 is the idea of having data on the web defined and linked in a way that it can be used by machines not just for display purposes, but for automation, integration and reuse of data across various applications." (From the San Francisco W3C Conference)

Other blogs and discussion fora on the net have been dealing with the topic for quite some time.

Phil Wainewright, in “What to expect from web 3.0?”, tries to explain what the main differences and concepts for a breakthrough in web 3.0 really are. He explains that web 3.0 consists of four layers:

The API layer – where service providers give access to content and data. He thinks this layer is pretty mature, with almost no profit left for newcomers. The next layer, the aggregation services layer, contains all the goodies of web 2.0 like RSS feeds etc., and the third and exciting ‘new’ area is the application services layer – where office, ERP, CMS, and other applications and services are being offered on demand, software as a service. A fourth layer may consist of serviced clients, and this may also be an interesting new business area, according to Phil Wainewright.

As an example of one of the application areas he expects to thrive, he points to WebEx Office SaaS:

WebEx – an example of a company focusing on delivering SaaS using web 3.0

Searching the web, I also found Richard MacManus lecturing about web 3.0:

“Web 1.0 is characterized by enabling reading, web 2.0 = read/write where everybody becomes a publisher – but web 3.0??”

“Unstructured information will give way to structured information, paving the way to more intelligent computing.”

The essence of his expectations is that web sites will be turned into web services. Whether or not this should be considered a brand new paradigm is a matter of taste, but in Richard MacManus’ opinion:

“There is a difference in the solutions we are seeing in 2009: more products based on structured data (Wolfram Alpha), more real time – made sadly necessary because of the situation in Iran – (Twitter, OneRiot), better filters (FriendFeed, and Facebook which copies FF)”

So if web 3.0 is all about structuring data and making data available, then some of the new semantic techniques for storing relations between entities – TripleStore technology – like ‘Peter is a friend of Susan’ or ‘Muhammed is a member of the AK81 gang’ are the way ahead. Much easier than describing EDIFACT rules in the ’90s, and if you could really create these links of links of links and use powerful search tools across the variety of databases, then we will surely see a new level of intelligent computing.
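Just to show how simple such statements are, here is a hedged sketch of the ‘Peter is a friend of Susan’ kind of relation in a triple store, and of how links of links can be followed with a single query. Again I use the Python rdflib library purely as an example; all the people and URIs are invented.

    # Sketch: storing simple relations as triples and following links of links.
    # The people and URIs are invented for illustration.
    from rdflib import Graph, Namespace
    from rdflib.namespace import FOAF

    EX = Namespace("http://example.org/people/")
    g = Graph()

    g.add((EX.Peter, FOAF.knows, EX.Susan))   # 'Peter is a friend of Susan'
    g.add((EX.Susan, FOAF.knows, EX.Anna))    # 'Susan is a friend of Anna'

    # Links of links: who are the friends of Peter's friends?
    results = g.query("""
        PREFIX foaf: <http://xmlns.com/foaf/0.1/>
        SELECT ?fof WHERE {
            <http://example.org/people/Peter> foaf:knows ?friend .
            ?friend foaf:knows ?fof .
        }
    """)
    for row in results:
        print(row.fof)   # prints: http://example.org/people/Anna

Compare that with writing and mapping EDIFACT segments: each new relation is just one more triple, and the query language follows the links for you.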

Alexander Korts, in his April 2009 article on the Web of Data and creating machine-accessible information, gives the following example:

One promising approach is W3C's Linking Open Data (LOD) project. The image at the top of this blog post illustrates the participating data sets. The data sets themselves are set up to re-use existing ontologies such as WordNet, FOAF, and SKOS and to interconnect them.

The data sets all grant access to their knowledge bases and link to items of other data sets. The project follows basic design principles of the World Wide Web: simplicity, tolerance, modular design, and decentralization. The LOD project currently counts more than 2 billion RDF triples, which is a lot of knowledge. (A triple is a piece of information that consists of a subject, predicate, and object to express a particular subject's property or relationship to another subject.) Also, the number of participating data sets is rapidly growing. The data sets currently can be accessed in heterogeneous ways; for example, through a semantic web browser or by being crawled by a semantic search engine.
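To give an idea of what ‘linking to items of other data sets’ looks like in practice, here is one more small sketch in Python with rdflib: a local data set describes an item and then states, with an owl:sameAs triple, that it is the same thing as an item in another LOD data set. Only the DBpedia URI is a real, existing identifier; everything else is made up for illustration.

    # Sketch: how a local data set can link its items into the LOD cloud.
    # Only the DBpedia URI is real; the rest is invented for illustration.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import OWL, RDFS

    EX = Namespace("http://example.org/cities/")
    dbpedia_copenhagen = URIRef("http://dbpedia.org/resource/Copenhagen")

    g = Graph()
    g.add((EX.Copenhagen, RDFS.label, Literal("Copenhagen")))
    # owl:sameAs says: our 'Copenhagen' is the same resource as DBpedia's.
    g.add((EX.Copenhagen, OWL.sameAs, dbpedia_copenhagen))

    for subject, predicate, obj in g:
        print(subject, predicate, obj)
    print(len(g), "triples here; the LOD project already counts over 2 billion")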

All of this is in a way reassuring, and at the same time it illustrates the amount of work still ahead of us before we reach the ‘promised land’ of web 3.0. It also shows that web 3.0 is a journey and not an end in itself. And finally: that we will have to master web 2.0 techniques and embed them into all the traditional services and solutions before we get a user-friendly and intuitive way of accessing all these data. But most important: it puts pressure on governments (in particular), but also on private companies, to make data available. Tim Berners-Lee gave a very interesting and inspiring pitch on this matter in February this year: Tim Berners-Lee on the next web

The conclusion is that we are still only stumbling around at the foot of the mountain, but we have spotted the way ahead.

(And if you are really interested in the topic of digital libraries, maybe you should attend the Conference on Semantic Web for Digital Libraries in Trento in September.)
