Monday, 30 November 2009

Eurocities Annual General Meeting

Stockholm Nov 25-28

The Eurocities organisation, which brings together many of the largest and most advanced cities in Europe, held its annual general meeting in Stockholm last week during the Swedish EU Presidency.

The main theme of this year's annual conference was of course the upcoming COP15 meeting in Copenhagen, and the working title was 'Urban Challenges – Sustainable Solutions'. (The entire conference is available online via this link – you have to sign in first, but then you are in; really a nice solution.)

The Mayor of Stockholm, Sten Nordin (M), welcomed the participants and explained that a large part of the programme was dedicated to a number of green projects that had contributed to Stockholm's nomination as the Green Capital of Europe. The Eurocities president, Jozias van Aartsen of The Hague, likewise expressed the hope that the conference could be a clear signal to the world that, now that the majority of the global population lives in cities, the cities themselves are no longer 'just' the problem but very much important players in solving the global challenges, global warming in particular. These are also the key areas where Eurocities has been active and where three awards were to be presented under the headings of 'Innovation', 'Participation' and 'Cooperation'.

One of the most interesting and engaging experts on international health, Professor Hans Rosling of the Karolinska Institutet, then presented a breathtaking tour de force of numbers describing how the world has changed and how all our prejudices dividing the world into 'them' and 'us' are no longer valid.

Look at this shortened version of his speech:

Under the headline 'Sustainable Solutions for Business, Science, People and Culture' the conference split into nine groups and went on bus tours to study practical experiences and cooperation between the City of Stockholm and private and public organisations.

For my part I joined the tour to the 'World Leading ICT Cluster' at Kista, which I have in fact followed from its early beginnings, with Ericsson and IBM as the first partners. Now more than 30,000 jobs have been created in more than 200 ICT companies in Kista. A really impressive and positive experience, and a clear sign that the crisis is almost over in Sweden.

The other tours were labelled:
· 'Future Suburbia' – renewing Järva, a suburb from the 60's built to meet a housing shortage and now in need of renovation
· 'Telecom for Green Cities' – a visit to the Ericsson headquarters, also describing the Intelligent Transportation System in Stockholm, where IBM performed the coordinating role
· 'Intelligent Electricity Project' – smart grids and smart electric cars
· 'Creative Cluster' – sustainable design in a regenerated industrial area
· 'Mobility Flows' – showing how the public bus system was converted to biogas, also describing the intelligent traffic system
· 'The Hammarby Model' – integrated management of waste, water and energy
· 'Heating the City' – showing how 80% of the heating comes from renewable sources
· and finally 'Low Carbon Urban Transport'.

It was a tremendously well-organised 2.5-hour excursion for everybody, and it was really impressive!

The plenary meeting was then led by Mr Ballesteros from the EU Commission on the topic of the 'Covenant of Mayors':

The evening programme called for a gala dinner at Stadshuset, in the Blue Hall, where the famous Nobel Prize ceremonies take place. The evening also featured the awards to the cities that had earlier been presented as nominees in the categories Innovation, Participation and Cooperation.

The winner in the Innovation category was the city of Malaga, which had developed an interactive 3D map to optimise the placement of solar energy installations. Birmingham won the category 'Participation' with an interesting project on building an 'eco-village', and finally the city of Dortmund won the prize for Cooperation with an impressive, yet low-budget, project on a 'Consultation Circle for Energy Effectiveness and Climate Protection'.

Friday morning program was dedicated to 'the Stockholm Appeal':

The mayors of the largest cities in Europe and the United States have co-authored a mutual appeal, "The Stockholm Appeal on Climate Change". The appeal manifests the signatories’ desire for the COP15 meeting in Copenhagen to result in an international climate change agreement.

The chair from the US Conference of Mayors, Mrs Elizabeth Kautz, had already the day before stressed the dedication of the US cities to backing the Stockholm initiative.

The Stockholm Appeal was signed earlier this year by US Conference of Mayors President, Seattle (WA) Mayor Greg Nickels; US Conference of Mayors CEO and Executive Director Tom Cochran; Stockholm Mayor Sten Nordin; and Mayor of The Hague and EUROCITIES President Jozias van Aartsen.

The Appeal was presented to the representatives of the Swedish Presidency of the European Union, as well as of the Obama Administration, by Mayor Kautz and Mayor Nordin. From this ceremony we have the following quote from Seattle's Mayor Nickels:

"U.S. mayors proudly stand with our European counterparts in asking international recognition of the role of local leaders in climate protection. At the forefront of creative strategies, U.S. mayors have forced our national government to act to combat climate change. To date, over 1,000 mayors have signed the USCM climate protection agreement pledging to meet or beat Kyoto goals. Mayoral leadership is a major impetus in climate protection," said Mayor Nickels.

The Swedish Deputy Prime Minister, Maud Olofsson, and the US Chargé d'Affaires in Stockholm, Robert Silverman, underlined the strong intention behind the Stockholm Appeal, and Robert Silverman declared that his president and the Congress would this time do their utmost to reach a politically binding treaty on climate change in Copenhagen.

And finally, before the forum reports, Senator John Kerry appeared by video conference to persuade the European audience of the goodwill of the Congress in Washington, in which he was at least partly successful. We are all now very much focused on what will happen in Copenhagen, but take it as a good sign that President Obama has declared that he will participate.

Stockholm's Mayor Sten Nordin will hand over the Stockholm Appeal to the UN COP15 conference in Copenhagen in December.

For the full wording of the Stockholm Appeal, see this link:

Friday, 23 October 2009

Hospitals of the Future

An important part of the Danish Prime Minister's plans for the future deals with the construction of a number of bright new, so-called super hospitals. When he addressed the Danish Parliament in his opening speech on the first Tuesday of October, an amount of DKK 40-50 billion (USD 9-10 billion) was mentioned.

In Denmark the regions are responsible for hospitals, but they have no tax income of their own, so they are completely dependent on the government's allocation of funds. The five regions are therefore heading into negotiations with the government, and it looks as if they will be treated rather differently; the regions that have been most eager to close down small and inefficient hospitals stand to gain the most.

In any case, the question any Dane should put to his regional politicians, now up for re-election in November, is how they see the future of the hospitals. If this is not discussed, I fear the entire debate will revolve around the brick-and-mortar investment and not around the rethinking of the entire health structure that I feel is highly needed – not least in light of the demographic changes we can foresee: a shortage of doctors and nurses, more elderly people, more chronic diseases, and so on.

So where can we find the inspiration needed before these new super tankers are cast in iron?
Normally you would look to the US.
But the US has its own problems and a very different tradition from Northern European health systems, where health care is an integral part of the welfare system and hence mainly paid for out of tax budgets. (See the notes from the US Commission on the Future of Health Care from 2008: )

There are sources and best-of-breed examples out there that can easily be reached; the web page of DesignIT has a very interesting article on hospitals of the future.
One of the cases that DesignIT describes is the El Camino Hospital (Silicon Valley, CA):

El Camino Hospital is a completely wireless hospital that is well on its way to becoming paper- and film-free. The hospital estimates that the new system has saved it 120,000 dollars a year in medical costs and 300,000 dollars a year in avoidable errors.
The hospital’s wireless technology includes:
· Voice-activated communicators: Nurses and doctors use a small voice-controlled, hands-free device that they wear around the neck to communicate with each other. The device is made by Silicon Valley start-up Vocera Communications Inc.
· Biometric supply cabinets: A device that enables authorised personnel to open doors or cupboards for medicine and other medical items by reading a thumbprint.
· Automated laboratory system: Laboratory tests go through various 'stations' that together make up an automated production line. Staff call it the 'race track'. The automation is provided by Beckman Coulter Inc.
· Tablet PCs and handhelds: Doctors are testing small tablet computers and handheld devices. In time, these will replace the clipboard. The devices are manufactured by Hewlett-Packard Corp.
But this system would appear to be particularly exposed to terror attack or technical failure – unfortunately key considerations in the future. How will medical staff cope if the system goes down? Will they be able to treat patients? While innovative, this hospital must also be fool proof. Otherwise technology will control people rather than vice versa.”

But we don't need to go to the US to find fine examples of forward-oriented hospitals: in Glasgow, for instance, the Homeopathic Hospital has an interesting, holistic approach that could very well be part of the design principles for hospitals for the next several decades.

In Taiwan, the Chang Gung Memorial Hospital, with 8,800 beds, has a very advanced deployment of RFID for controlling logistics, likewise an important part of the future high-tech hospital, where the focus is on accountability, quality and freeing human resources for the health- and care-oriented tasks.
Asia seems to be really pushing the uptake of new technology when they plan for future hospitals;
See for instance these award-winning Asian hospitals: . Not surprisingly, there are a number of references from Singapore, Taiwan and the Philippines, but China and India are beginning to appear on the list as well.
Construction of hospitals is indeed a hot topic in these mega-countries.

In Sweden, Karolinska Sjukhuset, already a leading Nordic hospital in many specialties, is constructing a new site where microelectronics and gene technology are being deployed at a world-leading level. See for instance this description of their use of ICT.
Hospital trends in Europe can be found here:
Since 2006 IBM has been running a Health Competency/Innovation center for the Latin speaking countries out of Barcelona. The center benefits from cooperation with the Hospital of Barcelona, one of Spain's most modern hospitals.

Each of the award-winning hospitals in the US and Asia, recognised as world leaders, contributes to the picture of the hospitals of the future, yet no single hospital seems to have it all.
Given the pressure from the climate debate and the adverse demographics mentioned above, plus not least the current financial crisis, new aspects have to enter the design principles.
And of course IT is going to play a major role. (See this New Scientist Article)

But in line with IBM's recent announcement of the Smarter Planet initiative, I recommend that we look at the entire value chain of 'health' and not only at the hospital and the IT infrastructure: it has to be regarded for what it is, a system of systems.
A hospital is but one (important) station en route from disease to cure, but it does not address the long-range development in human habits, from eating and drinking to travelling and social interaction.

The brick-and-mortar thinking should be replaced by thinking in terms of intelligent buildings:
embedded control of materials, light, electricity, water, sewage and so on.

Around the hospitals we have a huge logistics task: transporting patients in and out, particularly in light of many more outpatient treatments, but also emergency systems, evacuation plans, and logistics for the delivery of goods, linen, visitors, etc.

And we have the internal communication and collaboration between all the different stakeholders, doctors and nurses, including administration, records management and knowledge management – and, not to forget, electronic patient records that can be exchanged, distributed and disseminated in full compliance with data protection rules.

Another subsystem is the flow in and out of highly advanced diagnostic systems, surgery, remote collaboration and robotics. Advanced RFID plus asset management can keep track of hospital beds, instruments, (patients) and all sorts of critical material.
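To make the idea concrete, here is a minimal, illustrative sketch of how an RFID-driven asset registry could answer the question "where was this bed last seen?". All tag IDs and location names are invented; a real deployment would of course involve reader hardware, middleware and a proper database.

```python
from datetime import datetime, timezone

class AssetTracker:
    """Keeps the last known location of each RFID-tagged asset."""

    def __init__(self):
        self._last_seen = {}  # tag_id -> (reader_location, timestamp)

    def on_read(self, tag_id, reader_location):
        # Called whenever a fixed reader (e.g. at a ward entrance) sees a tag.
        self._last_seen[tag_id] = (reader_location, datetime.now(timezone.utc))

    def locate(self, tag_id):
        # Return the location where the tag was last seen, or None.
        entry = self._last_seen.get(tag_id)
        return entry[0] if entry else None

tracker = AssetTracker()
tracker.on_read("bed-0042", "Ward 3 entrance")
tracker.on_read("pump-0117", "Sterilisation unit")
tracker.on_read("bed-0042", "Radiology corridor")  # the bed has moved
print(tracker.locate("bed-0042"))  # Radiology corridor
```

The same last-seen registry serves both logistics (finding a free bed) and accountability (knowing which instruments passed through sterilisation).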

The final major subsystem is of course the entire process of a patient's engagement: not least how outpatient services are provided and how collaboration can be achieved between hospital staff and primary care, and between the patient and local government social services, including telemedicine and holistic treatment of chronic patients in their own homes – something we have been engaged in in far too many 'pilot projects'. Now it is time to deliver.

Or as Obama has put it: it's time to change. This is understandable if you look at the status reports from 2008. So we spoiled Northern European citizens must cross our fingers that his medical plan for the US comes true, because that would create an even stronger global drive towards truly forward-thinking hospitals in the future.

Monday, 19 October 2009

Video Surveillance and Privacy

Copenhagen has this summer seen a number of gang shootings in certain districts. Events like these are certain to raise demands for more video surveillance, more policing and longer prison sentences.
Video surveillance in Denmark has been relatively limited compared to the UK and the US; even banks have been restricted from using outdoor video surveillance. But like the threat of terrorism, the threat from gang wars, however local and limited, might change the attitude towards video surveillance, using all sorts of arguments from 'law-abiding citizens have nothing to hide' and 'it will result in much faster arrests' to 'this will prevent crimes'. (See Camwatch: Preventing Crime 24/7)

According to Gus Hosein, visiting senior fellow at the London School of Economics and a recent speaker at the European privacy seminar in Copenhagen, 'The Net Will Not Forget', more than 44 studies of the effect of video surveillance have shown that its effect in preventing crime is almost non-existent; the most likely reductions are observed in bicycle thefts and thefts from cars, whereas violent crimes do not seem to be reduced at all. The suggestion that video surveillance should be a significant help in identifying criminals after the fact is also doubtful – at least when we consider traditional types of CCTV surveillance systems. (See Wikipedia: )

According to this article in the Telegraph, the result seems to be only one crime solved per year per 1,000 cameras: see . And similarly, according to UK police, CCTV is an utter fiasco:
One of the reasons may be due to the old fashioned technology that is typically used: Up to 80 per cent of CCTV footage seized by police is of such poor quality that it is almost worthless for detecting crimes, it has been claimed.
And yet CCTV accounts for three quarters of the Home Office's total spending on crime prevention, making it the single most heavily-funded crime prevention measure outside the criminal justice system.

A comparison of the number of cameras in each London borough with the proportion of crimes solved there found that police are no more likely to catch offenders in areas with hundreds of cameras than in those with hardly any. In fact, four out of five of the boroughs with the most cameras have a record of solving crime that is below average:

To get an overview of which laws regarding privacy and Video Surveillance are in force in Europe, see the following article on legal regulations of CCTV in Europe:

But even if permission is granted to establish a video surveillance system, this may be very questionable as this report points out:
The report points out that authorisation should always include guidelines on the management and storage of the product of the surveillance, which should be under the supervision of the Data Protection Agency. Further, the following issues exist:
“- The continuing confusion with regard to the need for authorisation when surveillance equipment (such as CCTV) is focused on an individual in a public place. It is not where the CCTV is placed (which may be overt or covert) but the manner in which the camera is used that is determinative of whether the surveillance is covert; and
- Authorising Officers not knowing the capability of the surveillance equipment which they are authorising. For instance, there are differences between video cameras that record continuously and those activated by motion; and between thermal image and infra-red capability. These differences may have an important bearing on how a surveillance operation is conducted and the breadth of the authorisation being granted. Therefore, a simple authorisation for ‘cameras’ is usually insufficient.”

Even if the number of US studies on video surveillance is limited, some material can be found:
Video Surveillance – Is It an Effective Crime Prevention Tool? (NB: this is dated 1997)

The issues around video surveillance are several: the cameras will invariably capture a number of completely innocent people, and the footage is stored for an unknown period of time; the storage may be guarded, maybe not. Sometimes the cameras are directly linked to a control centre, where officers or even private persons can observe and identify people who can then be linked to specific places at specific hours. Who sees this? How is it recorded? It is definitely no excuse that the quality may be poor, as this may even lead to other types of misinterpretation, such as of who is actually shown on the tapes.

In spite of the UK experiences, a number of US cities are installing another type of video surveillance system, based on more intelligent cameras, which seems set to become the next hit: see this ABC Chicago interview: Intelligent IRIS – Video Analytics

The new type of system may have several benefits, for instance that the cameras can be programmed to record only out-of-line situations, whether traffic incidents, lack of movement, or the crossing of an (invisible) border line. This automatically reduces the privacy problem of storing tons of innocent persons' data. Also, the ability (not discussed in the Chicago clip) to mask individuals to prevent recognition of faces is a clear improvement over traditional CCTV systems.
(Intelligent cameras come in many makes, e.g. IP cameras – but the system as such requires a network, an architecture, analytical solutions and the privacy intelligence on top, as in Chicago.)

The so-called Smart Video Surveillance is discussed here:
The article clearly describes the benefits of pre-programmed observation criteria, and if this is combined with dynamic microphones, it may prove useful in assisting in arrests of real criminals and even increase privacy compared to traditional surveillance systems.
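The core of such pre-programmed observation criteria can be illustrated with a toy sketch: keep a frame only when it differs noticeably from the previous one, so footage of unremarkable scenes is never stored at all. This is a deliberate simplification of real video analytics; the tiny "frames" and the threshold below are invented for illustration.

```python
def frame_diff(prev, curr):
    """Mean absolute pixel difference between two equally sized grayscale frames."""
    total = sum(abs(a - b) for row_p, row_c in zip(prev, curr)
                for a, b in zip(row_p, row_c))
    return total / (len(curr) * len(curr[0]))

def record_anomalies(frames, threshold=10.0):
    """Return the indices of frames that differ noticeably from their predecessor.

    Everything below the threshold is simply discarded, which is the
    privacy point: uneventful footage of innocent passers-by is never stored.
    """
    kept = []
    for i in range(1, len(frames)):
        if frame_diff(frames[i - 1], frames[i]) > threshold:
            kept.append(i)
    return kept

static = [[50] * 4 for _ in range(4)]                      # an empty scene
moved = [[50, 50, 200, 200], [50, 50, 200, 200],
         [50] * 4, [50] * 4]                               # something enters
print(record_anomalies([static, static, moved, moved]))    # [2]
```

Real systems would of course use proper computer vision rather than a raw pixel difference, but the storage principle – analyse first, store only the exceptions – is the same.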
Yet another wave of 'indirect' video surveillance systems is rapidly on the rise: congestion charging of cars in and out of city centres, as well as video-assisted toll systems, represents another threat.

See this article on ‘Congestion pricing, the road to the Surveillance State:’
And it really seems that quite a number of cities will have this kind of solution in operation. Here is a short overview of traffic congestion schemes:

In Stockholm, home to one of the most successful congestion-charging systems, the privacy question is addressed by strict regulations on the storage and retention of the data, and by blurring the faces of drivers and passengers on all footage. It could even be improved further, for instance by deploying some of the solutions described in this article on 'congestion pricing that respects drivers' privacy' by Andrew Blumberg (of Stanford) and Robin Chase (of Meadow Networks).
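The core idea in such privacy-respecting proposals – computing the toll inside the vehicle so that positions never leave it – can be sketched roughly as follows. The zones and tariffs are invented, and the published schemes add cryptographic auditing (e.g. random spot-checks) to prevent cheating, which is omitted here.

```python
# Illustrative tariff: zone -> price per crossing (assumed values).
TARIFF = {"inner": 20.0, "middle": 15.0, "outer": 10.0}

class OnBoardUnit:
    """Accumulates the congestion charge locally; positions never leave the car."""

    def __init__(self, tariff):
        self._tariff = tariff
        self._total = 0.0

    def register_crossing(self, zone):
        # Zone detection (e.g. via GPS) happens inside the vehicle;
        # only the corresponding fee is added to a running total.
        self._total += self._tariff[zone]

    def monthly_report(self):
        # The road operator learns the amount owed, not where the car has been.
        total, self._total = self._total, 0.0
        return total

obu = OnBoardUnit(TARIFF)
obu.register_crossing("inner")
obu.register_crossing("outer")
print(obu.monthly_report())  # 30.0
```

Contrast this with camera-based charging, where the operator necessarily collects a time-stamped location record for every vehicle.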

So to sum up:
Video surveillance is erroneously interpreted as trust-enhancing and supposed to calm upset citizens, whereas the truth so far is that it simply doesn't work: it cannot be shown to prevent crimes – violence due to drugs, alcohol, gangs etc. will prevail, with offenders probably either disregarding the risk or moving to other parts of the city – and the quality of CCTV as we know it is not very helpful in police work after a crime has been committed.
We may have new, more intelligent solutions coming to the market, but it must be required that privacy is embedded in these solutions. This goes for masking faces (perhaps making it possible to lift the masking after a ruling based on suspicion), for regulations on storage and retention, and for rules on deployment, particularly of covert cameras, which in any case should be limited in public spaces.
These rules should also be followed even if the purpose is not 'surveillance' but road charging.

Tuesday, 6 October 2009

Security & privacy in biometrics – how do we ensure proportionality?

A basic principle in current European data protection law is proportionality: the level and amount of personally identifiable data you have to reveal to identify yourself must be proportional to the risk and damage incurred if the identity is faked or stolen.

Recent years have seen a growth in tools for identification, mainly in the biometric area, which has created a risk of 'overreacting' by using convenient biometrics where a lower level of authentication would have sufficed. One of the latest strange cases from Denmark is a night club that has been allowed by the Data Protection Agency to take customers' fingerprints at the entrance as a means of guarding against violent behaviour. A horror example of large-scale collection of biometric data is of course the UK's collection of DNA profiles from children, a practice started five or six years ago.

The risks involved are related to the kind of threat you are trying to prevent. Do we need the security tool to reveal the identity and all related information? This may be the case if we have a strong suspicion that a person is directly involved in crime or an act of terror. Or do we only need to know whether a person is 18 years old, so that it is legal to sell him or her alcohol? Similarly, within the health area a nurse or a doctor does not need full access to a patient's medical record if the patient has lost consciousness and needs a blood transfusion – only the key information of blood type and current medication.

So the use of biometrics in itself is one dimension of the game - and the other dimension is what the biometric identification gives access to reveal of PII – Personally Identifiable Information - at the same time or as a consequence of using the biometrics.

The first question of proportionality is then solely related to the 'strength' of the biometric method used. A weak solution is a quick, convenient one which is non-intrusive, non-incriminating and non-discriminating with regard to civil rights, colour of skin, sex, race and religion. For this purpose simple biometrics like a signature (analogue or digitised) may be better than a fingerprint (traditional optical electronic scanning, using a template to generate a simple bit stream), because fingerprints may be seen as incriminating, offensive and police-like, while face recognition reveals race, colour of skin and perhaps sex, and thus does not meet the other criteria.

Signatures may be faked, and (simple) fingerprints can be stolen – in bizarre cases criminals have cut off the fingers of Mercedes 300S owners to defeat the fingerprint ignition mechanism. (This risk is probably smaller in Northern Europe, though.) Or the results may simply be difficult to read properly.

When stronger proof is needed, it is acceptable to rely on methods with higher reliability, like thermal scanning of fingerprints, which measures the distance to the underlying blood, revealing ridges and valleys, again transformed by a fast Fourier transform into a template consisting of 0's and 1's. This prevents the use of fake fingerprints copied onto a strip of tape – and even the rough case of cutting off the Mercedes owner's finger (presumably the blood has stopped circulating, so there is no heat difference). Iris recognition has also been suggested, whereas 3D face recognition at this point still has a higher error rate. It has been suggested to use at least two types of biometrics, as at the US border control, where fingerprints are combined with face recognition.
In any case, the reliability of the identification methodology applied has to be discussed and explained before any solution is deployed. (See this article about reliability.)
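The reliability question can be illustrated with a toy sketch of template matching. Two scans of the same finger never produce identical bit streams, so matching must tolerate a few differing bits, and the chosen tolerance directly trades false rejects (too strict) against false accepts (too loose). The templates below are invented and far shorter than real ones.

```python
def hamming(a, b):
    """Number of differing bits between two equal-length templates."""
    return sum(x != y for x, y in zip(a, b))

def matches(stored, candidate, max_distance=2):
    # The threshold is the reliability knob: lowering it increases
    # false rejects, raising it increases false accepts.
    return hamming(stored, candidate) <= max_distance

enrolled     = [1, 0, 1, 1, 0, 0, 1, 0]
same_finger  = [1, 0, 1, 1, 0, 1, 1, 0]  # one bit of sensor noise
other_finger = [0, 1, 0, 1, 1, 0, 0, 1]  # six bits differ
print(matches(enrolled, same_finger))    # True
print(matches(enrolled, other_finger))   # False
```

This is why vendors quote false-accept and false-reject rates as a pair; one can always be improved at the expense of the other, which is exactly what should be discussed before deployment.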

It may be OK under well-defined circumstances to use a higher level of trusted biometrics, even if they are not 100% proof. The second dimension of the question is then what other PII is stored along with the template or the face geometry, and how these data are protected. This is a question of data stewardship and again should be in proportion to the use of the data. Taking the example of the Danish night club that has been granted permission to store people's fingerprints: these should definitely not be stored with any other information than the purpose requires (is this guy known to have a tendency to quarrel?), NOT his name, address, etc. Even if the extra data are kept encrypted, storing them is not in proportion to the use of the biometric data.

Other types of biometrics are recognition of movement patterns, voice recognition, vein patterns, retina scans – and of course DNA. Whereas the failure rate (both false positives and false negatives) of the first two of these is still relatively high, the other three may reveal unwarranted additional details about the health of the individual; hence they should only be used for forensic purposes and not simply collected arbitrarily or even, as in the UK DNA case, systematically.

An important aspect of using biometrics is also how it will be possible to revoke or change the biometrics as the person changes. Whereas fingerprints remain stable for long periods of life, face geometry changes a lot from childhood to old age, as do walking patterns and voice. People have cosmetic operations on their faces, and accidents may change looks and behaviour, so any system based on biometrics should allow for changes of this kind, and it should be possible to revoke biometrics.

But as technology improves and computing power increases, one solution that could use biometrics while preventing the data from appearing in the open or being communicated could be an ID card with a number of different domains, each holding the relevant information linked to the person: one domain simply stating the age, another for the bank including account numbers, one for driving-licence use, one for medical/health care use, one for insurance use, one for credit cards, and one for public identification purposes.
If this identity card can be activated by a fingerprint reader plus a PIN code, the citizen can then select exactly how much PII he wants to reveal in each situation. This is in line with the PrimeLife recommendations from the IBM Zürich Lab, which has just received a German award for forward-thinking identity management solutions. This type of solution has the advantage that the user is in full control and that no central database is required for the biometric data.
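The principle of such a domain-based card – the holder authenticates locally and discloses only the domain relevant to the situation – could be sketched like this. This is an illustration of the idea, not the PrimeLife protocols (which rely on anonymous credentials); the names and values are invented, and a real card would use a secure element rather than a plain hash.

```python
import hashlib

class IdentityCard:
    """All domains stay on the card; the holder chooses what to reveal."""

    def __init__(self, pin, fingerprint_template, domains):
        # Authentication secret derived from PIN + biometric template;
        # a real card would verify these inside tamper-resistant hardware.
        self._auth = hashlib.sha256((pin + fingerprint_template).encode()).hexdigest()
        self._domains = domains

    def reveal(self, pin, fingerprint_template, domain):
        attempt = hashlib.sha256((pin + fingerprint_template).encode()).hexdigest()
        if attempt != self._auth:
            raise PermissionError("authentication failed")
        # Only the requested domain is disclosed; nothing else leaves the card.
        return self._domains[domain]

card = IdentityCard("1234", "A7F3", {
    "age": "over 18",
    "driving": "category B licence",
    "health": "blood type 0+, anticoagulant medication",
})
print(card.reveal("1234", "A7F3", "age"))  # over 18
```

A bartender checking age thus learns only 'over 18', never the name, address or health data, which is proportionality by design: no central biometric database, and the user decides per transaction.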

In a few days I will discuss the use of video surveillance, what we know about it as a crime prevention tool and what may be a more intelligent way of using it.

Wednesday, 30 September 2009

Privacy, yet again

(The Great Wall of China: can walls protect against attacks on privacy?)

Last week the Danish National IT and Telecom Agency, in cooperation with DI ITEK, the French and German embassies and the Danish Consumer Council, held a conference on privacy under the heading 'The Net Will Not Forget'. Attendance was quite good, larger than at the event 'Pyt med Privatlivet' ('Never Mind Privacy') in May, organised by the Danish Technological Institute, where I had the pleasure of being a speaker.
'The Net Will Not Forget' did not generate much press coverage, and the level of interest has probably not risen much since 2006, when the Danish Board of Technology produced the report 'ICT and Privacy in Europe' as part of the European EPTA cooperation.

That interest in privacy and the protection of personal data is no greater in Denmark is presumably connected with the fact that in some surveys we score highest as 'the happiest population in the world'. In that context one may ask whether this also entitles us to be naively good-natured.

Something would suggest so: not only do we place solid trust in our public authorities, we also happily publish personal details on Facebook, which more than 2 million of us use, and no protests are heard from the public when the politicians at Copenhagen City Hall and in the Danish Parliament back extended video surveillance in Nørrebro, even though no fewer than 44 serious international studies have been unable to show that video surveillance has any preventive effect beyond a marginal reduction in bicycle thefts and break-ins into cars.

Gus Hosein, one of the international speakers at 'The Net Will Not Forget', showed a series of examples of new and refined surveillance techniques that might well make even drowsy Danes react: what started as an offering for visitors to Disneyland now lets parents equip their children with a 'no break' GPS bracelet, so that they can sit at home and see where their child is, receive an alarm when predefined boundaries are crossed, and be sure the child has a panic button to press, just in case. Who said 'curling generation'?

Gus Hosein has previously attracted attention by pointing out that the US, the UK, China and Russia are the world leaders when it comes to surveillance and control of their own and other countries' citizens – and in that order.

But I would claim that if we do not understand the development under way, and get better at both governing and exploiting the technology, Denmark could very soon be on a par with those four surveillance societies – and not necessarily because a deliberate political line leads this way, but because we quite simply and uncritically accept seemingly sensible methods for looking after our children and our cars, protecting local communities against crime, and preventing terrorism on the one hand and tax fraud on the other. At the same time, since we 'have nothing to hide', all the pictures from festive gatherings and unfortunate situations are put on the net because it is 'fun', and because nobody can really grasp the consequences.

As far as we know, this country has so far seen only a few cases of actual identity theft, although the number of forged and stolen Danish passports among illegal immigrants must by now be considerable.

The last time the attitudes of the European nations towards privacy and data security were surveyed, Denmark was in the lowest third, i.e. not particularly motivated to do anything about it. So many people find it perfectly natural when private companies ask for their CPR number (the Danish personal identification number) to set them up in their customer register; some companies even give prizes and rebates to get it. We apparently do not care, despite the fact that sophisticated customer relationship management systems are able to crawl the net, find social networks such as Facebook, LinkedIn, Plaxo, Twitter and YouTube, and compare the individual customer's profiles, networks and even search patterns, so that offers can be tailor-made and, in principle, price differentiation and targeted campaigns carried out.

There is no longer much debate about the cross-matching of registers; instead, work is now deliberately aimed at establishing a single entry point to all public citizen data, which eases the citizen's work (= self-service), but where the removal of earlier barriers also makes it possible to obtain a complete overview of all of a citizen's affairs. Is that a threat? It depends on how the individual experiences it – and on how well the public sector protects personal data against unintended disclosure, not to mention resale of data to private companies.

The most obvious threats come from a number of sources:

  1. Botnets

    We cannot expect everyone to be equally good at protecting their PCs against viruses and hacker attacks. This means we must assume that a number of PCs are infected with Trojan horses that can be remotely controlled and used for cyber attacks – as Estonia experienced the year before last, and Georgia and Kazakhstan last year: massive attacks that paralysed those countries' internet communication with citizens and banks for up to a week.

  2. Hackers/phishing

    These are becoming more sophisticated – the next step is attempts to drain bank accounts by tricking people out of their PINs and bank codes. We have seen a sharply rising number of examples of this, and it will not get better until the new digital signature has been rolled out – now a year behind schedule.

  3. Computing power/storage capacity

    Technology is evolving rapidly – processing and storage capacity now roughly quadruple every two years. This means that translation from one language to another, and the storage of, for example, telephone and Skype conversations, may become the next target for hackers, with translation into Arabic, Russian or American English taking place on the fly.

  4. Wi-Fi and consumer behaviour

    At the same time, it is striking how many private Wi-Fi installations stand wide open, where even ordinary traffic can be intercepted and copied. Yes, we ARE careless. CPR numbers float around the net. And a data-retention agreement with the telecom operators that exempts small collectives is a clear help to potential lawbreakers – which makes the rest of the retention order a nuisance to many and of little use.
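The capacity growth cited above is worth working out. Here is a quick back-of-the-envelope sketch in Python; the function name is my own, while the 'quadruples roughly every two years' figure comes from the text:

```python
# Back-of-the-envelope: if processing and storage capacity quadruples
# roughly every two years, capacity grows by a factor of 4**(years / 2).
def capacity_growth(years):
    """Growth factor after the given number of years."""
    return 4 ** (years / 2)

print(capacity_growth(2))   # 4.0 - one quadrupling
print(capacity_growth(10))  # 1024.0 - about a thousandfold in a decade
```

At that rate, analysis of stored conversations that is infeasible today becomes routine within a decade – which is exactly why the retention debate matters.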

The dilemma is thus clear: we must protect ourselves against terrorism and accept that 'someone', using sound legal instruments, can carry out surveillance; and we must protect ourselves against being defrauded and robbed. It is not a task for the state to protect and certify our private PCs, but it probably is a state task to teach us HOW to protect ourselves. Since the threats keep changing, a proper competence centre is needed – one that can advise citizens on the one hand and politicians and business on the other. This is not 'just' a job for the intelligence services.

In the health sector, privacy is protected more than elsewhere by legislation that requires the patient's consent before information can be passed on. Here there are in fact several dilemmas: first, we must save the patient's life; in addition, we need patient data to conduct research, and therefore need pseudonymised access to data in a controlled way.

We need very fine-grained control of access to patient data – yet at the same time high efficiency in the hospitals – and finally we need clear legislation that keeps pace with technological development: we must prevent individuals' DNA profiles from being handed over to private insurance companies, and should consider legislation to prevent vulnerable patients from being coerced into 'voluntarily' surrendering such data.

Once again we run into the paradox that legislation will inevitably lag behind technological development. The legislator has no chance of foreseeing what science will produce next year: brain scans as the basis for analysing children's development potential? Gene analyses to predict who should marry whom? We have already seen apparently healthy young women having their breasts removed because their DNA profile says the probability of developing cancer in that family is above 80%. So the next dilemma is how to protect citizens against insurance companies, employers – and perhaps themselves!

So we can conclude that we need the following:

  • Clear guidelines for the establishment of public as well as private IT systems, including a mapping of the risk of unintended disclosure of data, an analysis of what the consequences might then be, and measures – both technical and organisational – that reduce the risk of 'leaks' in a balanced way

  • A strategy and a clear plan for informing citizens about how they can protect themselves and where they can get help and advice, and – perhaps – the establishment of test centres where they can have their equipment checked

  • Encouragement of the development of simple encryption methods and of granular data protection, as demonstrated for example in PrimeLife, where the idea is that the individual reveals exactly the subset of his or her identity that is proportionate to the transaction he or she wishes to carry out: Is the person a resident of Birkerød? Of legal age? A holder of a driving licence? Does he or she have a patient record? A bank account? Or does one need to reveal one's full (citizen) identity to the public authorities – but not one's shopping habits and traffic patterns?

  • A debate on how legislation can be designed in a more appropriate way than through micro-management – in short, a degree of Anglicisation of the legislation that allows new technology to develop, with a greater focus on the purpose and intent of the law.
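The PrimeLife-style principle of granular disclosure can be sketched in a few lines of Python. The attribute names and service policies below are my own illustrative assumptions, not the actual PrimeLife data model:

```python
# Hypothetical identity record - illustrative data, not a real model.
IDENTITY = {
    "full_name": "Jens Jensen",
    "resident_of": "Birkerød",
    "age": 42,
    "drivers_license": True,
}

# Each service declares the minimal set of attributes it needs.
POLICIES = {
    "library_card": ["resident_of"],
    "age_check": ["is_adult"],                     # a derived claim, not the age itself
    "car_rental": ["drivers_license", "is_adult"],
}

def disclose(identity, service):
    """Reveal only the subset of identity the service's policy requires."""
    derived = dict(identity, is_adult=identity["age"] >= 18)
    return {attr: derived[attr] for attr in POLICIES[service]}

print(disclose(IDENTITY, "age_check"))   # {'is_adult': True} - the age itself stays private
print(disclose(IDENTITY, "car_rental"))  # {'drivers_license': True, 'is_adult': True}
```

The point is that the verifier learns a yes/no claim ('of legal age') rather than the underlying attribute (the birth date) – the essence of proportionate disclosure.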

In my next blog post I will look at biometrics and the security benefits and risks associated with it.

Wednesday, 5 August 2009

Welfare Technology – Architectural Considerations

(The Continua Architecture for Tele Health Care)
In this blog, let us take a closer look at the requirements for an architectural approach to ‘Welfare Technology’. Without such an approach, and without a set of well-defined open standards and standard interfaces, this area will soon become a jungle. It is already crowded with non-communicating gadgets, robots, semi-automatic toilets, showers, pets and various pieces of kitchen equipment.
First, we need a conceptual framework for the purposes we are pursuing when we talk about ‘welfare’.
Welfare has (at least) the following attributes:

1. Autonomy – being able to manage your own life, movements and daily activities – also for impaired persons
2. Safety/security – feeling secure from crime, intrusion, natural catastrophes and accidents, both in your home and when moving around
3. Wellness – being able to exercise physically and mentally
4. Social acceptance and inclusion – communicating with peers, family and colleagues and being part of a community
5. Disease self-management – being able to control chronic (or other) diseases

If you look at ‘welfare’ in this context, it is obvious that telemedicine is only a fraction of the picture, even if the importance of managing one’s own diseases should of course not be underestimated – particularly as more than half the population over 65 years of age suffers from one or more chronic diseases and will soon consume more than 80% of total health care costs in Western societies.

For each of the 5 welfare attributes, a different set of stakeholders becomes visible besides the citizen/patient herself: those providing services (nurses, health assistants, doctors, social service providers, fitness centres etc.), stakeholders that sell equipment and software, the payers (the citizen herself, the municipality or the insurance company), and peers (friends, family, community members).
It is likewise obvious that stand-alone devices only account for a few of the potential number of devices, monitors, instruments and gadgets that provide the arsenal of welfare technology.

Each set of stakeholders has a specific set of needs regarding the information measured, communicated, analysed and presented to them: the doctor needs a reliable set of measurements, correlation with the historical medical record, and a range of possible future developments in key health indicators. The fire brigade and the rescue team only need a confirmed, fast alert before they move out to help. The patient/citizen herself needs a simple yet trustworthy user interface adapted to her capabilities, experience and physical and mental condition. The social network of friends, family and community needs varying degrees of signal quality and possibly real-time capacity.
So the following principles for the technical solution should be respected:

- User interfaces in accordance with the receiver’s skills and level of comprehension
- ‘Need to know’ tailored to the stakeholder
- Integration with back-end services and databases where needed, presented in context
- Logging and tracing in accordance with audit rules and regulations
- Compliance with security and privacy regulations
- Open interfaces to allow for continuous adoption of new technologies
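The ‘need to know’ principle from the list above can be made concrete with a small sketch: one measurement event from a home monitor, filtered per stakeholder before delivery. The role names and event fields are illustrative assumptions, not part of any standard:

```python
# One hypothetical measurement event from a home health monitor.
EVENT = {
    "patient_id": "P-1001",
    "timestamp": "2009-08-05T09:30:00",
    "blood_pressure": (165, 95),
    "alert": True,
    "location": "home",
}

# Which fields each stakeholder role is entitled to see.
NEED_TO_KNOW = {
    "doctor": {"patient_id", "timestamp", "blood_pressure", "alert"},
    "rescue": {"timestamp", "alert", "location"},
    "family": {"alert"},
}

def view_for(event, role):
    """Return only the fields the role's need-to-know set allows."""
    return {k: v for k, v in event.items() if k in NEED_TO_KNOW[role]}

print(view_for(EVENT, "rescue"))
# {'timestamp': '2009-08-05T09:30:00', 'alert': True, 'location': 'home'}
print(view_for(EVENT, "family"))
# {'alert': True}
```

The same event serves all stakeholders, but each one sees exactly the slice its role requires – the rescue team gets the alert and the location, the family only the alert.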

This has led the CONTINUA organisation to launch a recommendation for a generic architecture that meets its overall objective for welfare technology:
“Fostering independence through establishing a system of interoperable personal telehealth solutions that empower people and organizations to better manage health and wellness.”
The Continua Architecture can be studied at this link:

Vendors of Bluetooth technology have also embraced the demand for interoperability through open standards. These standards are described as Principles of Health Interoperability in the HL7 and SNOMED definitions.
But in my opinion the Continua Architecture does not cover the more holistic approach to welfare technology we are discussing here: CONTINUA focuses too much on the technical interoperability between components and not on the semantic and logical interoperability between the stakeholders, which plays a very important role in the total configuration of a complete solution.
In fact, collaboration and hence web 2.0 connection functionality is a cornerstone of a practical solution, where the client/patient/citizen needs to exchange view points with a variety of assisting helpers.
While portal technology and SOA principles make a lot of sense, primarily for the professional participants in the welfare network, the community side – in particular private persons, relatives, NGOs and peers – has a clear need for the collaboration tools and techniques otherwise known from Web 2.0.

Hence I propose the following generic Welfare Technology Architecture, which also meets the requirements of CONTINUA, HL7 and SNOMED.

And if you try to look for web 2.0 solutions in healthcare, you will be amazed!
Here is just a small sample:

In a later blog we will discuss the economic impact and experiences of Welfare (and Health) Technologies.

Thursday, 23 July 2009

Welfare technology – buzzword or reality?

(Picture from the Aarhus Eldertech Study)

Recently I was invited to participate in a meeting preceding the start-up of a special bachelor programme at the University of Southern Denmark focusing on Welfare Technology. It seems to be the first university-level education aiming specifically at welfare as a technological focus area.

This topic has gained increasing clout during the last couple of years, backed by the Danish Technological Institute, medical organizations and larger cities like Aarhus, Odense and Copenhagen.

It is expected that this represents a new growth area for Danish industry and could play an important role, particularly in meeting the growing need for health and care personnel as a result of the ageing population. Checking the origin of the term on the net, I found to my astonishment that the first mention of Welfare Technology was actually a project in which I had participated as an adviser: in 2006 the Danish Technology Board started a project called ‘New Technology for Elderly Care’. In this project we tried to distinguish between technology that merely focuses on saving physical labour and technology that actually supports elderly and/or handicapped persons in controlling their own (chronic) disease and physical environment, with increased mobility and better social contact. This distinction between objectives seems to have been generally accepted.

In 2007, almost as the Danish project was being finalised, the EU launched a programme for funding joint initiatives among the member states and associated states called Ambient Assisted Living, clearly focused on the elderly. As Commissioner Viviane Reding stated at the start of the programme:

"There is no reason for older people in Europe to miss out on the benefits of new technologies. The solutions and services resulting from this programme will help them to remain active in society as well as staying socially connected and independent for a longer time. "

So far, two calls for proposals have been conducted: the first in 2008 focusing on “ICT based solutions for Prevention and Management of Chronic Conditions of Elderly People”, and the second round this year focusing on “ICT based solutions for Advancement of Social Interaction of Elderly People”. One might expect rather narrowly scoped projects to come out of these specific objectives, but looking for instance at the Persona project from the first call, the narrow framing does not seem to have constrained the project partners: Persona is a genuinely holistic project, taking the various aspects of self-management and personal control as well as social inclusion into consideration. The city of Odense and the Danish MEDCOM organisation are very active here.

But welfare technology has more actors than the handicapped/elderly persons and their relatives: if technology is deployed to improve living conditions and diminish the effects of chronic diseases and variations in health, it has to be seen in the context of the entire health community and the services related to it – potentially embracing all the players in the field.

In order to provide technical proof for this type of technology and for the IT infrastructure needed behind the gadgets and measurement equipment, IBM DK has participated in a number of projects testing both the validity of the technology and the infrastructure – and not least, together with the caretakers and the users, finding out whether the interfaces were understandable and whether the technology was seen as a help. One of the first projects of this kind was the Eldertech project with the city of Aarhus (a brochure describing the project is available for download). The University of Aarhus and the Alexandra Institute also participated in this very successful project. The city of Copenhagen, inspired by the Technology Council report, last year started a project aiming at improving the quality of life for elderly people living at a municipal old age home, Sjölund.

A new organisation has seen the light of day in Denmark: Carenet.

This organisation has municipalities, service providers and technology providers as members and is 100% dedicated to welfare technology and projects around it.

So the level of activity in Denmark seems very high, and I expect one of the reasons behind this is the timing of the administrative reform, in which 275 municipalities were merged into a mere 99, but with increased responsibility for health care and for people suffering from chronic diseases.

But we are still in the early days – we need to take the positive experiences from the first small-scale projects and extend them to a much larger group of citizens. At the same time, we need a set of recommendations for investment in welfare technology: it may be all right to introduce an artificial seal for people suffering from Alzheimer’s and the Wii game console to keep people active, but the real challenge will be to create the true infrastructure for gradually adding new services and gadgets as technology evolves.

For that we need a set of open standards – as the international CONTINUA alliance is aiming at – and we need Danish software providers and manufacturers to work together to create an open yet secure framework and set of building blocks. The Danish organisation for open source vendors, OSL, has launched an initiative to encourage manufacturers of welfare technology to apply open source based software as well as open interfaces. This initiative will lead to seminars and networks with members of Danish industry – and possibly also manufacturers and providers from other countries. If you are interested, please send a note to OSL and you will receive further information.

Thursday, 9 July 2009

The European Union and web 3.0

As early as October 2008, the European Commission launched a consultation on Web 3.0 to try to position Europe, and to communicate a strategy to the member states during the third quarter of 2009 proposing a policy approach “addressing the whole range of political and technological issues related to the move from RFID and sensing technologies to the Internet of Things”.

This indicates that at the time of the initial consultation, the Commission’s view of Web 3.0 was focused more on the ‘Internet of Things’ than on the current understanding of a semantic network of interrelated data – or, as Tim Berners-Lee put it: “The Semantic Web is an extension of the current web in which information is given well-defined meaning, better enabling computers and people to work in cooperation.”

During the autumn, a number of external experts criticised the Commission for a lack of innovation and understanding of web 3.0. See for instance this blog criticising Commissioner Viviane Reding:

So what is the current status of the EU’s view on web 3.0? In January 2009, Gérald Santucci, Head of Unit at DG Information Society and Media, presented his unit’s view on the status of the IoT at the conference on the Future of the Internet. At this point the EU seems to have begun to focus also on the tremendous amounts of data that could become available – either as a direct result of data collected through the IoT or by making data available online – so Santucci in his paper also stressed the need for security around it: “The exercise of data subject rights in the context of intelligent networked environments will require the creation of complex Identity Management systems, based on interoperability, identification, authentication, and authorisation.”

In May, the EU hosted a conference on the Future of the Internet in Prague. This was a step towards a much more comprehensive understanding of the potential promises of the semantic web, and an excellent video was presented there. The Swedish Government, ahead of its EU Presidency beginning July 1, 2009, was also present; through State Secretary Leif Zetterberg it demonstrated a more forward-looking attitude, but again made no mention of the opportunities and issues around the potential for a semantic web in Europe. His focus areas were 1) formulating a new ICT strategy for Europe for 2011-2015, 2) harmonising the use of digital TV frequencies in Europe and creating one common market for the telecommunications industry, and 3) aiming at a secure infrastructure. But not a word on making public data available or on securing the interoperability needed to create a whole new web 3.0 marketplace. Hopefully the Swedish Presidency will redress this during the fall.

In June, the Commission launched an action plan to embrace ‘The Internet of Things’ (IoT), including a comprehensive list of 13 sub-action items, ranging from governance of the IoT through privacy, risk assessments, impact on society, the need for standards, research needed, public-private partnerships for the IoT, RFID issues, institutional awareness and international dialogue, to measurement of the uptake. But there is not much information about one of the key issues in the i2010 strategy for ICT: the creation of a Single European Information Space with a strong emphasis on Content Online, as stated by Ken Ducatel, Head of Unit at DG/INFSO responsible for the follow-on programme to i2010, in a presentation in March 2009.

In May 2009, a group of international experts called upon the EU Commission to increase its focus on solving the inherent problems of the Internet’s current infrastructure and standards:

“The Internet was never designed for how it is now being used and is creaking at the seams. We have connectivity today but it is not ubiquitous; we have bandwidth but it is not limitless; we have many devices but they don’t all talk to each other. We can transfer data but the transfers are far from seamless. We have access to content but it can’t be reused easily across every device. Applications and interfaces are still not intuitive, putting barriers in the way of the Internet’s benefits for many people. And, since security was an afterthought on the current Internet, we are exposed in various ways to spam, identity theft and fraud.”

So the experts are trying to put the focus back on some of the keywords of web 3.0: seamless data exchange, re-use of content, open interfaces. Plus – of course – the challenges raised by the simple fact that we are running out of addresses in IPv4 and need to plan for IPv6 as fast as possible.
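The address arithmetic behind the IPv4-to-IPv6 transition is easy to verify: IPv4 addresses are 32 bits long, IPv6 addresses 128 bits. A two-line Python check:

```python
# IPv4 uses 32-bit addresses, IPv6 uses 128-bit addresses.
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128

print(f"{ipv4_space:,}")    # 4,294,967,296 - fewer than one address per person on Earth
print(f"{ipv6_space:.2e}")  # 3.40e+38 - effectively inexhaustible
```

With well over 6 billion people already using an ever-growing number of connected devices, the 4.3 billion IPv4 addresses simply cannot last.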

The timing of this statement is quite crucial, as the hearings for the next ICT strategic plan – i2015 – are just about to start, and the more consultations with experts and national spokesmen that point to these problems, the more likely it is that the EU will see the potential of the semantic web within a short time frame.

In some areas, particularly health care, it is obvious that consistent use of semantics is an outright necessity for cross-border interoperability and for the exchange of electronic patient records – a long-standing standardisation problem even within each member country. (Denmark, for instance, has at least 5 different standards for EPRs.)

In June 2009, the EU organised a seminar on ‘Ontology-driven interoperability in eHealth’. This marked the midterm of a 3-year project that tries to define the standards needed and the necessary steps towards interoperability in the eHealth area. The picture above illustrates the problem. The organisations involved cover the key players for web 3.0: CEN, CENELEC, ETSI, W3C, ISO, OASIS and the more health care oriented standardisation committees CONTINUA and IHE. The eHealth domain, like the eProcurement domain I mentioned in my earlier blog on web 3.0, seems well on its way towards practical results and methodologies that can be deployed by other domains.

While the EU is gathering its forces and collecting input from all sides on the upcoming priorities, the web 3.0/semantic web environment is slowly progressing, producing simple-to-use, practical tags and definitions by the day. See for instance these 4 examples of practical microformats enhancing the exchange of information on personal identity, calendar entries and more:
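To illustrate what such a microformat looks like in practice, here is a sketch based on hCard, the microformat for personal identity. The `fn`, `org` and `url` class names are genuine hCard property names; the parser, built on the Python standard library, is a deliberately simplified illustration rather than a full microformat implementation:

```python
# Parse a minimal hCard: identity data embedded in ordinary HTML
# via class names, extracted with the stdlib HTMLParser.
from html.parser import HTMLParser

HCARD = """
<div class="vcard">
  <span class="fn">Jens Jensen</span>
  <span class="org">Example A/S</span>
  <a class="url" href="http://example.org">homepage</a>
</div>
"""

class HCardParser(HTMLParser):
    FIELDS = {"fn", "org", "url"}   # hCard property names we extract

    def __init__(self):
        super().__init__()
        self.card = {}
        self._current = None

    def handle_starttag(self, tag, attrs):
        for cls in dict(attrs).get("class", "").split():
            if cls in self.FIELDS:
                self._current = cls

    def handle_data(self, data):
        if self._current and data.strip():
            self.card[self._current] = data.strip()
            self._current = None

parser = HCardParser()
parser.feed(HCARD)
print(parser.card)  # {'fn': 'Jens Jensen', 'org': 'Example A/S', 'url': 'homepage'}
```

The attraction is that the same page serves humans (it renders as ordinary HTML) and machines (any crawler that knows the hCard vocabulary can lift the structured identity data out).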

In the meantime, a crowdsourcing initiative led by David Osimo is under way to create the People’s Declaration on i2015. I suggest you visit the page, register and vote for the top priorities. As of today, the top-rated recommendation is: release government data in free, open, standard, readily available, accessible formats. Or – if you want to participate in an upcoming event on the semantic web and its potential – register at:

Monday, 22 June 2009

Towards web 3.0 ???

Last week, W3C opened a conference in San Jose, California, on semantic technology. Also present at the conference were a number of vendors, all hoping to commercialise products within the realm of semantics. I got curious and decided to do some research to see how far web 3.0 had really come. I put a question up on Facebook, but the reactions I got were along these lines: “Why are we talking about web 3.0? We haven’t even started exploiting web 2.0!”

So where are we?

We are definitely moving away from web 1.0 (connecting computers), through web 2.0 (connecting people), towards what may be called web 3.0 – if that means the stage where you combine the value of the web 2.0 technologies with semantic tools to find your way through all the crap and get the right, or most likely, answers to your questions. To do this you of course need standards, and the status of this work was a major part of last week’s conference.

W3C owns some of the core technologies within the semantic domain, and it is seen as a major turning point these days that the basic technologies have matured: RDF, the Resource Description Framework, and OWL, the Web Ontology Language. Based on the standards developed here, and of course the XML standards, the semantic query language SPARQL has been developed.

Related but independently maintained standards, such as XBRL, are also moving ahead to help clarify definitions and meanings across systems and boundaries.

The whole concept of semantics is particularly important in a multi-language setup like the European Community, and the EU has long promoted the use of semantics as a core technology for providing interoperability across the boundaries of Europe. One of the first pan-European areas where principles of the semantic web are being defined and tested is eProcurement. The PEPPOL Conference in January in Copenhagen kicked off the project, in which a number of IT companies and user representatives participate. The Danish Ministry of Finance and IBM Denmark are both partners here. A general overview of the PEPPOL project can be found here:

PEPPOL is still a development project, although the demonstration phase will begin pretty soon.

But as the array of globally available solutions presented in San Jose in June showed, we may now be on the brink of a real breakthrough, with a multitude of commercial applications becoming available.

Ivan Herman is responsible for W3C’s semantic web programme. He gave a lengthy interview describing his view of the status, and stated it like this:

“Web 3.0 is the idea of having data on the web defined and linked in a way that it can be used by machines not just for display purposes, but for automation, integration and reuse of data across various applications." (From the San Francisco W3C Conference)

Other blogs and discussion fora on the net have been dealing with the topic for quite some time.

Phil Wainewright, in ‘What to expect from web 3.0?’, tries to explain what the main differences and breakthrough concepts of web 3.0 really are. He explains that web 3.0 consists of four layers: the API layer, where service providers give access to content and data – he thinks this layer is fairly mature, with almost no profit left for newcomers. The next layer, the aggregation services layer, contains all the goodies of web 2.0 such as RSS feeds. The third and exciting ‘new’ area is the application services layer, where office, EPR, CMS and other applications and services are being offered on demand, as software as a service. A fourth layer may consist of serviced clients, and this may also be an interesting new business area, according to Phil Wainewright.

As an example of one of the application areas he expects to thrive, he points to the WebEx Office SaaS:

WebEx – an example of a company focusing on delivering SaaS using web 3.0

Searching the web, I also found Richard MacManus lecturing about web 3.0:

“Web 1.0 is characterized by enabling reading, web 2.0 = read/write where everybody becomes a publisher – but web 3.0??”

“Unstructured information will give way to structured information, paving the way to more intelligent computing.”

The essence of his expectations is that web sites will be turned into web services. Whether or not this should be considered a brand-new paradigm is a matter of taste, but in Richard MacManus’ opinion:

“There is a difference in the solutions we are seeing in 2009: more products based on structured data (Wolfram Alpha), more real time – made sadly necessary by the situation in Iran – (Twitter, OneRiot), better filters (FriendFeed, and Facebook which copies FF)”

So if web 3.0 is all about structuring data and making data available, then some of the new semantic techniques for storing relations between entities – triple store technology – such as ‘Peter is a friend of Susan’ or ‘Muhammed is a member of the AK81 gang’ are the way ahead. Much easier than describing EDIFACT rules in the ’90s, and if you could really create these links of links of links and use powerful search tools across the variety of databases, then we would surely see a new level of intelligent computing.
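The triple store idea is simple enough to sketch directly. Below is a toy, in-memory triple store in Python; the facts mirror the ‘Peter is a friend of Susan’ example from the text, while the query interface is a simplified, hypothetical stand-in for a real SPARQL engine:

```python
# A toy triple store: facts are (subject, predicate, object) tuples.
triples = set()

def add(s, p, o):
    triples.add((s, p, o))

def query(s=None, p=None, o=None):
    """Pattern match; None acts as a wildcard, like a SPARQL variable."""
    return [(ts, tp, to) for (ts, tp, to) in sorted(triples)
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

add("Peter", "isFriendOf", "Susan")
add("Susan", "isFriendOf", "Maria")
add("Peter", "memberOf", "ChessClub")

# Who is Peter a friend of?
print(query(s="Peter", p="isFriendOf"))  # [('Peter', 'isFriendOf', 'Susan')]

# Links of links: friends of Peter's friends.
for _, _, friend in query(s="Peter", p="isFriendOf"):
    print(query(s=friend, p="isFriendOf"))  # [('Susan', 'isFriendOf', 'Maria')]
```

A real triple store adds indexing, ontologies (OWL) and a query language (SPARQL), but the underlying data model is exactly this: sets of subject-predicate-object statements that can be chained into links of links.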

Alexander Korth, in his April 2009 article ‘The Web of Data: Creating Machine-Accessible Information’, gives the following example:

One promising approach is W3C's Linking Open Data (LOD) project. The image above (at the top of this blog) illustrates the participating data sets. The data sets are set up to re-use existing ontologies such as WordNet, FOAF and SKOS, and to interconnect them.

The data sets all grant access to their knowledge bases and link to items of other data sets. The project follows basic design principles of the World Wide Web: simplicity, tolerance, modular design, and decentralization. The LOD project currently counts more than 2 billion RDF triples, which is a lot of knowledge. (A triple is a piece of information that consists of a subject, predicate, and object to express a particular subject's property or relationship to another subject.) Also, the number of participating data sets is rapidly growing. The data sets currently can be accessed in heterogeneous ways; for example, through a semantic web browser or by being crawled by a semantic search engine.

In a way this is reassuring, and at the same time it illustrates the amount of work still ahead of us before we reach the ‘promised land’ of web 3.0. It also shows that web 3.0 is a journey, not an end in itself. And finally: we will have to master web 2.0 techniques and embed them into all the traditional services and solutions before we get a user-friendly and intuitive way of accessing all these data. But most importantly, it puts pressure on governments in particular, but also on private companies, to make data available. Tim Berners-Lee gave a very interesting and inspiring pitch on this matter in February this year: Tim Berners-Lee on the next web

Conclusion is that we are still only stumbling at the foot of the mountain, but we have spotted the way ahead.

(And if you are really interested in the topic of digital libraries, maybe you should attend the Conference on Semantic Web for Digital Libraries in Trento in September.)