
Friday, August 12, 2011

Cloud Computing – Are the Security Risks Real, or Is Legislation Lagging behind Technical Opportunities?

Fig. 1: Cloud Computing concepts and threats

In Denmark the discussion between security freaks and the avant-garde computer nerds has raged this summer over the question of whether or not the public sector in Denmark should pursue the use of cloud computing aggressively. (My blog – in Danish – covers this discussion.) The cons are led by the IT Council for Security, while the pros – amazingly – are led by the Minister of Science, Charlotte Sahl-Madsen, who sees the opportunity to involve a number of innovative, small companies in utilizing the vast amount of public data until now stored away in departmental systems, bunkers or behind firewalls.

Gartner Group has recently published its 'Hype Cycle' for Cloud Computing, which suggests that the concept is probably at the very top of the hype right now and bound to go through the 'Trough of Disillusionment' before we start to see real large-scale public use of the technology.

The opponents point to 3 reasons why the public sector should be extremely careful about deploying cloud computing at a large scale:

  1. Cloud Computing should comply with all existing legislation concerning data security.

  2. Cloud Computing solutions should follow 'best practices' and commonly accepted standards, and the providers should be certified as guaranteeing them.

  3. Critical data should comply with national and EU emergency regulations – which currently demand that critical data (citizen records, health records, state financial records, military and police records) be guarded and kept within the full physical control of the responsible departments/public organizations.

The Minister of Science requests a new look at data security, and even the EU is expected to propose easing the European directives on data protection to stimulate the use of cloud computing.

But what are the facts, and what are the real security challenges in Cloud Computing? Are there any new technological developments that will ensure that the legislation's intent is honored, even if the rigorous requirement for 'physical control and inspection' cannot be met in its original sense?

First we need to agree on what Cloud Computing is and what it isn't. In February 2009 the UC Berkeley Reliable Adaptive Distributed Systems Laboratory (RAD Lab) published a technical report describing the phenomenon.

They point out that Cloud Computing refers both to the applications delivered as services over the internet and to the hardware and systems software that provide these services. The applications themselves are labelled Software as a Service (SaaS). The underlying hardware and software is described as the cloud. When a cloud is made available in a pay-as-you-go manner, we call it a Public Cloud, and the service being sold is called Utility Computing. In contrast, a Private Cloud refers to internal data centers, or data centers not made available to others than one customer (or a coherent set of customers). The Private Cloud is excluded from the Berkeley definition of Cloud Computing, even if pricing and, to a certain degree, scalability might be at least similar to 'real Cloud Computing'.

The business model of Cloud Computing in its original form thus consists of this 'value chain': the SaaS user, the end user, buys the services – web applications – from a SaaS provider. The SaaS provider can in turn be seen as a Cloud User that buys the underlying hardware and basic software capabilities from a Cloud Provider. Of course, the Cloud Provider and the SaaS provider might be one and the same organization.

The classes of Utility Computing on offer vary, of course, with the individual business models (Amazon, Microsoft Azure, Google, IBM etc.). The variation shows up in the computation models (virtual machines running on 'hypervisors', or a 'farm' of VMs), in the storage models, including variations of database software provisioning, and finally in the network models, from pre-defined IP address spaces to dynamic allocation, fixed topologies, 'elastic zones' etc.

This variation reflects the lack of common standards, but is of course a result of the intense competition in the marketplace. It has led to new terms apart from SaaS that are now in common use: Platform-as-a-Service – typically covering major application areas, SAP for instance – and Infrastructure-as-a-Service. Especially the latter variant leads to the conclusion that the organization that buys IaaS simply buys the basic building blocks and is otherwise left with the responsibility of populating the cloud with its own applications and development tools. This type of cloud business model is especially appealing to software developers, while the PaaS model aims at solving the problems of enterprises (or government institutions).

To confuse the matter more, the terms 'Community Cloud' and 'Hybrid Cloud' have arisen recently – the former simply refers to a private cloud with many tenants (municipalities or agencies, for instance), while 'hybrid' indicates that some of the applications in the portfolio are running in the Public Cloud while other services within the portfolio remain in the Private Cloud.

The application opportunities that have emerged over the last couple of years as a result of this combination of highly scalable, robust 'back end' systems and an aggressive pay-as-you-go pricing model cover interesting areas like mobile interactive applications, large-scale batch processing jobs and heavy analytical computation jobs, as well as more straightforward desktop services solutions that avoid tedious updating and installation of new software on each and every employee's desktop.

The economics of Cloud Computing explain why the Berkeley team did not want to include 'Private Clouds' in the definition: the huge requirements for back-end capacity and the real need for economies of scale make it almost mandatory that the players are very large companies that can afford the risk of large-scale investments – and that, on the other hand, are able to manage and balance load in an extremely efficient way. The ability to absorb sharp changes in capacity demand and seasonal fluctuations, to provide back-up and fail-over facilities etc., and of course to handle traffic bottlenecks in communication equipment, clearly makes the unit costs of Cloud Providers competitive compared to traditional data centers. This is the major argument for US CIO Vivek Kundra's federal cloud strategy 'Cloud First', announced in February 2011, which states that at least 20 billion US $ of the total 80 billion $ spent by federal agencies on IT in 2011 could be moved to the cloud.

Other economic benefits of moving to the cloud, as stated by Vivek Kundra, are improved productivity in application development, fast response to rising needs, better linkage to emerging technologies, and alignment with private-sector innovation.

It is tempting to argue that the huge difference between the European 'cloud sceptics' and Vivek Kundra is that the US is desperately in need of savings and in any case not very concerned about privacy, while the Europeans have more than 10 years of privacy legislation, by now adopted in almost all EU countries, and in at least some countries (led by Germany) this plays a very large role in public opinion. This is why the EU Commission has requested an in-depth study of the legal and technical implications of Cloud Computing in light of the EU Data Protection Directive (95/46/EC). Any changes are not due to occur before spring 2012.

OK, but where are the REAL risks?

Looking at Figure 1, we can identify 4 classes of risk from malicious, unauthorized access:

  • End user access – unauthorized attempts to gain access to read, steal or modify data, denial of service (DoS) etc.

  • Inter-cloud access – external services used to hide malware, worms, Trojan horses etc.

  • Tenant-related 'insider' attempts to gain unauthorized access

  • Cloud Provider administrator 'insider' attacks

Add to these the risk of errors in a shared physical environment if VM instances, and in particular data, are not kept isolated from other tenants'.

Let us try to address the question of how a secure cloud solution could be established following the 3 items mentioned by the Danish Council for IT Security.

  1. Comply with Data Protection laws.
    Their first concern is that cloud and SaaS providers should comply with the current legislation. This has at its root the requirement that it be guaranteed that only authorized persons get access to data containing PID (Personally Identifiable Data).

    You could argue that it is first and foremost the responsibility of the department that collects and stores the data to ensure access rights, and to ensure that the cloud provider follows the requirements in this respect: logs access as well as access attempts, and prevents unauthorized access to customer data, including by its own staff. Ultimately this requires that the government agency that wants to move to the cloud maintains a security organization that issues, or controls the issuance of, access certificates and up-to-date Access Control Lists.
    It is likewise a requirement for any procurement and contracts with cloud providers. (A minimal sketch of such a logged access-control check follows below.)
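
    To make the requirement concrete, here is a minimal, illustrative sketch in Python of such a logged access-control check – the user IDs, resource names and ACL structure are invented for the example, not taken from any actual system:

        import logging
        from datetime import datetime, timezone

        logging.basicConfig(level=logging.INFO)
        log = logging.getLogger("acl-audit")

        # Illustrative access-control list: resource -> set of authorized user IDs.
        ACL = {
            "citizen-records": {"case-worker-17", "registrar-03"},
            "health-records": {"physician-41"},
        }

        def request_access(user_id: str, resource: str) -> bool:
            """Grant or deny access, logging every attempt as the legislation requires."""
            allowed = user_id in ACL.get(resource, set())
            log.info("%s access=%s user=%s resource=%s",
                     datetime.now(timezone.utc).isoformat(),
                     "GRANTED" if allowed else "DENIED", user_id, resource)
            return allowed

        request_access("case-worker-17", "citizen-records")  # granted, and logged
        request_access("intruder-99", "health-records")      # denied, and logged too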

  2. Cloud providers should adopt 'best practices' and standards and be certified.
    Standards and best practices are at this stage only emerging, and certification may be a tough requirement before international standardization agencies and governments agree on them. But there are practices that could help guide the cloud vendor. (See also Google's white paper on cloud security.)

    First of all, the psychological fears of privacy breaches could be allayed if the cloud provider could guarantee against intruders passing the firewall and accessing user data; we already discussed the first level of this. A second layer of attack could be hidden in the cross-application communication that will eventually occur when external clouds exchange services and data with the tenants of one cloud operator. It might be simple things like services for currency exchange calculations, or it might be much more SOA-like cross-domain data analytics – for instance health care research across several EHR systems.
    The more basic protections against things like VLAN spoofing and firewall configuration errors should in any case be part of a contract with a trusted provider, but the Achilles heel is the second type: access to primary data across a number of domains.
    We will discuss this in more detail later to see if there is some technological development ahead.

  3. The 3rd requirement from the Security Council concerns national security regulations. These requirements are probably more difficult to deal with if the legislation is taken literally: that inspectors should be able to physically identify databases and storage devices. That is, of course, unless you are able to prove a) that the data are protected from any 3rd-party reading and interpretation, and b) that even in the case of destruction you are able to reconstruct the data in a secure way. One would think that Vivek Kundra had this in mind, but he came to the conclusion that all major cloud providers are deeply rooted in the US.

    There are, however, some inconsistencies in the current legislation: while it is understood that no foreign power should gain access to a complete person registration system, it is questionable why a financial system and its related data should be considered 'strategic' if the data are duplicated and can be reproduced. This is the position of the Danish Minister of Science. The intention specified in national legislation to keep strategic data within the country seems to be more of a legal problem – especially if the true meaning of the word 'control' can be satisfied in other ways. If you have full backup/recovery of the data, full duplication, and have secured the data with top-class (maybe even dynamic) encryption, you could argue that you, the data owner, have full control.

So it seems that there are technical ways around at least most of the issues – but how realistic is the idea of having all public institutions manage access rights and secure encryption of the data, and still be able to exchange and exploit data across institutional borders in a 'community-like' public cloud environment? Let us assume that all policy aspects are solved, agreements on responsibilities between tenant and cloud provider (and SaaS provider) are put in place, SLAs are established and best practices certified – is the idea of a grand scheme of encrypted data then at all feasible?

Already in 2009 the IBM researcher Craig Gentry suggested a method of what he called fully homomorphic encryption, which would enable users with appropriate access rights to access and calculate on encrypted data across domains, in a way where the final result could be decrypted and produce the same result as if the operation had been performed on the 'raw' data. If this is a feasible and economically practical solution, then most of the obstacles to putting sensitive data in the cloud disappear. But as usual, there is no such thing as a free lunch: at the current state of development the computational requirements for this encryption method are enormous – at present enough to cancel out the economic advantages of Cloud Computing. So we will still have to wait for the costs of computing to come down, and as developments seem to be accelerating even beyond Moore's law, it is now believed that homomorphic encryption will become economically feasible within the next 5 years. And when it is solved, the interim requirement of close control of the cloud provider might be relaxed.
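
To see the principle of computing on encrypted data in action, here is a toy sketch (Python 3.9+) of the Paillier cryptosystem, which is additively homomorphic – a far weaker cousin of Gentry's fully homomorphic scheme, shown only to illustrate the idea. The tiny primes are chosen for readability and give no security whatsoever:

    import math
    import random

    p, q = 17, 19                 # toy primes; real keys use primes of 1024+ bits
    n, n_sq = p * q, (p * q) ** 2
    g = n + 1                     # a standard, simple choice of generator
    lam = math.lcm(p - 1, q - 1)  # Carmichael's lambda for n = p*q

    def L(u):
        return (u - 1) // n

    mu = pow(L(pow(g, lam, n_sq)), -1, n)  # precomputed decryption constant

    def encrypt(m):
        r = random.randrange(1, n)
        while math.gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

    def decrypt(c):
        return (L(pow(c, lam, n_sq)) * mu) % n

    c1, c2 = encrypt(20), encrypt(22)
    c_sum = (c1 * c2) % n_sq      # multiplying ciphertexts adds the plaintexts
    assert decrypt(c_sum) == 42   # same result as operating on the 'raw' data

The point is the last two lines: the addition is performed without ever decrypting the operands, which is exactly the property that would let a cloud provider process data it cannot read.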

But then we still need to agree on and clean up the standards, define and refine policies, and practice and implement large-scale ID management and authorization. In any case, the answer to the question of whether the technical risks of cloud computing are real is yes – and they will be solved. Eventually.


Saturday, August 28, 2010

Index to Recent Blogs

I am not totally satisfied with the way WordPress administers tags and indexes, so for the benefit of my students, please check out this overview of my recent blog posts.

Danish eGovernment Strategy

Danmark 2.0 ?

Danske IT visioner 2015+

OECD efterlyser danske IT visioner

IT History – Fractions from a journey

Farvel til Teknologirådet?

eGovernment

Hvad skal vi med eGovernment ?

Om e-Valg

Hvorfor fejler offentlige IT-projekter?

Why Public IT Projects fail (1)

Why Public IT projects fail (2)
2 Days with Chris Pott – and Goodbye to IT as we know it!

The European Union and Web 3.0

Towards Web 3.0 ???

Public Services 2.0 – EU Seminar

Government 2020

The State of the Future

Impressions from WMSCI

6th Eastern European eGovernment Days

Privacy

Biometrics – The Danish Perspective

Video Surveillance and Privacy

Security and Privacy in Biometrics

Privacy igen-igen

Swedish Surveillance Law again, again – towards a happy ending ?

Reflections on Media Manipulations..

Nej til STASI metoder – Ja til social ansvarlighed

Swedish Surveillance Case – New Comments

When Neighbours are peeping through your Windows...


Security

Cyber-warfare – Is the IT Infrastructure protected?

User Centric Identity Management

Cybercrime Predictions

Security Outlook for the Middle East – some Comments

Digital Signature – A Danish Perspective

Beredskabssymposium – Når krisen kradser


Cities and eGovernment

Eurocities Annual General meeting 2009

Green Cities – Blue Skies ahead?

Eurocities Annual General meeting in Den Haag

Revisiting Dubai – The Eye of the Financial Crisis


Health & Welfare

Hospitals of the future

Welfare Technology – Architectural Considerations

Welfare Technology – Buzzword or Reality?

Sundheds- og Velfærdsteknologi

Københavns Kommunes projekt om det gode ældreliv

Teknologi i Ældreplejen


Greenland

Greenland – Icy Path to Independence

Safarissoq – 'Where the Currents get strong'

Arctic Challenges (1)

Arctic Challenges (2)


Climate and Global Resources, Macro Economics

G20 – Fears and Expectations

Intelligent Energy Systems

IMF – and Weapons of Financial Mass Destruction

Rethinking Macro Economics

Which Crisis ???

Sustainable Energy for 1000 Years

Global Ressources Statusrapport 2008 – Dystopi eller realitet?



Tuesday, October 6, 2009

Security & privacy in biometrics – how do we ensure proportionality?


A basic principle in the current European data protection legislation is proportionality: the level and amount of personally identifiable data that you have to reveal to identify yourself has to be proportional to the risk and danger incurred if the identity is faked or stolen.

Recent years have seen a growth in tools for identification, mainly in the biometric area, which has led to the risk of 'overreacting' by using easy biometrics where a lesser level of authentication could have been used. One of the latest strange cases from Denmark is a night club that has been allowed by the data protection agency to take customers' fingerprints at the entrance as a means of guarding against violent behavior. A horror example of large-scale collection of biometric data is of course the UK's collection of DNA profiles from children, a practice that started 5 or 6 years ago.

The risks involved are related to the kind of threat you are trying to prevent. Do we need the security tool to reveal the identity and all related information? This may be the case if we have a strong suspicion that a person is directly involved in crime or an act of terror. Or do we only need to know whether a person is 18 years old, so that it is legal to sell alcohol to him or her? Similarly, within the health area a nurse or a doctor does not need full access to a patient's medical record if the patient is unconscious and needs a blood transfusion – only the key information of blood type and current medication.

So the use of biometrics in itself is one dimension of the game – the other dimension is how much PII (Personally Identifiable Information) the biometric identification gives access to, at the same time or as a consequence of using the biometrics.

The first question of proportionality is then solely related to the 'strength' of the biometric method used. A weak solution is a quick, convenient solution which is non-intrusive, non-incriminating and non-discriminating with regard to civil rights, color of skin, sex, race and religion. For this purpose a simple biometric like a signature (analog or digitized) may be better than a fingerprint (traditional optical electronic scanning, using a template to generate a simple bit stream), because fingerprints may be seen as incriminating, offensive and police-like, while face recognition reveals race, color of skin and maybe sex, and thus does not meet the other criteria.

Signatures may be faked, and (simple) fingerprints can be stolen – in bizarre cases criminals have cut off the fingers of owners of Mercedes 300S cars to defeat the fingerprint ignition mechanism. (This risk is probably smaller in Northern Europe, though.) Or it may simply be difficult to read the results properly.

When stronger proof is needed, it is acceptable to rely on methods with higher reliability – like thermal scanning of fingerprints, which measures the distance to the underlying blood flow, revealing ridges and valleys, again transformed by a fast Fourier transform into a template consisting of 0's and 1's. This prevents the use of fake fingerprints copied onto a strip of tape – and even the rough case of cutting off the Mercedes owner's finger (presumably the blood has stopped circulating, so there is no heat difference). Iris recognition has also been suggested, whereas 3D face recognition at this point still has a higher error rate. It has been suggested to use at least 2 types of biometrics, as at the US border control, where fingerprints are combined with face recognition. In any case, the reliability of the identification methodology applied has to be discussed and explained before any solution is deployed. (See article about reliability.)
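
To make the template idea concrete, here is a toy sketch with synthetic data: a scanned 1-D ridge profile is reduced to a bit string via a fast Fourier transform and matched by Hamming distance. Real systems use 2-D minutiae and far more robust features; this only illustrates the principle of comparing templates rather than raw biometric data:

    import numpy as np

    def template(signal: np.ndarray, bits: int = 64) -> np.ndarray:
        """Binarize the low-frequency spectrum into a compact 0/1 template."""
        spectrum = np.abs(np.fft.rfft(signal))[1:bits + 1]   # skip the DC term
        return (spectrum > np.median(spectrum)).astype(np.uint8)

    def hamming(a: np.ndarray, b: np.ndarray) -> int:
        return int(np.count_nonzero(a != b))

    rng = np.random.default_rng(0)
    scan = rng.normal(size=512)                      # stand-in for a ridge profile
    rescan = scan + rng.normal(scale=0.05, size=512) # a slightly noisy re-scan

    enrolled, presented = template(scan), template(rescan)
    print("Hamming distance:", hamming(enrolled, presented))  # small value -> match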

It may be OK under well-defined circumstances to use a higher level of trusted biometrics, even if they are not 100% proof. The second dimension of the question is then what other PII is stored with the template or face geometry, and how these data are protected. This is a question of data stewardship, and again it should be in proportion to the use of the data. Taking the example of the Danish night club that has been granted permission to store people's fingerprints: these should definitely not be stored with any information beyond the purpose – is this guy known to have a tendency to quarrel? – NOT his name, address etc. Even if it is kept encrypted, anything more is not in proportion to the use of the biometric data.

Other types of biometrics are recognition of movement patterns, voice recognition, vein patterns, retina scans – and of course DNA. Whereas the failure rates (both false positives and false negatives) of the first 2 of these are still relatively high, the other 3 may reveal unwarranted additional details about the health of the individual; hence these should only be used for forensic purposes and not just collected arbitrarily or even – as in the UK DNA case – systematically.

An important aspect of using biometrics is also how it will be possible to revoke or change them as the person changes. Whereas fingerprints remain stable for a long period of life, face geometry changes a lot from childhood to old age, as do walking patterns and voice. And people have cosmetic operations on their faces, and accidents may change looks and behavior, so any system based on biometrics should allow for changes of this kind, and it should be possible to revoke biometrics.

But as the technology improves and computing power is increasing, one solution which could use biometrics and at the same time prevent the data from occurring in the open space or being communicated could be to have an ID card with a number of different domains, each holding the relevant information linked to the person: one domain simply stating the age, another for the bank including bank account numbers, one for driving license use, one for medical/health care use, one for insurance use, one for credit cards, one for public identification purposes.
If this identity card can be activated by a fingerprint reader plus a pin code, the citizen could then select exactly how much PII he wants to reveal in the situation. This is in line with the P
rimeLife recommendations from IBM Zürich Lab, that has just got the German award for forward think identity management solution. This type of solution has the advantage that the user is in full control and that no central database is required for the biometric data.
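
To make the multi-domain card idea concrete, here is an illustrative sketch; the domain names, attribute values and the PIN-plus-fingerprint check are my own assumptions for the example, not the PrimeLife design. The holder unlocks exactly one domain at a time, so a bartender checking the age never sees the bank account:

    from dataclasses import dataclass, field

    @dataclass
    class IdentityCard:
        pin: str
        domains: dict = field(default_factory=lambda: {
            "age":     {"over_18": True},              # all a bartender needs
            "bank":    {"account": "1234567890"},      # dummy account number
            "driving": {"licence_class": "B"},
            "health":  {"blood_type": "O+", "medication": []},
        })

        def reveal(self, domain: str, pin: str, fingerprint_ok: bool) -> dict:
            """Unlock a single domain after fingerprint + PIN; nothing else leaks."""
            if not (fingerprint_ok and pin == self.pin):
                raise PermissionError("authentication failed")
            return self.domains[domain]

    card = IdentityCard(pin="4711")
    print(card.reveal("age", "4711", fingerprint_ok=True))  # {'over_18': True}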

In a few days I will discuss the use of video surveillance, what we know about it as a crime prevention tool and what may be a more intelligent way of using it.


Sunday, February 15, 2009

Cyberwarfare – Is the IT-infrastructure protected?


Already during his election campaign, Barack Obama stressed the need to improve the protection of the US against cyber warfare and to set up an organisation to raise the level of protection.

As cited by Wired in July 2008, he said:


“As President, I'll make cyber security the top priority that it should be in the 21st century. I'll declare our cyber-infrastructure a strategic asset, and appoint a National Cyber Advisor who will report directly to me.”


Once the election was over, Obama and Biden declared on the transition website for the President-elect what the policy would be when they took office. The key areas of an increased effort against cyberwar would be (of course) to defeat terrorism worldwide, to prevent nuclear terrorism, to strengthen American bio-security and to protect information networks.

They stressed at the same level the importance of improving intelligence capacity while protecting civil liberties (a different recipe than George Bush's!), plus the objectives of protecting American citizens from terrorist attacks and natural disasters, protecting American infrastructure, and modernizing the aging American infrastructure.

Particularly the policy to 'Protect Our Information Networks' seems to be a new approach, as the new President is knowledgeable and capable of understanding what cyber warfare could really do to a modern society, where most of the infrastructure – from communication networks and media, to trains and airplanes, to power stations, sewers and drinking water – is controlled by IT systems that may be the target of cyber attacks.

As also stated by Frontpage in January 2009, this policy was further defined by these focus areas:


Strengthen Federal Leadership on Cyber Security:
Declare the cyber infrastructure a strategic asset and establish the position of national cyber advisor who will report directly to the president and will be responsible for coordinating federal agency efforts and development of national cyber policy.


Initiate a Safe Computing R&D Effort and Harden our Nation’s Cyber Infrastructure:

Support an initiative to develop next-generation secure computers and networking for national security applications. Work with industry and academia to develop and deploy a new generation of secure hardware and software for our critical cyber infrastructure.


Protect the IT Infrastructure That Keeps America’s Economy Safe:

Work with the private sector to establish tough new standards for cyber security and physical resilience.


Prevent Corporate Cyber-Espionage:

Work with industry to develop the systems necessary to protect our nation’s trade secrets and our research and development. Innovations in software, engineering, pharmaceuticals and other fields are being stolen online from U.S. businesses at an alarming rate.


Develop a Cyber Crime Strategy to Minimize the Opportunities for Criminal Profit:

Shut down the mechanisms used to transmit criminal profits by shutting down untraceable Internet payment schemes. Initiate a grant and training program to provide federal, state, and local law enforcement agencies the tools they need to detect and prosecute cyber crime.


Mandate Standards for Securing Personal Data and Require Companies to Disclose Personal Information Data Breaches: Partner with industry and our citizens to secure personal data stored on government and private systems. Institute a common standard for securing such data across industries and protect the rights of individuals in the information age.


Where did Obama get these ideas from? Undoubtedly from having studied what took place in Estonia in 2007, where a Russian-led protest against the relocation of a Russian war memorial from the centre of Tallinn to the outskirts escalated into denial-of-service attacks that blocked the better part of the Estonian financial sector, as well as most of the central and local government websites, for weeks. This in turn led to NATO’s profound interest in extending the defence lines of the alliance to include cyber war, and according to Computerworld, May 2008, NATO is launching a centre to detect, prevent and protect member states against cyber attacks.

The official NATO statement can be found here: http://www.nato.int/docu/update/2008/05-may/e0514a.html

But President Obama’s very first task was to launch an in-depth analysis of the security threats against the US, due 2 months from now, according to The Register:


“This 60-day interagency review will develop a strategic framework to ensure that U.S. Government cybersecurity initiatives are appropriately integrated, resourced and coordinated with Congress and the private sector.”


The importance of the coordination with the private sector is that the threat is not seen as directed only towards public net services, and the analysis will be a 360-degree review. It is also clearly stated that privacy laws and regulations and respect for personal integrity have to be maintained in parallel with an increased level of security against cyber attacks.


The German Bundeswehr is also building up capacity: a national center of 76 highly skilled specialists to oversee, predict and prevent cyber warfare. An article in Der Spiegel of February 7, 2009, states that the reason is an increased number of cyber attacks against German ministries and public websites.

Other very recent examples of cyber warfare are the cyberattacks on Israel in response to the attack on Gaza, and even in the former Soviet republic of Kyrgyzstan the government felt the anger of hackers (presumably either from inside Russia or – worst case – maybe endorsed by Russia) trying to bring down the feeble Kyrgyz network before the government had conceded anything to the US wish for an airbase in the country. As stated by Computerworld UK:

“Since 18 January, the two biggest Internet service providers (ISPs) in Kyrgyzstan have been under a massive, sustained distributed denial-of-service attack,” said Don Jackson, the director of threat intelligence for SecureWorks.

The attack followed almost the same pattern as the cyberattack against Georgia during the conflict with Russia in 2008. It seems the hackers are getting their act together, with a lot of practice by now!

The Danish super-blogger Dorte Toft noted that when 8,000 Danish net bank users were locked out of their net bank because their PCs had been infected by the so-called ‘Downadup’ worm (also known as Conficker), this clearly illustrated the vulnerability of the Danish IT infrastructure. As the worm had a clearly identifiable Russian origin, we may be looking at the same family of super hackers as in the other recent cases.

In a later blog dated February 8, 2009, Dorte Toft notes that Denmark clearly lacks a central organisation to supervise and protect us against massive cyber attacks, and that the way the financial sector handled the case of 8,000 infected PCs is a clear sign of misguidedly restrictive communication with the general public.

In fact this is not the first time attention has been drawn to the need for a more coordinated defence against cyber attacks in Denmark. In May 2004 the Danish Technology Council published a report on a project called ‘How vulnerable is the Danish IT Infrastructure?’. The report concluded that the threat is real, that it is increasing, and that it has to be met with a ‘total defence’ view led by a coordinating organisation. A number of other practical suggestions were proposed, but only a few of them seem to have been followed. So it seems that Dorte Toft is absolutely right in her conclusion, and it is about time that the warnings from 2004 are heard – and followed. It is not enough to rely on a NATO centre; we need a coordinating, practical body with a sufficient level of expertise and with the legal power to coordinate sector organisations like Finansrådet (the Danish Bankers’ Association), but also the telecom sector, the energy sector, the transportation sector and of course Danish industry, including the IT industry.

It is not a moment too soon for the Government and the heavyweight ministries – the Ministry of Defence and the Ministry of Justice, together with the Ministry of Technology – to start doing something. They have clear advice not only from Teknologirådet (the Technology Council) but also from the IT industry body ITEK and the Danish Industry Security Committee. And from Dorte Toft as well.

Thursday, January 15, 2009

Cybercrime Predictions


As usual around the turn of the year, a number of specialists in all disciplines try to predict what is going to happen in the year to come, and the ‘security specialists’ have been very active too. Before we turn to the actual predictions, it may be worthwhile to look at what really happened in 2008 in terms of cyber attacks and cyber crime.

To find evidence we can turn to Bernard Kwok, Senior Vice President of Symantec Asia Pacific.

His analysis of what happened during 2008, described as ‘the best of the worst in 2008’, pointed to the following major observations:

  1. New malware variants and new families of threats – millions of distinct threats that mutate as they spread.
  2. Fake applications – ‘scareware’ – fake security software applications that install together with a Trojan horse.
  3. Web attacks galore – Symantec’s figures point to the web as the prime arena for attacks.
  4. Underground economy – hard to estimate its real size; the investigators claim that in one year they saw stolen goods traded on the Internet at a value of more than 276 million $, but this is probably only a very small fraction.
  5. Data breaches – this of course includes company-specific data as well as publicly stored personal information. Particularly in the UK and the US some really extraordinary losses of public data have been noticed.
  6. SPAM – still very active and annoying, but thanks to more advanced spam filters down to some 60-65% of what it was at its height.
  7. Phishing – also a major threat in 2008, with various techniques and baits used in different economies; Obama’s election was but one example.
  8. Browser and plug-in vulnerabilities – cyber attackers are carefully studying loopholes and vulnerabilities, and even if Microsoft Internet Explorer’s total domination may be fading, there is still room for improvement in other browsers as well.

Symantec Asia’s forecast for threats in 2009 points to the following key areas:

  1. Explosion of malware variants – according to Symantec’s analysis, there are now more malware programs being released on the internet than legitimate programs! This calls for new types of detection programs.
  2. Web threats – it is expected that web services will be the next arena for the introduction of threats. This points toward the need for some certification scheme for web services.
  3. The economic crisis will give rise to new types of fraud, promising people ways to get rid of their loans, or websites promising new jobs. These types of fraud typically hit the weaker and poorer.
  4. Social networks – phishing for username and account info. Based on the rapidly growing success of social networks, also as a part of normal enterprise websites, it is inevitable that more threats will come in this area.

SPAWAR

Other highly competent sources of knowledge about security threats include SPAWAR – the US Space and Naval Warfare Systems Command in San Diego. Among their tasks, they provide: "…revolutionary information superiority and dispersed, networked force capabilities to deliver unprecedented offensive power, defensive assurance, and operational independence to Joint Force Commanders."

SPAWAR also runs a research center for cyber warfare, which is probably the best-equipped research center in the world. Mike Davis is the Information Assurance (IA) Technical Authority (TA) at TEAM SPAWAR, and a vice president of the Information Systems Security Association.

In a recent meeting, he pointed to the following key trends:

- Growth of malware and botnet attacks – SQL injections, denial-of-service attacks – especially in mobile use!

- Increased number of insider attacks

- Focus on protecting the infrastructure – ICS/PCS/SCADA Attacks (See conferences like this ‘Black Hat’ Event)

- Outsourcing Security Threats including Cloud Computing issues

- Focus on Host based Security

- Increased use of encryption in many more areas than previously

- Regulatory compliance tools and information lock-down (PCI compliance, HIPAA, SOX) and requirements for privacy

- Social sites will be new playing fields for DLP – Data Loss Prevention.

- General move towards ‘information centric security’ with data as the focal point as opposed to the traditional user/application rights management focus.

- More focus on inter-domain SOA-technology and federated security.

SC Magazine

The SC Magazine for Security Professionals published a forecast mainly based on input from Sophos:

“What seems certain,” concludes the Sophos report ‘Security Threat Report: 2009’, “is that the variety and number of attacks will continue to escalate, compromised PCs will remain the primary source of spam, and web insecurity and SQL injections will remain the primary distribution method of malware.”

Infected web pages are cropping up three times as fast this year as compared to last: the number of infected web pages increased from one every 14 seconds in 2007 to one every 4.5 seconds in 2008, according to Sophos consultant Graham Cluley.

“There's a real challenge in how businesses big and small will manage this problem and ensure that their web sites are properly secured and hardened from SQL injection attacks,” he said.
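
For reference, the standard hardening Cluley is alluding to is to pass user input to the database as a bound parameter instead of splicing it into the query string. Here is a minimal sketch using Python's built-in sqlite3 module, with an invented table purely for illustration:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

    user_input = "alice' OR '1'='1"  # a classic injection payload

    # Vulnerable: string formatting would turn the payload into executable SQL.
    # conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'")

    # Safe: the placeholder binds the payload as a literal value, not as SQL.
    rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
    print(rows.fetchall())  # [] -- the payload matches no user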

In addition, malicious emails with a greater proportion of legitimate-looking attachments, or web links aiming to infect unpatched users, are likely to be sent. Data leakage and identity theft, resulting in decreased customer trust and loyalty, will continue to pose problems for enterprises, Cluley said.

On the bright side – security software is getting better and more proactive.

For enterprises, Cluley recommended dealing with the security risks through a tiered defense against attacks, including up-to-date anti-virus software, firewalls, security patches, policy control and user education.

In another article, SC Magazine described the innovations in cybercrime activities under the heading ‘2008: A year of cybercriminal innovation’.

The Danish telecom consultant John Strand also had his go at the threats of 2009:

John Strand, not surprisingly, points to wireless risks first:

Wireless risks continue!
There are so many ways to attack a client system via wireless vulnerabilities, as you can see just by looking at Karma, a set of tools for assessing the security of wireless clients, and karmetasploit, a tool that acts as a wireless access point and responds to all probe requests from wireless clients.

“I believe that many organizations are about five years behind the curve when grappling with Wi-Fi threat vectors” …

“ ..vendors implement new protocols and authentication schemes like TKIP, LEAP and PEAP in different ways. We need to fully research the protocols used by our vendors before implementing them in our organizations.”

But John Strand also predicts that hackers’ interest will turn back to the operating systems:

“While operating system attacks have not reached the effectiveness and prominence they had from 2003-2005, malicious hackers will most likely discover operating system vulnerabilities again. There has been a tremendous amount of research over the past few years in browser-based attacks like cross-site scripting (XSS), cross-site request forgery (XSRF) and clickjacking. But what if these techniques were used in conjunction with an operating system vulnerability?”

Finally John Strand points to a possible blind spot in the convergence taking place between Web servers and Browsers.

Ponemon Survey
The Ponemon Institute also conducted a study – the 2008 Security Mega Trends Survey – in which they made the following predictions for 2009:

“Cybercrime and outsourcing were named the top security concerns for 2009. In addition to uncovering a changing view of how IT organizations are becoming less siloed and more collaborative, key findings from the Security Mega Trends Survey include:

  • Outsourcing IT is a Major Concern
  • Data Breaches and Cybercrime
  • Workforce Mobility Contributes to Data Loss
  • Web 2.0, P2P, Virtualization and Cloud Computing are Growing in Prevalence”

The study was sponsored by Lumension Security and can be found here.

So the overall picture from all the experts – with some variations – points to the same areas and issues. Or is it just that everybody is picking everybody else’s brain, so it all seems so very identical? I would almost be willing to bet that cybercriminals are more innovative than the sum of all the experts, and that some as yet unseen types of threat will appear in 2009. With the growing number of computers in everything, maybe a new type of crime will focus on this ‘internet of things’ phenomenon?

The easiest thing, of course, is to project threats in the areas where you already have some remedies and tools to cope with the known – and to sell!

What we have to fear most is probably not what we know but those areas where we are not even aware of not knowing anything.

Wednesday, October 1, 2008

Swedish Surveillance Law again, again – Towards a Happy Ending


The Swedish act on surveillance, as described in my earlier blogs, will be changed on 15 major points, according to Computerworld.
This is good news, in spite of the fact that the EU Commissioner Jacques Barrot responded to a question from the Danish Liberal member of the EU Parliament, Karin Riis-Joergensen, that national-security surveillance only has to obey national jurisdiction. It seems that the internal Swedish pressure on the Parliament did the trick – including threats from a Swedish human rights watch group to sue the Swedish Parliament.
The law is going to be amended on no fewer than 15 points, the most important of which are that any surveillance has to be approved by a Swedish court decision, that it is also the Swedish court that decides which channels/electronic cables may be tapped, that no traffic may be surveilled if there is a Swedish citizen at one end of the communication, and that captured data must not be kept for more than a year.
Even if it is encouraging that the law, as it seems, has been brought into compliance with the Data Protection Directive and the Human Rights Declaration, it is a pity that the answer by Commissioner Jacques Barrot was not challenged, as it seems he hadn’t really understood the international consequences of accepting the original law.
But it seems that the Danish Parliament, a customer of Telia and as such one of the potential ‘customers’ of the Swedish defence surveillance, may sleep safely at night. Unless, of course, a state of war between Denmark and Sweden should occur.
Not everybody seems to agree on this, and DI – the Confederation of Danish Industry – is still of the opinion that even under the new amendments there will be no restrictions on inspecting communication between Denmark and other countries. Therefore DI will publish information on encryption to prevent Swedish tapping of Danish communication.
According to Computerworld there are still issues. So in spite of what seems to be a happy ending, we may not have seen the last of this legislation.