The Virtual Laboratory is a heterogeneous distributed environment that allows a group of scientists from different sites to work on a single project. As in any other laboratory, the equipment and techniques are specific to a given field of activity. Despite some similarities to tele-immersive applications, the virtual laboratory does not presume that the working environment is shared.


Virtual laboratories may comprise different components depending on the type of experiments they support. The parts common to all Virtual Laboratories are:
  • Access via the Internet through a Web portal. Such a solution fulfils the main requirement of a VL: global access.

  • A computational server: a high-performance computer capable of large-scale simulations and data processing.

  • Databases containing application-specific information, such as initial simulations, boundary conditions, experimental observations, client requirements, or production limitations. Databases also hold distributed, application-specific resources (e.g. human genome repositories). Their content should be updated automatically, they may themselves be distributed, and it should be assumed that they will hold large amounts of information.

  • Scientific equipment connected to the computational network. Examples include data from satellites, earthquake detectors, air-pollution detectors, and astronomical instruments (such as the distributed astronomical research program run by the National Radio Astronomy Observatory).

  • Collaboration and communication tools, such as chat, audio- and video-conferences or tele-immersion.

  • Software. Each virtual laboratory is built on specific software for simulation, data analysis, or visualization. Most of this software was not developed for distributed network environments, which is one of the main problems in building a VL.
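The component structure listed above can be sketched as a minimal registry behind a VL portal. This is an illustrative sketch only: the class names, component kinds, and endpoints are hypothetical, not part of any actual VL implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """One building block of a Virtual Laboratory (hypothetical model)."""
    name: str
    kind: str          # e.g. "compute", "database", "instrument", "collab"
    endpoint: str      # network address the web portal uses to reach it

@dataclass
class VirtualLab:
    components: list = field(default_factory=list)

    def register(self, component: Component) -> None:
        self.components.append(component)

    def by_kind(self, kind: str) -> list:
        """Return all registered components of a given kind."""
        return [c for c in self.components if c.kind == kind]

# Populate the registry with the component types named in the text.
lab = VirtualLab()
lab.register(Component("hpc-1", "compute", "hpc1.example.org"))
lab.register(Component("genome-db", "database", "db.example.org"))
lab.register(Component("radio-dish", "instrument", "nrao.example.org"))

print([c.name for c in lab.by_kind("compute")])  # ['hpc-1']
```

A real portal would of course add authentication, persistence, and discovery on top of such a registry; the sketch only shows how heterogeneous components can be grouped behind one access point.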

From the virtual laboratory perspective, and especially for applications like tele-immersion, the critical parameter is delay. For this reason a computational center implementing the VL idea should have access to a high-throughput network, and it is helpful to couple the task-scheduling system with throughput-reservation services. Further critical factors are multicast protocols and the reliability of the technology, particularly in VL experiments where people, resources, and computations are highly distributed. In such experiments the data streams can be divided into voice, video, computational elements, and large volumes of simulation and visualization data delivered in real time from the scientific equipment.

Applications should allow access to data from several heterogeneous sources; such information can come from real experiments or from computational simulations. The main source of information will be mass data storage systems, which are especially important for bioinformatics tasks. These storage systems can be built on dedicated databases hosted in national supercomputing centers and can also include small personal databases on workstations and PCs. Thanks to the virtual laboratory, data processing can be controlled by an individual scientist or by a distributed research team working in their own laboratories.


The remote display system (RDS) allows users to view a computing desktop environment not only on the machine where it is running, but from anywhere on the Internet and from a wide variety of machine architectures.

The Virtual Laboratory (VL) System offers many more capabilities than an RDS. Apart from access to a single device with the possibility of running an experiment, VL provides its users with the following features:
  • single-application access to many laboratory devices, including load-balancing algorithms that choose the least loaded devices in order to shorten experiment execution time,

  • the possibility of defining various dynamic measurement scenarios, consisting of series of operations with additional conditions; when executed, these scenarios automate the process of obtaining and post-processing experiment results,

  • archiving of experiment results, for the purpose of further analysis, processing, or sharing with other scientists and laboratories,

  • access to the resources stored in the digital library: electronic publications from a given knowledge domain, or other expensive educational and research materials, which are usually beyond the reach of a single person but become much more affordable when the cost is shared across the Virtual Laboratory,

  • different ways of communicating between laboratory users, including the possibility of sharing and discussing experiment results with scientists from a different laboratory, city, or even country,

  • the automated user accounting and billing process according to the currently used resources,

  • an increased security level: the user has only limited access to the remote desktop and hardware used to control the (usually very expensive) laboratory equipment, with the VL system acting as a firewall between the end user and the hardware,

  • task validation: all tasks are submitted to the system via web-based forms, which allows their correctness to be checked and validated as early as at the entry level,

  • the possibility of submitting tasks in batch mode, allowing all users to send their tasks at any moment without waiting for other users to finish their work and free shared resources; this is also useful because some devices or interfaces can be temporarily disabled (due to technical difficulties or maintenance work); additionally, users can be informed about the current status of their tasks via SMS or e-mail,

  • independence from system architecture and installed software: to the end user, all the Virtual Laboratory requires is an Internet browser.
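The load-balancing idea from the first bullet above, choosing the least loaded of several equivalent devices, can be sketched in a few lines. The device names and the queue-length metric are hypothetical; a real VL scheduler might weigh device speed, reservation calendars, or maintenance status as well.

```python
def pick_device(queue_lengths: dict) -> str:
    """Return the device with the fewest queued experiments
    (a minimal least-loaded selection policy)."""
    return min(queue_lengths, key=queue_lengths.get)

# Illustrative pool of interchangeable instruments and their queues.
queue_lengths = {"spectrometer-a": 3, "spectrometer-b": 0, "spectrometer-c": 5}
device = pick_device(queue_lengths)
print(device)  # 'spectrometer-b'
```

Dispatching each new experiment this way keeps queues roughly even across the pool, which is what shortens the overall experiment execution time the text refers to.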