Tamnun Cluster Expansion Policy

Policy for the addition and maintenance of compute nodes in the central parallel computer


This document describes the CIS Division policy, formulated according to Technion management guidelines, governing the interaction between the general cluster installed at the Technion, which was acquired with joint funding from RBNI and the Minerva project, and new nodes purchased by academic staff members from their research budgets.


Central resources usage
A researcher who joins the cluster by purchasing private compute nodes, in one of the configurations approved by CIS, will benefit from the following options (some of which are subject to the costs listed below):

  1. Using the Master Node of the existing cluster
  2. All central software resources and licensing
  3. Physical hosting of the acquired nodes at the server farm, including:
    1. Location in the racks
    2. Dual power supply
    3. Dedicated air conditioning system
    4. Central UPS system
  4. Management services, system administration, upgrades and software installations
  5. Storage and Backup services [1]
  6. Information security services
  7. Network services
  8. Monitoring and control services
  9. Professional personnel in the variety of disciplines required to run the cluster in general, and scientific advice in the field of HPC in particular

Independence of the researcher
1. The researcher will be given the ability and the right to self-manage the queue on the purchased compute nodes. (The queue can also be operated by the CIS team according to CIS policy.)
2. The researcher will be allowed to install dedicated software on the purchased compute nodes. Any software purchase should be coordinated with the CIS Division beforehand, to ensure that installation is possible and relevant.


The principle of cooperation and reciprocity
1. The owner of private nodes has the right to use the general cluster, like any other Technion researcher.
2. Private nodes that are not being utilized will be made available to other users of the cluster.


An HPC cluster is a complex system; its maintenance involves many parties and requires the purchase of various equipment and services.
Purchase and service costs will be covered as follows:


Expansion Costs
Central infrastructure costs required by any expansion (communication / racks / cabling / UPS / electrical work) will be divided among the researchers in proportion to the number of nodes each of them added.
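The proportional division above can be illustrated with a short sketch; the researcher names, node counts, and total cost below are invented for the example and do not reflect actual CIS figures:

```python
def split_costs(nodes_added, total_cost):
    """Divide a shared infrastructure cost among researchers
    in proportion to the number of nodes each of them added."""
    total_nodes = sum(nodes_added.values())
    return {name: total_cost * n / total_nodes
            for name, n in nodes_added.items()}

# Hypothetical example: two researchers share a 10,000 NIS expansion cost.
shares = split_costs({"researcher_a": 4, "researcher_b": 1}, total_cost=10000)
print(shares)  # researcher_a pays 8000.0, researcher_b pays 2000.0
```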


Costs of use
1. According to the management decision of 6.10.2011, usage fees will not be charged until the end of 2012. This decision is being re-examined.
2. No fees will be charged for the use of "private" resources (CPU and disk).

Maintenance Costs
At the end of the 3-year warranty period, hardware service and maintenance costs, amounting to 20% of the acquisition cost of each node, will be charged to the researcher who owns the node.

Hosting costs
1. Hosting expenses for the private servers (node packs) are charged according to the CIS price list (currently ₪800 per 1U of equipment). This list is updated from time to time on the CIS Division website.
2. According to the decision of the Information Technology Steering Committee from January 2014, new members will bear a hosting cost overhead of 20% of the equipment and software purchase cost.
This amount covers all hosting expenses for 3 years.


Software costs
1. The researcher will bear the cost of licensing the Red Hat operating system according to the vendor price list (currently $80 per year) for each compute node.
2. For non-free software and software not centrally licensed at the Technion, the researcher will bear the costs of the license purchase.


All administrative procedures pertaining to budget billing will be conducted between the CIS Division and the head of administration of the faculty to which the researcher belongs.


[1] Beyond the use of the local storage, use of the central storage and backup systems will be charged according to the price list issued by the CIS Division.

Updated: 09/12/2014, 14:07