ProtoGENI Nodes

This page is a work in progress - check back soon

We're working as part of the NSF GENI project to develop the Emulab software into ProtoGENI, a general "Control Framework" for control of resources in networking testbeds. This effort is documented at http://www.protogeni.net/.

As part of ProtoGENI, we are deploying hardware at a number of sites around the country. The first is a national 10Gbps backbone on Internet2, dedicated specifically to GENI. PCs and NetFPGAs are placed in the POPs, connected to Ethernet switches controlled by the ProtoGENI/Emulab software. The second is a set of nodes at edge sites, such as university campuses, which simply use their campus's regular network for Internet access.

In both cases, PCs are currently given to one user at a time, and like traditional Emulab nodes, the experimenter has full root access and can install disk images, etc. on the nodes. In the future, we expect to run some of the nodes in "shared mode", using our virtual node and shared node support.

In addition to the ProtoGENI programmatic APIs, this hardware is also available through the familiar Emulab interfaces. When using the Emulab interface, most familiar Emulab tools are supported on the nodes, though the shared filesystem (/users/ and /proj) is not.

National Backbone

As part of ProtoGENI, we have obtained a 10Gbps "lambda" across the Internet2 national footprint. This lambda is directly connected to HP Procurve switches, which are controlled by the Emulab software. Essentially, they look to Emulab very much like switches in our machine room - the wires connecting them just happen to be 1,000 miles long. We are able to make VLANs on this switching infrastructure, just as we do inside of our lab. This infrastructure is shared with other GENI projects, which are connected to our switches, but we have no plans to over-subscribe it; our goal is to, as with traditional Emulab, promise you the bandwidth you request.

In each POP, directly connected to the switches that terminate the 10Gbps wave, are two PCs, each of which hosts two NetFPGA cards. Each PC has from two to five experimental net interfaces, a control network interface that is routable to any Internet2-connected institution, and a management interface (for serial console, power control, etc.). We've made every effort to make these machines appear as much like "local" Emulab cluster nodes as possible; the only major feature they are missing is mounting of shared filesystems.

More information about this backbone can be found on the ProtoGENI Backbone page. Descriptions of the hardware available in each POP can be found from the links to the POPs from the GENI Integration page.

Putting Backbone Nodes in Your Experiment

Backbone PCs may be requested just like regular Emulab PCs, with a few special cases and exceptions. See the backbone.ns file attached to this page as an example.

The status of these nodes can be found (if you are logged in...) from the ProtoGENI node status page.

Hardware Type

ProtoGENI nodes in the Internet2 POPs have the type pcpg-i2, and names like `pgXX'. The hardware type must be explicitly set in your NS file using the tb-set-hardware command. A description of the hardware in these nodes can be found on the pcpgi2 page.
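
For example, a minimal NS fragment requesting one backbone node might look like the following sketch (the variable names are placeholders):

source tb_compat.tcl
set ns [new Simulator]

# Request one ProtoGENI backbone node; the hardware type must be set explicitly.
set node1 [$ns node]
tb-set-hardware $node1 pcpg-i2

$ns run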

These nodes have a DRAC5 management card, which means they can be remotely power cycled and remote serial console access is available (but not yet provided to experimenters through the console command). So, there is a fair amount we can do to regain control of these nodes if they get wedged; still, please be nice to them.

Node Locations

The following table lists the nodes currently installed:

Emulab name   Internet2 POP   City
pg40          WASH            College Park, MD (Washington DC area)
pg41          WASH            College Park, MD (Washington DC area)
pg42          KANS            Kansas City, MO
pg43          KANS            Kansas City, MO
pg44          SALT            Salt Lake City, UT
pg45          SALT            Salt Lake City, UT

More information about each POP can be found on the GENI Integration page.

To get a node in a particular POP, you may either use the tb-fix-node command (not recommended) to request a specific node, or the add-desire command, which will let you request any node in a POP. To require that a node be in the WASH POP, you would put $node add-desire "i2-POP-WASH" 1.0 in your NS file.
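
Putting that together, a sketch of the relevant NS lines (variable names are placeholders) might be:

set node1 [$ns node]
tb-set-hardware $node1 pcpg-i2

# Ask the resource mapper for any backbone node in the WASH POP.
$node1 add-desire "i2-POP-WASH" 1.0

# Alternatively (not recommended), pin the experiment to a specific node:
# tb-fix-node $node1 pg40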

We will also install one of these nodes in the Emulab cluster as pg38. A similar, but not quite identical, node (different clock speed, RAM, and number of disks) is in the Emulab cluster as pg39.

OS Images

The only standard image that currently fully supports the hardware on these nodes is the FEDORA8-STD image, which must be explicitly requested via the `tb-set-node-os` command.
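
For example, assuming a node variable like $node1 from the sketches above:

# Explicitly request the Fedora 8 standard image.
tb-set-node-os $node1 FEDORA8-STD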

If you wish to create an image of your own, we suggest that you look at the pcpgi2 page to make sure you are installing all of the correct drivers, and build and load the image first on pg38; this node is (or will soon be) in the Emulab cluster, and it will be faster to load images and debug image problems on.

All NICs in the PCs currently support gigabit Ethernet - asking for 100Mbps will result in a delay node being inserted, which will fail to swap in. Links must also be given a delay of 0ms in the NS file to prevent Emulab from trying to add further latency.
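
A link between two backbone nodes should therefore be declared at gigabit speed with zero added delay; for example (again assuming placeholder node variables):

# Gigabit link with no added latency, so Emulab does not insert a delay node.
set link0 [$ns duplex-link $node1 $node2 1000Mb 0ms DropTail]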

As with the Emulab cluster experimental net, the experimental-net interfaces on the backbone nodes see only the experiment's own traffic, and may use any IP addresses (or non-IP protocols) that they wish.

It is not currently possible to use links in your NS file to connect nodes in the backbone to nodes in Emulab. For the time being, these machines may only talk to Emulab nodes using their publicly routable "control net" interface. We are, however, working on getting a dedicated 10Gbps connection from Emulab into this backbone, which will connect the Emulab experimental net to the backbone with an Ethernet-layer link.

Note: Document tunnels here

Access to nodes

Nodes are accessed through ssh as with normal Emulab nodes. The usual Emulab "physical" (e.g. pg42.emulab.net) and "fully qualified" (e.g. nodeA.myexp.myproj.emulab.net) DNS names may be used to refer to them.

Because these nodes use Internet2's IP space for their control network interfaces, and Internet2 generally does not transit traffic from the commercial internet, these addresses are only reachable from Internet2-connected institutions.

NetFPGA nodes

Note: Allocation of NetFPGAs through the Emulab interface is not currently set up - we expect to do this soon.

Edge Nodes

We are also in the process of deploying ProtoGENI nodes at edge sites. These nodes are simply connected to the campus network, and have no special connectivity.

Like the backbone nodes, these nodes are currently used in "exclusive mode", in which they are given to one experimenter at a time. Experimenters have full root capabilities, can re-image the disks, etc. As with the backbone nodes, we expect to run one PC at each site using our virtual node and shared node support in the future.

Putting Edge Nodes in Your Experiment

Edge PCs may be requested just like regular Emulab PCs, with a few special cases and exceptions. See the edge.ns file attached to this page as an example.

The status of these nodes can be found (if you are logged in...) from the ProtoGENI node status page.

Hardware Type

ProtoGENI edge nodes have the type pcpg, and names like `pgXX'. The hardware type must be explicitly set in your NS file using the tb-set-hardware command.

These nodes have only limited remote console and power-cycling support, which has proven to be unreliable. So, please do your best not to wedge them, as local operator intervention may be required.

Node Locations

Locations of the edge nodes are listed on the ProtoGENI node status page. At the present time, the best way to get a node at a particular site is to request the node by name using the tb-fix-node command.
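
A minimal sketch of the relevant NS lines, assuming a hypothetical edge node named pg50 (check the status page for real node names):

set edge1 [$ns node]
tb-set-hardware $edge1 pcpg

# Pin the experiment to a specific edge node; pg50 is a placeholder name.
tb-fix-node $edge1 pg50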

OS Images

The only standard images that currently fully support the hardware on these nodes are the FEDORA8-STD and FBSD63-STD images, which must be explicitly requested via the `tb-set-node-os` command.
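
For example, to load the FreeBSD image on the edge node from the sketch above:

# Explicitly request the FreeBSD 6.3 standard image.
tb-set-node-os $edge1 FBSD63-STD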

Links

Because these nodes have no special connectivity, it is not possible to make VLANs to them. However, Emulab can automatically set up tunnels between the nodes in your experiment so that you still have some control over topology.

Note: Document tunnels here

Access to nodes

Nodes are accessed through ssh as with normal Emulab nodes. The usual Emulab "physical" (e.g. pg42.emulab.net) and "fully qualified" (e.g. nodeA.myexp.myproj.emulab.net) DNS names may be used to refer to them.

ADMIN NOTES

Access to the node serial console

Only Flux.utah.edu staff members can access the serial consoles on the pcpg-i2 nodes. Access is done through the out-of-band Dell DRAC management interface. Users must have ssh keys registered on the DRAC.

Step 1. Find the node's 'Management IP' address.

Step 2. ssh to that IP address.

Expect the following output:

Dell Remote Access Controller 5 (DRAC 5)
Firmware Version 1.45 (Build 09.01.16)
Type "racadm help" to get racadm subcommands.
Type "smclp" to invoke SMCLP interface.
$

Step 3. Enter the command "connect com2". Expect the following output:

Connected to com2. To end type: '\'

VGA Buffer Console Access

Visit the link https://<Management IP> in a web browser.

DRAC reset

If the DRAC is not behaving correctly, it can be reset: ssh into the DRAC and issue the command racadm racreset.

How to boot from the two different USB dongles

Since the pcpg-i2 nodes were going into I2 POPs, and access to the machines would be limited, we decided that the machines needed to be able to recover from USB corruption. Each machine has two USB drives. Normally, the machine boots off a read/write-enabled USB drive. The second USB drive is hardware write-protected. <need to finish>

RACADM CLI (add user)

To add a new user to the RAC configuration, a few basic commands can be used. In general, perform the following procedure:

1. Set the user name.
2. Set the password.
3. Set the user privileges.
4. Enable the user.

Example: The following example describes how to add a new user named "John" with a "123456" password and LOGIN privileges to the RAC.

racadm config -g cfgUserAdmin -o cfgUserAdminUserName -i 2 john
racadm config -g cfgUserAdmin -o cfgUserAdminPassword -i 2 123456
racadm config -g cfgUserAdmin -i 2 -o cfgUserAdminPrivilege 0x00000001
racadm config -g cfgUserAdmin -i 2 -o cfgUserAdminEnable 1

To verify, use one of the following commands:

racadm getconfig -u john
racadm getconfig -g cfgUserAdmin -i 2

Bit Masks for User Privileges

User Privilege                    Privilege Bit Mask
Log In To DRAC 5                  0x0000001
Configure DRAC 5                  0x0000002
Configure Users                   0x0000004
Clear Logs                        0x0000008
Execute Server Control Commands   0x0000010
Access Console Redirection        0x0000020
Access Virtual Media              0x0000040
Test Alerts                       0x0000080
Execute Debug Commands            0x0000100

Emulab GENI Example for a Privileged User

First find an empty slot:

racadm config -g cfgUserAdmin -o cfgUserAdminUserName -i 8 elabman
racadm config -g cfgUserAdmin -o cfgUserAdminPassword -i 8 xxxx
racadm config -g cfgUserAdmin -i 8 -o cfgUserAdminPrivilege 0x00000011
racadm config -g cfgUserAdmin -i 8 -o cfgUserAdminEnable 1

Removing a DRAC 5 User

When using RACADM, users must be disabled manually and on an individual basis. Users cannot be deleted by using a configuration file.

The following example illustrates the command syntax that can be used to delete a RAC user:

racadm config -g cfgUserAdmin -o cfgUserAdminUserName -i <index> ""

A null string of double quote characters ("") instructs the DRAC 5 to remove the user configuration at the specified index and reset the user configuration to the original factory defaults.