1Y0-A24 Citrix XenServer 5.6 Administration Study Guide

#Pre-Deployment Planning

To use StorageLink, the environment requires a supported storage array, one or more hypervisor hosts (Hyper-V or XenServer), and a Windows Server 2008 or 2008 R2 system on which to run the StorageLink services.

Internal networks have no connection to the outside world, as they have no association to a physical network interface. Internal networks provide connectivity only between virtual machines on a host.

External networks have an association with a physical network interface and provide a bridge between a virtual machine and the physical network interface connected to the network, enabling a virtual machine to connect to the outside world.

NIC bonding allows two physical NICs to create a single, high-performing channel between virtual machines and the network.

NIC bonding can improve XenServer host performance and resiliency by using two physical NICs as if they were one. NIC bonding balances traffic between the bonded NICs, and if one NIC within the bond fails, traffic is automatically routed over the second NIC.

When XenServer is installed, one NIC is chosen as the management interface. The management interface is used for XenCenter connections to the host and for host-to-host communication.

The physical network interface on the XenServer host will connect the DHCP/PXE virtual machine to the external network where it can provide IP addresses and the bootstrap file information to target devices.

#Configuring XenServer Enterprise Edition

To license XenServer paid editions, perform the following tasks: 1. Install a Citrix License Server. 2. Download and add the XenServer license file to the Citrix License Server. 3. Specify the name of the license server on each XenServer host.
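
A hedged CLI sketch of step 3; the host-apply-edition syntax and license server parameters shown here are assumptions to verify against the xe command reference for this release, and the values are placeholders:

# Assumed syntax: point the host at the license server and select a paid edition
xe host-apply-edition edition=enterprise license-server-address=<license_server_address> license-server-port=27000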

To create NIC bonds in a resource pool: 1. Select the host that you want to be the master. 2. Create the NIC bond on the master. 3. Join other hosts to the pool.
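
A minimal CLI sketch of step 2, run on the pool master with placeholder UUIDs:

# Create a network for the bond, find the PIF UUIDs of the two physical NICs, then bond them
xe network-create name-label=bond0
xe pif-list host-uuid=<master_host_uuid>
xe bond-create network-uuid=<bond_network_uuid> pif-uuids=<pif_uuid_1>,<pif_uuid_2>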

Best practices include: Create NIC bonds in the pool prior to joining additional hosts. This will allow the bond configuration to automatically replicate to the joining hosts. Do not attempt to create NIC bonds while high availability is enabled.

Before dedicating a network interface as a storage interface for use with iSCSI or NFS SRs, ensure that the dedicated interface uses a separate IP subnet which is not routable from the main management interface. If this is not enforced, then storage traffic may be directed over the main management interface after a host reboot, due to the order in which network interfaces are initialized.

When configuring two physical NICs on a XenServer host with separate management and storage networks, if the storage device allows secondary IP addresses on the storage network interface, you may wish to configure an IP address from the management subnet for the purposes of management.

The ISO type handles CD images stored as files in ISO format. This SR type is useful for creating shared ISO libraries. For storage repositories that store a library of ISOs, the content-type parameter must be set to iso. For example: xe sr-create host-uuid=<host_uuid> content-type=iso type=iso name-label="Example ISO SR" device-config:location=<path_to_iso_library>

When installing Workload Balancing on a virtual machine, it is essential that the time on the physical server hosting the WLB VM and on the WLB VM itself match. Consider pointing both the host and the VM to a Network Time Protocol (NTP) server. Determine the port over which you want the WLB server to communicate; the default is 8012, but it can be changed. If the server on which you are installing Workload Balancing is a member of a Group Policy Organizational Unit, ensure that current or scheduled future policies do not prohibit Workload Balancing or its services from running.

#Creating Virtual Machines

XenServer Tools must be installed on each virtual machine in order for the VM to have a fully supported configuration and to use the XenServer management tools. XenServer Tools allow operations including: cleanly shutting down a VM, cleanly rebooting a VM, suspending a VM, migrating a running VM (XenMotion), using the checkpoint and rollback feature, and dynamically adjusting the number of vCPUs on a running Linux VM.

For existing physical instances of Windows servers, use XenConvert. XenConvert runs on the physical Windows machine and converts it live into a VHD-format disk image or an XVA template suitable for importing into a XenServer host. The physical host does not need to be restarted during this process, and device drivers are automatically modified to make them able to run in a virtual environment.

When creating a Windows XP image using Provisioning services, if it will be used as a template to provision multiple target devices, switch the image to Standard Image Mode and make sure to delete the local storage. To optimize Provisioning services, disable TCP checksum and Large Send Offload.

The Windows tools include a XenServer VSS provider that is used to quiesce the guest filesystem in preparation for a VM snapshot. To enable the Windows XenServer VSS provider, install the Windows PV drivers, navigate to the directory where the drivers are installed and double-click install-XenProvider.cmd to activate the VSS provider. Using the Citrix VSS provider for Windows virtual machines results in an application-consistent snapshot.

After upgrading the XenServer hosts in a resource pool, it is important to upgrade the XenServer Tools on all of the virtual machines. This will enable new functionality and ensure the stability of the virtual machines.

To create XenApp virtual machines that will each deliver different applications in a live XenServer environment, the virtual machines should be optimized for XenApp workloads to provide the best performance.

Fast copy is designed to save disk space and allow fast clones, but will slightly slow down normal disk performance. A template can be fast-cloned multiple times without slowdown, but if a template is cloned into a VM and the clone converted back into a template, disk performance can linearly decrease. Full copy can be performed with expected levels of disk performance.

If a template is created on a server in a pool and all virtual disks of the source virtual machine are on shared storage repositories, the operation of cloning that template will be forwarded to any server in the pool. However, if you create the template from a source virtual machine that has any virtual disks on local storage, then the clone operation can only execute on the local server.

#Working With Virtual Machines

The virtual machine export/import feature can be used to back up virtual machines (export) and import them to another XenServer host in the event of a disaster.

Standard Image mode allows multiple target devices to use a single vDisk at the same time, greatly reducing vDisk management and storage requirements.

Workload Balancing set to Maximize Density ensures virtual machines have adequate computing power so you can reduce the number of hosts powered on in a pool, but doesn’t necessarily help consolidate storage.

To import a virtual machine from a previously exported file, use vm-import.
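
For example, a minimal sketch with placeholder values:

# Import a previously exported .xva file, optionally specifying the destination storage repository
xe vm-import filename=<exported_vm.xva> sr-uuid=<destination_sr_uuid>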

Without Dynamic Memory Control (DMC), when a server is full, starting further virtual machines will fail with “out of memory” errors. To reduce the existing virtual machine memory allocation and make room for more virtual machines you must edit each virtual machine’s memory allocation and then reboot the virtual machine. With DMC enabled, even when the server is full, XenServer will attempt to reclaim memory by automatically reducing the current memory allocation of running virtual machines within their defined memory ranges.

When Dynamic Memory Control is enabled and the host’s memory is plentiful, the virtual machines receive their Dynamic Maximum Memory level. When the host’s memory is scarce, the virtual machines will receive their Dynamic Minimum Memory level.

After upgrading from XenServer 5.5 to XenServer 5.6, XenServer sets each virtual machine's memory so that the dynamic minimum is equal to the dynamic maximum. An administrator should adjust the memory settings so that they are at the minimum recommended level.

To add an Active Directory user to Role Based Access Control, run the command xe subject-add subject-name=<AD_user_or_group>. This does not assign the user any roles; it only adds the user to RBAC. Assign the appropriate role afterwards, in XenCenter or with the xe subject-role-add command.
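
A minimal sketch with placeholder names; pool-operator is used here only as an example role:

# Add the Active Directory user or group as an RBAC subject
xe subject-add subject-name=<domain\user_or_group>
# Look up the subject UUID, then grant it a role
xe subject-list
xe subject-role-add uuid=<subject_uuid> role-name=pool-operator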

A user’s role can be changed in two ways: 1. Modify the subject -> role mapping. 2. Modify the user’s containing group membership in Active Directory.

StorageLink allows administrators to get new servers into production more quickly by offloading virtual machine snapshot and clone operations to the storage array.

Use XenCenter to move virtual machines from one resource pool to another with the ‘Export as backup’ and Import options.

If you receive the error message "PXE-E53: No boot filename received", the client received at least one valid DHCP/BOOTP offer but did not receive a boot filename to download. To resolve the issue, configure options 60, 66 and 67 on the Microsoft DHCP server.

#Managing and Maintaining XenServer Hosts

Requirements for using Active Directory authentication with XenServer include the following: the servers can be in different time zones, but their clocks must be kept synchronized using NTP servers, and the XenServer Active Directory integration uses the Kerberos protocol to communicate with the Active Directory servers.

XenServer’s Role Based Access Control depends on Active Directory for authentication services. Specifically, XenServer keeps a list of authorized users based on Active Directory user and group accounts. As a result, you must join the pool to the domain and add Active Directory accounts before you can assign roles.

To implement Role Based Access Control, configure Active Directory authentication by joining the resource pool to the domain. Add an Active Directory user or group to the pool. Assign roles.
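
A hedged CLI sketch of joining a pool to the domain, with placeholder values:

# Enable Active Directory authentication for the pool
xe pool-enable-external-auth auth-type=AD service-name=<domain.example.com> config:user=<domain_administrator> config:pass=<password>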

To add a physical NIC to a XenServer host, after installing the NIC, the administrator must run pif-scan to scan for new physical hardware, pif-introduce to introduce the new NIC to the XenServer host, and pif-plug to bring the new interface up.
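
A minimal sketch of that sequence with placeholder values; the device name eth2 is only an example:

# Scan for the new NIC, create a PIF object for it, then bring the interface up
xe pif-scan host-uuid=<host_uuid>
xe pif-introduce host-uuid=<host_uuid> mac=<mac_address> device=eth2
xe pif-plug uuid=<new_pif_uuid>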

To dedicate a network interface (NIC) to storage traffic, the NIC, storage target, switch and/or VLAN must be configured such that the target is only accessible over the assigned NIC. Routing the storage network interface from the management network interface is not recommended but can be done.
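
A hedged sketch of dedicating a NIC to storage traffic; the other-config:management_purpose key follows the documented convention, and the UUIDs and addresses are placeholders:

# Give the storage PIF a static address on a subnet separate from the management interface
xe pif-reconfigure-ip uuid=<storage_pif_uuid> mode=static IP=<ip_address> netmask=<netmask> gateway=<gateway>
# Mark the PIF as a dedicated storage interface
xe pif-param-set uuid=<storage_pif_uuid> disallow-unplug=true other-config:management_purpose="Storage"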

To change the NIC used for the management interface, use the pif-list command to determine which PIF corresponds to the NIC to be used as the management interface. The UUID of each PIF is returned. Use the pif-param-list command to verify the IP addressing configuration for the PIF that will be used for the management interface. Use the host-management-reconfigure command to change the PIF used for the management interface. If this host is part of a resource pool, this command must be issued on the member host console: xe host-management-reconfigure pif-uuid=<pif_uuid>

To install a new update to the hosts in a XenServer resource pool, high availability must be disabled. Apply the update to the pool master using XenCenter or the CLI and reboot. Do the same with each slave in the pool. HA must not be re-enabled until after the last host in the pool is rebooted. XenCenter’s “Automatic Mode” may not be compatible with the update.
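
A hedged CLI sketch of the same procedure, assuming the patch-upload and patch-pool-apply commands and using placeholder values:

# Disable HA, upload the update (this returns a patch UUID), then apply it to the pool
xe pool-ha-disable
xe patch-upload file-name=<update_file.xsupdate>
xe patch-pool-apply uuid=<patch_uuid>
# Reboot the master first, then each slave; re-enable HA only after the last host has rebooted
xe host-disable host=<host_name>
xe host-reboot host=<host_name>
xe pool-ha-enable heartbeat-sr-uuid=<heartbeat_sr_uuid>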

XenCenter can be used to gather XenServer host information. Click on Get Server Status Report… in the Tools menu to open the Server Status Report wizard. Select from a list of different types of information. The information is compiled and downloaded to the machine that XenCenter is running on.

To back up virtual machine metadata only, run the command: xe vm-export vm=<vm_name> filename=<backup_filename> --metadata

To back up a virtual machine, ensure that the virtual machine is offline and run the command: xe vm-export vm=<vm_name> filename=<backup_filename>

To back up host configuration and software, run the command: xe host-backup host=<host_name> file-name=<backup_filename>

To back up pool metadata, run the command: xe pool-dump-database file-name=<backup_filename>. Also, to check that the target host has the correct number of appropriately named NICs, which is required for the restore to succeed, run the command: xe pool-restore-database file-name=<backup_filename> dry-run=true

To configure multipathing in a XenServer implementation that only has local storage, in XenCenter: 1. Enter Maintenance Mode on the server. 2. Enable multipathing. 3. Exit Maintenance Mode. Repeat steps 1, 2, and 3 on each XenServer host in the pool. Next, create new storage repositories (SRs). New SRs will use multiple paths automatically.

Using basic Windows or Linux scripting tools, it is possible to automatically import and export your virtual machine metadata. Create scheduled tasks (cron jobs) to automatically export and import this data on a regular basis. The xe commands can be sent from either the XenServer console or the XenCenter workstation.

When using multipathing, after configuring a storage controller in a XenServer environment, restart the multipath service and show multipath information to verify all paths are active.
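
A hedged sketch run in the control domain (dom0); the service and command names assume the standard device-mapper multipath tools:

# Restart the multipath daemon, then display the multipath topology to confirm all paths are active
service multipathd restart
multipath -ll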

#Creating and Managing Storage

Add an NFS storage repository to a XenServer host with the CLI command: xe sr-create content-type=user type=nfs name-label="Example SR" shared=true device-config:server=<server_address> device-config:serverpath=<share_path>. The content-type is used to distinguish the use of the storage repository (if it were a CD image repository, the content type would be iso). type refers to the type of storage. name-label is the name the administrator chooses to give it. device-config:server refers to the hostname or IP address of the NFS SR host and device-config:serverpath refers to the path of the share name on the host server. Since shared is set to true, the shared storage will be automatically connected to every XenServer host in the pool, and any XenServer hosts that subsequently join will also be connected to the storage. The Universally Unique Identifier (UUID) of the created storage repository will be printed on the screen.

After extending the size of a vDisk with the VHDResizer, the newly extended free space appears as unallocated space in the vDisk. However, Windows XP or Windows 2003 vDisks cannot see the extended size. This is not applicable to Vista and Windows 2008. To work around this, mount the vDisk through the Provisioning Server Console and use the DiskPart tool from Windows to extend the size of the vDisk to cover the new space created by VHDResizer.
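
A hedged DiskPart sketch for that workaround; the script file name and volume number are placeholders:

rem Save as extend_vdisk.txt and run: diskpart /s extend_vdisk.txt
list volume
select volume <volume_number>
extend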

All iSCSI initiators and targets must have a unique name to ensure they can be uniquely identified on the network. iSCSI targets commonly provide access control using iSCSI initiator IQN lists, so all iSCSI targets/LUNs to be accessed by a XenServer host must be configured to allow access by the host’s initiator IQN. Similarly, targets/LUNs to be used as shared iSCSI SRs must be configured to allow access by all host IQNs in the resource pool.

When an administrator tries to attach a fibre channel storage repository and receives an error message saying "Exceeded max retries…", the administrator should check the cabling between the host, switch and storage array; look at the storage volume on the storage array and ensure that the LUN zoning is set up correctly; and make sure that the protocol setting for the storage repository matches the protocol setting for the storage volume. In the case of iSCSI, ensure that the storage volume is mapped to the correct host initiator and check the network connections and routes from the host to the array.

device-config:serverpath refers to the path of the share name on the host server. The correct syntax is device-config:serverpath=/<share_path> (the forward slash "/" is part of the serverpath).

One of the cons of using a fibre channel LUN storage repository with XenServer is that it uses zones and LUN masking, configured on the fibre channel switches, to control access to LUNs. Some of the pros are: SATA drives can be replaced with SCSI or SAS drives, additional bandwidth can be provided using multiple network connections, and drives can be grouped and configured to operate in a RAID array.

To connect StorageLink to a storage system, provide storage adapter credentials to the Gateway.

On launching StorageLink Manager, a dialog box appears to prompt for the IP address and credentials of the machine running the Gateway.

Before collecting logs, open a command prompt, set the trace level in the registry and restart the StorageLink service:

REG ADD HKLM\SOFTWARE\Wow6432Node\Citrix\StorageLink\1.0\Server\Trace /v TraceLevel /t REG_DWORD /d 5
net stop StorageLink
net start StorageLink

Next, make a directory and copy the logs there, including the configuration file and the StorageLink logs under c:\ProgramData:

mkdir c:\forCitrix
copy "c:\Program Files (x86)\Citrix\StorageLink\Server\cslsa_smis_vendor_options.cfg" c:\forCitrix

The installation process for Citrix StorageLink: 1. Install Citrix License Server and apply the license. The StorageLink Gateway and StorageLink Manager require that you install Citrix License Server and apply the license to your host (Platinum and Enterprise editions only). 2. Install StorageLink Gateway on Windows 2008 or Windows 2008 R2. 3. Install StorageLink Manager, the graphical user interface used to manage StorageLink Gateway.

To extend file systems on an extended Linux disk you must use file system tools that correspond to a particular file system type. If you use ext3, after resizing the disk in XenServer, start the operating system and use the resize2fs tool with the name of the partition that should be extended.
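
A minimal sketch, assuming an ext3 filesystem on a hypothetical device /dev/xvdb1 inside the Linux VM:

# Grow the ext3 filesystem to fill the resized virtual disk (unmount it first, or rely on online resize if the kernel supports it)
resize2fs /dev/xvdb1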

A NIC bond provides failover for the storage network and provides a high amount of bandwidth.

#Working with Pools

Citrix recommends configuring networks using a separate, dedicated network interface each for guest, management and storage traffic. Using separate VLANs for management and storage traffic is not supported.

Using XenServer to mask the CPU features of new servers, so that they match the features of the existing servers in a pool, requires the following: The CPUs of the servers joining the pool must be from the same vendor as the CPUs on the servers already in the pool, although the specific model need not be the same. The CPUs of the servers joining the pool must support either Intel FlexMigration or AMD Enhanced Migration. The features of the older CPUs must be a subset of the features of the CPUs of the servers joining the pool. The servers joining the pool must be running the same version of XenServer software, with the same hotfixes installed, as the servers already in the pool. An Enterprise or Platinum license is required.

If you are sure that the XenServer host you are trying to join is acceptable in your environment, the pool join operation can be forced by passing the --force parameter: xe pool-join --force

To join XenServer hosts host1 and host2 into a resource pool using the CLI: 1. Open a console on XenServer host host2. 2. Command XenServer host host2 to join the pool on XenServer host host1 by issuing the command: xe pool-join master-address=<host1_address> master-username=<administrator_username> master-password=<password>. The master-address must be set to the fully-qualified domain name of XenServer host host1 and the password must be the administrator password set when XenServer host host1 was installed.

To select a new pool master, either run the CLI command pool-emergency-transition-to-master after taking out the current pool master or run pool-designate-new-master while the current master is still online.
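
For example, with a placeholder UUID:

# Hand the master role over to another host while the current master is still online
xe pool-designate-new-master host-uuid=<new_master_host_uuid>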

If a pool master is unreachable and unrecoverable in a XenServer resource pool with high availability enabled, first disable HA using the host-emergency-ha-disable command with the force switch. Next, force a slave to reboot as a pool master using xe pool-emergency-transition-to-master. Re-enable HA using: xe pool-ha-enable heartbeat-sr-uuid=<heartbeat_sr_uuid>

#Using the XenServer Command Line Interface

An administrator would use xe pif-list and xe pif-reconfigure-ip to assign a new IP address to a physical NIC on a XenServer host. PIF objects can be listed with pif-list. The pif-reconfigure-ip command modifies the IP address of a PIF. pif-scan scans for new physical NICs on a XenServer host. pif-unplug attempts to bring down a physical interface. pif-introduce creates a new PIF object representing a physical interface on a XenServer host.
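
A minimal sketch with placeholder values:

# Find the PIF for the NIC, then give it a static IP configuration
xe pif-list device=eth1 host-uuid=<host_uuid>
xe pif-reconfigure-ip uuid=<pif_uuid> mode=static IP=<ip_address> netmask=<netmask> gateway=<gateway> DNS=<dns_server>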

host-management-reconfigure reconfigures the XenServer host to use the specified NIC as its management interface. host-set-power-on enables Host Power On functionality on XenServer hosts that are compatible with remote power on functionality. host-set-hostname-live changes the hostname of the XenServer host. host-syslog-reconfigure reconfigures the syslog daemon on the specified XenServer host.

To protect virtual machines, for each virtual machine, set a restart policy using vm-param-set and then enable HA in the pool by running pool-ha-enable.
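
A minimal sketch with placeholder values; the restart priority value depends on the policy required:

# Protect the virtual machine and give it a restart priority, then enable HA on the pool
xe vm-param-set uuid=<vm_uuid> ha-always-run=true ha-restart-priority=<priority>
xe pool-ha-enable heartbeat-sr-uuid=<heartbeat_sr_uuid>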

When planning to pull the pool master out of the resource pool, run either of the CLI commands pool-emergency-transition-to-master or pool-designate-new-master. pool-emergency-transition-to-master is only accepted by a XenServer host that has transitioned to emergency mode, meaning it is a member of a pool whose master can no longer be contacted. pool-designate-new-master performs an orderly hand-over of the role of master host to another host in the resource pool. This command only works when the current master is online.

A host may become unreachable in a XenServer resource pool with high availability enabled. To recover the XenServer installation, it may be necessary to disable HA using the host-emergency-ha-disable command with the force switch. If the host was the pool master, it should start up as normal with HA disabled. If slaves cannot contact the master, then it may be necessary to force a slave to reboot as a pool master (xe pool-emergency-transition-to-master). To let the resource pool members know which host is the new master, it may be necessary to run the command: xe pool-emergency-reset-master. Re-enable HA using: xe pool-ha-enable heartbeat-sr-uuid=<heartbeat_sr_uuid>

In a resource pool where HA is already disabled, if a pool master becomes unresponsive, thus bringing down connectivity to the pool, an administrator should force a pool slave to become the pool master using the command: xe pool-emergency-transition-to-master

When a resource pool has HA enabled, protected virtual machines that are shut down will automatically restart. An administrator should execute the vm-param-set command with ha-always-run=false to prevent a VM from restarting when it needs to be shut down for whatever reason.

To restart the XAPI service without affecting other services on the machine, use either xe-toolstack-restart or service xapi restart.

Find the UUID of a XenServer host in a resource pool by executing the xe host-list CLI command.

Find the UUID of a XenServer resource pool by executing the xe pool-list CLI command.

Using the command line, an administrator can use the command service xapi status to check the XAPI service, which connects XenServer and XenCenter over port 443 (or 5900 using VNC and Linux).

Find the UUID of a virtual machine by executing the xe vm-list CLI command.

#Business Continuity (High Availability and Virtual Machine Recovery)

To use high availability with XenServer, shared storage using either one iSCSI or Fibre Channel LUN is required.

Setting Workload Balancing to Maximize Performance ensures that each virtual machine will have the maximum amount of resources that are available for it in the resource pool. Setting a resource, such as CPU utilization, to More Important will ensure that virtual machines will not be automatically migrated, using XenMotion, to hosts that are low on that particular resource.

Virtual machines are assigned a restart priority and a flag that indicates whether they should be protected by high availability. When HA is enabled, every effort is made to keep protected virtual machines live. If a server fails, the virtual machines running on it will be started on another server. Best effort is not part of the failover plan and does not guarantee that the virtual machines will be kept running. The 'host failures to tolerate' value is defined as part of the high availability configuration. It determines the number of servers that are allowed to fail without any loss of service; the affected virtual machines will automatically be restarted on other hosts.

If virtual desktops will sometimes be shut down when not being used, they should not be configured to automatically restart. The virtual desktops can be configured as ‘Do not restart’ in XenCenter.

To use host fencing in a resource pool, Citrix strongly recommends the use of a bonded management interface on the servers in the pool if HA is enabled, and multipathed iSCSI or Fibre Channel LUN storage for the heartbeat SR.

Using snapshot-export-to-template, a complete copy of the virtual machine is stored as a single file on your local machine, with a .xva file extension, making it quick and easy to create multiple copies. It’s also a convenient backup method, allowing quick recovery of a virtual machine, and a convenient way to move a virtual machine from one server to another.
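
For example, a minimal sketch with placeholder values:

# Export an existing snapshot as a template in a single .xva file
xe snapshot-export-to-template snapshot-uuid=<snapshot_uuid> filename=<exported_template.xva>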

Full memory snapshots can be taken using the 'Take Snapshot' button in XenCenter or using the CLI command xe vm-checkpoint vm=<vm_name> new-name-label=<checkpoint_name>.

To create a virtual machine template with no memory state and that exists in the current resource pool only, use the CLI command: xe snapshot-copy new-name-label=<template_name> snapshot-uuid=<snapshot_uuid>

To create a consistent point-in-time snapshot (quiesce snapshot) of a virtual machine, install the Xen VSS provider in the Windows Server 2008 virtual machine and run the CLI command: xe vm-snapshot-with-quiesce vm=<vm_name> new-name-label=<snapshot_name>

Setting Workload Balancing to Maximize Performance ensures that each virtual machine will have the maximum amount of resources that are available for it in the resource pool. Setting a resource, such as Network Writes, to Less Important will ensure that virtual machines will be automatically migrated using XenMotion as usual when triggered by Workload Balancing.

Workload Balancing allows an administrator to apply optimization modes, for the best performance or highest density, to servers all of the time using the Fixed setting. The Scheduled setting applies the optimization modes for specified times of the day.

Workload Balancing set to Maximize Density ensures virtual machines have adequate computing power so an IT department can reduce the number of hosts powered on in a pool.
