CISCO UCS Interview Questions
A Cisco Unified Computing System (UCS) is a data center server computer product line composed of server hardware, virtualization support, switching fabric, and management software, introduced in 2009 by Cisco Systems.
 
UCS products are designed and configured to work together effectively. The goal of the UCS product line is to reduce the number of devices that must be connected, configured, cooled, and secured, and to give administrators the ability to manage everything through a single graphical interface.
 
The term unified computing system is often associated with Cisco. Cisco UCS products can support traditional operating system (OS) and application stacks in physical environments, but are optimized for virtualized environments. Everything is managed through Cisco UCS Manager, a software application that allows administrators to provision server, storage, and network resources all at once from a single pane of glass. Similar offerings to Cisco UCS include HP BladeSystem Matrix, Liquid Computing's LiquidIQ, Sun Modular Datacenter, and InteliCloud 360.
Unified ports are ports on the 6200 series fabric interconnect that can be configured to carry either Ethernet or Fibre Channel traffic.
These ports are not reserved. They cannot be used by a Cisco UCS domain until you configure them.
The Unified Computing System (UCS) fabric interconnect is a networking switch or head unit where the UCS chassis, essentially a rack where server components are attached, connects to. The UCS fabric interconnect is a core part of Cisco’s Unified Computing System, which is designed to improve scalability and reduce the total cost of ownership of data centers by integrating all components into a single platform, which acts as a single unit. Access to networks and storage is then provided through the UCS fabric interconnect.
 
The high-end model is the UCS 6296UP 96-port fabric interconnect, which is touted to promote flexibility, scalability and convergence. It has the following features:
* Bandwidth of up to 1920 Gbps
* High port density of 96 ports
* High-performance, low-latency, lossless 1/10 Gigabit Ethernet and Fibre Channel over Ethernet
* Port-to-port latency as low as 2 μs
* Centralized management under the Cisco UCS Manager
* Efficient cooling and serviceability
* Virtual machine-optimized services through the VM-FEX technology, which enables a consistent operational model and visibility between the virtual and the physical environments
Cluster : The Cisco UCS cluster is a grouping of hypervisors that can be distributed across multiple hosts. In a KVM system, the cluster is analogous to the distributed virtual switch (DVS) in a VMware ESX system.

In the current Cisco UCS KVM implementation, the cluster defines the scope of the port profile and is the boundary of the migration domain. When multiple KVM hosts are associated to a cluster, you can migrate a VM from one host to another within the cluster.
 
Note : In the current Cisco UCS implementation of VM-FEX for KVM, only one cluster, the default cluster, is used. Although you can create additional clusters, you can specify only the default cluster for a VM on the KVM host.
 
 
Port Profiles : Port profiles contain the properties and settings that are used to configure virtual interfaces in Cisco UCS. The port profiles are created and administered in Cisco UCS Manager. After a port profile is created, assigned to, and actively used by a cluster, any changes made to the networking properties of the port profile in Cisco UCS Manager are immediately applied to the cluster with no need for a host reboot.
 
Port Profile Client : The port profile client is a cluster to which a port profile is applied.
 
Note : In the current Cisco UCS implementation of VM-FEX for KVM, the default cluster is the only available port profile client.
 
Hypervisor : The hypervisor supports multiple VMs that run a variety of guest operating systems by providing connectivity between the VMs and the network. The hypervisor for KVM is a host server with Red Hat Enterprise Linux (RHEL) installed. The earliest supported release for VM-FEX is RHEL 6.1, but some features (such as SR-IOV) require a later version.

The hypervisor must have a Cisco VIC adapter installed.

For more information about virtualization using Red Hat Enterprise Linux, see the Red Hat Enterprise Virtualization for Servers Installation Guide available at the following URL: https://www.redhat.com/.
 
libvirt : Libvirt is an open source toolkit that allows you to manage various virtualization technologies such as KVM, Xen, and VMware ESX. Libvirt, which runs on the hypervisor as a service named libvirtd, provides a command-line interface (virsh) and provides the toolkit for a graphical user interface package (virt-manager).
Each virtual machine created and managed by libvirt is represented in the form of a domain XML file.

* For more information about the libvirt virtualization API, see the following URL: https://libvirt.org/.
* For more information about the virsh CLI, see the following URL: https://linux.die.net/man/1/virsh
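Each libvirt domain is described by an XML document, as noted above. The sketch below, using only Python's standard library, shows what reading such a definition might look like; the element names (`name`, `memory`, `vcpu`) are standard libvirt domain elements, but the VM values are illustrative:

```python
import xml.etree.ElementTree as ET

# A minimal libvirt-style domain definition (illustrative values only;
# a real VM-FEX setup would also define interfaces, disks, and so on).
domain_xml = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='KiB'>1048576</memory>
  <vcpu>2</vcpu>
</domain>
"""

def summarize_domain(xml_text):
    """Parse a domain XML document and return (name, memory_kib, vcpus)."""
    root = ET.fromstring(xml_text)
    name = root.findtext('name')
    mem_kib = int(root.findtext('memory'))
    vcpus = int(root.findtext('vcpu'))
    return name, mem_kib, vcpus

print(summarize_domain(domain_xml))  # → ('demo-vm', 1048576, 2)
```

In practice you would retrieve the XML through the libvirt API or `virsh dumpxml` rather than hand-writing it.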
 
MacVTap : MacVTap is a Linux driver that allows the direct attachment of a VM's vNIC to a physical NIC on the host server.
For more information about the MacVTap driver, see the following URL: https://virt.kernelnewbies.org/MacVTap.
 
VirtIO : The VirtIO paravirtualized network driver (virtio-net) runs in the guest operating system of the VM and provides a virtualization-aware emulated network interface to the VM.
For more information about the VirtIO driver, see the following URL: https://wiki.libvirt.org/page/Virtio
Unified Computing System (UCS) blade servers are certified for the following operating systems:
 
* Red Hat Enterprise Linux 4.8
* Red Hat Enterprise Linux 5.3
* Novell SUSE Linux Enterprise Server 11
* Microsoft Windows Server 2008
* Microsoft Windows Server 2003 R2
* VMware vSphere 4
* VMware Infrastructure 3.5 Update 4
Cisco UCS supports several hypervisors including VMware ESX, ESXi, Microsoft Hyper-V, Citrix XenServer and others. Guest operating systems are limited to 255 GB of vRAM and 8 virtual processors in vSphere 4.x, upgraded to 1 TB of vRAM and 32 vCPUs in vSphere 5.0. Additionally, the Cisco UCS Virtual Interface Cards incorporate VM-FEX technology that gives virtual machines direct access to the hardware for improved performance and network visibility.
The Cisco UCS Blade Server Chassis is the Cisco UCS 5108 chassis. It is deliberately dumb, and meant to be this way. It provides power to your blade servers, and has slots to add FEXes for connectivity. That is its only job, as it should be. There are 8 blade slots in a UCS 5108 chassis.
There are two types of Cisco FEXs or Fabric Extenders in the Cisco UCS world:
 
* Blade Chassis FEX
* Rack Mount FEX

Now, we are going to take a look at each of them.
 
Blade Chassis Cisco FEX : If you are using blade servers, you must have FEXes in the back of it to provide connectivity to the Cisco UCS system. The FEXes are connected to the Fabric Interconnects. There are two FEX slots in the Blade Chassis, one for A side connectivity and one for B side connectivity. FEX A connects to Fabric Interconnect A, and FEX B connects to Fabric Interconnect B.
 
The Fabric Interconnects are not cross connected, like your instinct may tell you to do. Failure of a FEX or Fabric Interconnect is handled at the software layer, by either the software running on your blades, or by UCS Manager, it is your choice!
 
Rack Mount Cisco FEX : This operates with a similar principle as the Blade Chassis FEX, except it is mounted in a rack. The purpose of this is to add more ports to your Fabric Interconnect so you can take advantage of the Cisco UCS’s full potential. 54 ports can get eaten up pretty quickly, and this is Cisco’s answer to that problem.
* Cisco UCS showed a lower initial cost and much lower management cost. Both CapEx and OpEx costs are lower initially and ongoing.
 
* While the servers compared are based on similar specs, the Cisco UCS provides a lower hardware cost by deploying simplified intra-chassis switching for blades and minimizing the number of adapters and ports required for rack servers.
 
* In blade configuration, Cisco UCS showed a 38% lower power and cooling cost. In rack configuration, Cisco showed a 2% higher power and cooling cost.
 
* The HPE rack environment used 14 network switches compared with two in the Cisco environment.
 
* The HPE rack environment used 520 cables compared with 48 with Cisco.
Reboot the server and press F8 during boot to enter the CIMC configuration utility. There you can configure NIC redundancy, the IP settings, and the CIMC password. Press F10 to save, then reload the server.
A scrub policy is applied when a service profile is disassociated from a server; it controls what happens to the server's local disk data and BIOS settings. No preserves the existing settings and data, and Yes erases them.
A maintenance policy determines when changes that require a server reboot (such as hardware changes or firmware upgrades) are actually applied to the device.
 They are of three types:
 1. User Ack
 2. Timer
 3. Immediate
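The decision the three policy types encode can be sketched as follows (the function and argument names are illustrative, not a Cisco API):

```python
# Hedged sketch of maintenance-policy behavior: decide whether a pending
# disruptive change (firmware upgrade, hardware change) is applied now.
def apply_change(policy, user_acknowledged=False, timer_expired=False):
    """Return True if the pending change should be applied (server rebooted) now."""
    if policy == "immediate":
        return True                  # reboot as soon as the change is made
    if policy == "user-ack":
        return user_acknowledged     # wait for an admin to acknowledge
    if policy == "timer":
        return timer_expired         # apply at the scheduled maintenance window
    raise ValueError(f"unknown maintenance policy: {policy}")

print(apply_change("immediate"))                  # → True
print(apply_change("user-ack"))                   # → False
print(apply_change("timer", timer_expired=True))  # → True
```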
UCS express is a compact, all-in-one computing and networking system targeted at branch offices. UCS Express is built on the concept of a lean branch office which reduces the branch-office infrastructure footprint, equipment and operating costs, and management complexity.

The main components of UCS Express are the ISR Generation 2 (ISR G2) series router and Services Ready Engine (SRE) multipurpose x86 blade servers. The Cisco ISR G2 functions as a blade-server enclosure for SRE blade servers, which in turn run branch-office applications like DNS, DHCP, etc. The system comes with VMware vSphere Hypervisor for virtualization and CIMC Express for blade server management.
UCS E-Series Server modules are next-generation, power-optimized, x86, Intel® Xeon® 64-bit blade servers designed to be deployed in Cisco Integrated Services Routers Generation 2 (ISR G2). The Cisco UCS E-Series Servers extend the Cisco UCS product portfolio to meet the needs of customers who want to deploy a virtualization-ready computing infrastructure in the branch-office environment while maintaining a lean branch-office architecture. E-series servers are a good replacement for UCS express SRE.
The Cisco UCS B-Series Blade Servers are crucial building blocks of the Cisco Unified Computing System, delivering scalable and flexible computing for today’s and tomorrow’s data center while helping reduce TCO.
The Cisco UCS B-Series Blade Servers are based on industry-standard server technologies and provide :
 
● Up to two Intel Xeon 5500 series multicore processors
● Two optional front-accessible, hot-swappable SAS hard drives
● Support for up to two dual-port mezzanine card connections for up to 40 Gbps of redundant I/O throughput
● Industry-standard double-data-rate 3 (DDR3) memory
● Remote management through an integrated service processor that also executes policy established in Cisco UCS Manager software
● Local keyboard, video, and mouse (KVM) access through a front console port on each server
● Out-of-band access by remote KVM, Secure Shell (SSH) Protocol, and virtual media (vMedia) as well as Intelligent Platform Management Interface (IPMI)
 
The Cisco UCS B-Series offers two blade server models: the Cisco UCS B200 M1 2-Socket Blade Server and the Cisco UCS B250 M1 2-Socket Extended Memory Blade Server (Figure 2). The Cisco UCS B200 M1 is a half-width blade with 12 DIMM slots for up to 96 GB of memory; it supports one mezzanine adapter. The Cisco UCS B250 M1 is a full-width blade with 48 DIMM slots for up to 384 GB of memory; it supports up to two mezzanine adapters. 
B250 M1 Blade Servers
UCS C-Series Rack Servers deliver unified computing in an industry-standard form factor to reduce TCO and increase agility. Each server addresses varying workload challenges through a balance of processing, memory, I/O, and internal storage resources.
The following are the key features of the Cisco UCS :
 
1.  Cisco UCS offers smart play bundles with new bundle packs
 
2.  Second level boot order support for blade and rack servers
 
3.  User-space NIC support for blade and rack servers
 
4.  Firmware download from the local file system
 
5.  Direct connect rack Configuration feature in the Emulator GUI
 
6.  SD card firmware upgrade support for supported servers.
Port mapping from server to fabric interconnect is automatic in UCS. If the number of links from fabric extender to fabric interconnect is only one then all of the 8 ports are pinned to it. If the number of links from the fabric extender to the fabric interconnect is two then ports 1,3,5,7 are pinned to the first link and ports 2,4,6,8 are pinned to the second link. If the number of links from fabric extender to fabric interconnect is four then ports 1,5 are pinned to the first link; port 2,6 are pinned to the second link; port 3,7 are pinned to the third link, and port 4,8 are pinned to the fourth link. Remember that the three link topology is not supported in UCS.
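The static pinning rule above can be expressed as simple modular arithmetic over the 8 chassis slots (this is just the math of the rule, not a Cisco tool):

```python
# Hedged sketch of UCS static pinning for a blade chassis with 8 server
# slots: which fabric-extender uplink each slot is pinned to.
def pinned_link(slot, num_links):
    """Return the 1-based uplink a server slot is pinned to."""
    if num_links not in (1, 2, 4):
        raise ValueError("UCS supports 1, 2, or 4 links (3 is not supported)")
    if not 1 <= slot <= 8:
        raise ValueError("blade chassis slots are numbered 1-8")
    return (slot - 1) % num_links + 1

print([pinned_link(s, 1) for s in range(1, 9)])  # → [1, 1, 1, 1, 1, 1, 1, 1]
print([pinned_link(s, 2) for s in range(1, 9)])  # → [1, 2, 1, 2, 1, 2, 1, 2]
print([pinned_link(s, 4) for s in range(1, 9)])  # → [1, 2, 3, 4, 1, 2, 3, 4]
```

With two links this reproduces odd slots on link 1 and even slots on link 2; with four links, slots 1 and 5 on link 1, slots 2 and 6 on link 2, and so on.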
The affected server interfaces will either lose connectivity or fail over to another fabric extender, depending on whether the interface was created as an HA interface. The UCS M71KR-E and UCS M71KR-Q converged network adapters can fail over Ethernet interfaces if so configured. The UCS 82598KR-CI 10 Gigabit Ethernet Adapter does not have this capability.
 
Fibre Channel interfaces that are pinned to failing fabric extender link will just fail and their HA capability depends purely on the host side multipathing driver. If HA/multipathing is not configured for Ethernet/Fibre Channel then servers connected to the failed link will lose connectivity but the other three links will be working as usual. Remember that no automatic re-pinning will happen. You can manually re-pin the servers using two link topology since three link topology is not supported.
The maximum number of users that can be created on UCS is 48, this includes any kind of user. The maximum number of GUI sessions (accessing UCS manager via HTTP) supported is 256. The maximum number of CLI sessions (telnet and SSH combined) supported is 32. Remember that CLI and GUI sessions are treated as separate; hence you can have at the max 256 GUI sessions and 32 CLI sessions at the same time.
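The limits above are easy to misremember in an interview; as a memory aid, here is the arithmetic in a trivial sketch (not a UCS Manager interface):

```python
# Session and user limits from the text: 48 users total, 256 GUI (HTTP)
# sessions, 32 CLI (telnet + SSH) sessions, counted separately.
LIMITS = {"users": 48, "gui": 256, "cli": 32}

def can_open_session(kind, current_count):
    """Return True if one more session/user of this kind fits under the limit."""
    return current_count + 1 <= LIMITS[kind]

print(can_open_session("cli", 31))  # → True  (the 32nd CLI session is allowed)
print(can_open_session("cli", 32))  # → False (a 33rd would exceed the limit)
print(can_open_session("gui", 255)) # → True  (GUI sessions counted separately)
```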
If you have forgotten the password, you can set a new one. Connect to the console of the UCS 6100 series fabric interconnect. Reload the fabric interconnect and, when it boots, press Ctrl+L or Ctrl+Shift+R to reach the loader prompt, then boot the kickstart image. If you have two fabric interconnects connected in HA, first reload the subordinate fabric interconnect and bring it to the loader prompt, then reload the primary and bring it to the loader prompt, and then load the kickstart image on the primary.
 
Configure the admin password using the "admin-password" command from config terminal mode. Then load the system image on the fabric interconnect (or, in an HA pair, on the primary fabric interconnect). The password is now set to the new password you just configured. With HA, you can then load the kickstart and system images on the subordinate fabric interconnect.
Cisco UCS Central allows you to manage multiple Cisco UCS domains through a single management point. Cisco UCS Central works with Cisco UCS Manager to provide a scalable management solution for a growing Cisco UCS environment. Cisco UCS Central does not replace Cisco UCS Manager, which is the basic engine for managing a Cisco UCS domain. Instead, it builds on the capabilities provided by Cisco UCS Manager and works with Cisco UCS Manager to effect changes in individual domains.
 
For a Cisco UCS domain to be managed by Cisco UCS Central, you must first register that domain with Cisco UCS Central.
 
Cisco UCS Central allows you to ensure global policy compliance, with subject-matter experts choosing the resource pools and policies that need to be enforced globally or managed locally. With a simple drag-and-drop operation, service profiles can be moved between geographies to enable fast deployment of infrastructure, when and where it is needed, to support business workloads.
 
You can use Cisco UCS Central to view and manage data that is distributed over a large number of individual domains. For example, you can do the following in Cisco UCS Central:
 
* View the hardware inventory in one or more registered domains.
* Launch the KVM Console to view an individual server in a registered domain.
* Launch Cisco UCS Manager in a registered domain.
* View faults, events, and audit logs in one or more registered domains.
* Handle one-to-many functions, such as global ID pools, global policies, and firmware management across all registered domains.
 
Cisco UCS Central does not reduce or change any local management capabilities of Cisco UCS Manager, such as its API. This allows administrators to continue using Cisco UCS Manager the way they did before, even in the presence of Cisco UCS Central, and allows all existing third-party integrations to continue to operate without change. Administrators can selectively allow policies to be globalized, providing an easy transition to centralized management.
Cisco UCS Central includes the following features:
 
Centralized inventory : Manual inventory spreadsheets are no longer needed. Cisco UCS Central automatically aggregates a global inventory of all Cisco UCS components, organized by domain, with customizable refresh schedules. Cisco UCS Central provides even easier integration with ITIL processes, with direct access to the inventory through an XML interface.
 
Centralized fault summary : Quickly and easily view the status of all registered Cisco UCS domains with a quick-look global fault summary panel, a fault summary organized by domain and fault type, with views into individual Cisco UCS domains for greater fault detail and more rapid problem resolution.
 
Centralized policy-based firmware upgrades : Take the guesswork and manual errors out of updating infrastructure firmware. You can download firmware updates automatically from the Cisco.com website to a firmware library within Cisco UCS Central. Then you can update the firmware for registered domains, globally or selectively, on an automated schedule or as your business workloads allow. Managing firmware centrally helps ensure compliance with IT standards and makes reprovisioning of resources a point-and-click operation.
 
Global ID pooling : Eliminate identifier conflicts and help ensure portability of software licenses with Cisco UCS Central. Centralize the sourcing of all IDs, such as universally unique identifiers (UUIDs), MAC addresses, IP addresses, and worldwide names (WWNs), from global pools and gain real-time ID use summaries. Centralizing server identifier information makes it simple to, for example, move server identifiers between Cisco UCS domains anywhere in the world and reboot an existing workload to run on the new server.
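The idea of a global ID pool can be sketched as a simple allocator that hands out addresses from a contiguous block and refuses to reuse them (illustrative code, not Cisco UCS Central; 00:25:b5 is the default Cisco UCS MAC prefix):

```python
# Hedged sketch of a global MAC pool: sequential allocation with
# conflict prevention, the core behavior described above.
class MacPool:
    def __init__(self, first_mac, size):
        self.base = int(first_mac.replace(":", ""), 16)
        self.size = size
        self.assigned = {}  # mac (as int) -> owner

    def allocate(self, owner):
        """Assign the next free MAC in the block to `owner`."""
        for offset in range(self.size):
            mac = self.base + offset
            if mac not in self.assigned:
                self.assigned[mac] = owner
                raw = f"{mac:012x}"
                return ":".join(raw[i:i + 2] for i in range(0, 12, 2))
        raise RuntimeError("MAC pool exhausted")

pool = MacPool("00:25:b5:00:00:00", size=256)
print(pool.allocate("db-server-1"))  # → 00:25:b5:00:00:00
print(pool.allocate("db-server-2"))  # → 00:25:b5:00:00:01
```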
 
Domain grouping and subgrouping : Simplify policy management by creating domain groups and subgroups. A domain group is an arbitrary grouping of Cisco UCS domains that can be used to group systems into geographical or organizational groups. Each domain group can have up to five levels of subdomains, which makes it easy to manage policy exceptions when administering large numbers of Cisco UCS domains. Each subdomain has a hierarchical relationship with the parent domain.
 
Global administrative policies : Help ensure compliance and staff efficiency with global administrative policies. These policies are defined at the domain group level and can manage anything in the infrastructure, from date and time and user authentication to equipment power and system event log (SEL) policies.
 
Cisco UCS Central XML API : Cisco UCS Central, just like Cisco UCS Manager, has a high-level industry-standard XML API for interfacing with existing management frameworks and orchestration tools. The XML API for Cisco UCS Central is similar to the XML API for Cisco UCS Manager, making integration with high-level management software very fast.
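As a small illustration of the XML API style, the sketch below builds a login request body with the standard library. The `aaaLogin` method shown is from the Cisco UCS Manager XML API; treat the endpoint and transport details as assumptions, since nothing is sent here:

```python
import xml.etree.ElementTree as ET

def build_login_request(username, password):
    """Build a UCS-style XML API login body (no request is sent)."""
    el = ET.Element("aaaLogin", inName=username, inPassword=password)
    return ET.tostring(el, encoding="unicode")

body = build_login_request("admin", "secret")
print(body)  # → <aaaLogin inName="admin" inPassword="secret" />
```

A real client would POST this body to the manager's `/nuova` endpoint over HTTPS and then include the returned session cookie in subsequent method calls.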
 
Cisco UCS Manager backups : The backup facility in Cisco UCS Central enables you to quickly and efficiently back up the configuration from Cisco UCS Manager in registered Cisco UCS domains. You can configure automated backups to occur on a specific schedule, or perform manual backups as your business needs require.
Cisco UCS Central creates a hierarchy of Cisco UCS domain groups for managing multiple Cisco UCS domains. You will have the following categories of domain groups in Cisco UCS Central:
 
* Domain Group : A group that contains multiple Cisco UCS domains. You can group similar Cisco UCS domains under one domain group for simpler management.
 
* Ungrouped Domains : When a new Cisco UCS domain is registered in Cisco UCS Central, it is added to the ungrouped domains. You can assign the ungrouped domain to any domain group.
 
If you have created a domain group policy and a newly registered Cisco UCS domain meets the qualifiers defined in the policy, the domain will automatically be placed under the domain group specified in the policy. Otherwise, it will be placed in the ungrouped domains category, from which you can assign it to a domain group.
 
Each Cisco UCS domain can only be assigned to one domain group. You can assign or reassign membership of the Cisco UCS domains at any time. When you assign a Cisco UCS domain to a domain group, the Cisco UCS domain will automatically inherit all management policies specified for the domain group.
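The inheritance described above is a walk up the domain-group hierarchy: a domain resolves a policy from the nearest ancestor group that defines it. A hedged sketch (class and policy names are illustrative, not UCS Central objects):

```python
# Minimal model of hierarchical policy resolution for domain groups.
class DomainGroup:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.policies = {}  # policy name -> value

def resolve_policy(group, policy_name):
    """Return the policy value from the nearest ancestor that defines it."""
    while group is not None:
        if policy_name in group.policies:
            return group.policies[policy_name]
        group = group.parent
    return None  # nothing in the hierarchy defines this policy

root = DomainGroup("root")
root.policies["ntp"] = "pool.ntp.org"
emea = DomainGroup("EMEA", parent=root)
emea.policies["timezone"] = "Europe/London"

print(resolve_policy(emea, "timezone"))  # → Europe/London (defined locally)
print(resolve_policy(emea, "ntp"))       # → pool.ntp.org (inherited from root)
```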
 
Before adding a Cisco UCS domain to a domain group, make sure to change the policy resolution controls to local in the Cisco UCS domain. This will avoid accidentally overwriting service profiles and maintenance policies specific to that Cisco UCS domain. Even when you have enabled auto discovery for the Cisco UCS domains, enabling local policy resolution will protect the Cisco UCS domain from accidentally overwriting policies.
 
Cisco UCS Central acts as a global policy server for registered Cisco UCS domains. Configuring global Cisco UCS Central policies for remote Cisco UCS domains involves registering domains and assigning registered domains to domain groups.

In addition, the policy import capability allows a local policy to be globalized inside of Cisco UCS Central. You can then apply these global policies to other registered Cisco UCS domains.

You can define the following global policies in Cisco UCS Central that are resolved by Cisco UCS Manager in a registered Cisco UCS domain :

* Backup Policies
* Call Home Policy
* Capability Catalog
* Core Files Export Policy
* Fault Collection Policy
* Firmware Image Management
* Host Firmware Package
* Management Interface Monitoring Policy
* Role-Based Access Control and Remote Authentication Policies
* SNMP Policy
* Syslog Policy
* Time Zone and NTP Policies
* Equipment Policies 
Pools are collections of identities, or physical or logical resources, that are available in the system. All pools increase the flexibility of service profiles and allow you to centrally manage your system resources. Pools that are defined in Cisco UCS Central are called Global Pools and can be shared between Cisco UCS domains. Global Pools allow centralized ID management across Cisco UCS domains that are registered with Cisco UCS Central. By allocating ID pools from Cisco UCS Central to Cisco UCS Manager, you can track how and where the IDs are used, prevent conflicts, and be notified if a conflict occurs. Pools that are defined locally in Cisco UCS Manager are called Domain Pools.

You can pool identifying information, such as MAC addresses, to preassign ranges for servers that host specific applications. For example, you can configure all database servers across Cisco UCS domains within the same range of MAC addresses, UUIDs, and WWNs.
Cisco UCS Director is a heterogeneous platform for private cloud Infrastructure as a Service (IaaS). It supports a variety of hypervisors along with Cisco and third-party servers, network, storage, converged and hyperconverged infrastructure across bare-metal and virtualized environments.
Orchestration engine : 
* On-demand rollout of infrastructure components, bare-metal servers, and virtualized resources with automated configuration, deployment, and management of data center infrastructure stacks.
* Library of more than 2500 multivendor tasks for creating end-to-end infrastructure services.
* Detection of resource changes coupled with resource movement to update workflows.
* Extended automation of tasks such as Bare-metal server software installation; physical network and storage provisioning; Storage-Area Network (SAN) zoning; virtual machine provisioning with software stack and database installation; disaster-recovery failover; and decommissioning of servers, hosts, and virtual machines.
 
Cisco Intersight enabled CI/CD for Cisco UCS Director :
* Ability to automatically download periodic software enhancements, upgrades, bug fixes, and updates to a UCS Director installation through Cisco Intersight.
* Enables Continuous Integration/Continuous Delivery (CI/CD) to deliver Software-as-a-Service (SaaS) benefits for on-premises UCS Director installations inside customers' data centers.
* The entire UCS Director application, including Base Platform Pack, System Update Manager and infrastructure specific Connector Packs, can be automatically downloaded through Cisco Intersight.
 
Cisco Intersight enabled Connected TAC support :
* Enables proactive notifications and "one-click" diagnostics collection for UCS Director installation from Cisco Intersight platform.
* Provides a consistent means of supporting all Cisco UCS and HyperFlex systems through Cisco Intersight.
 
Self-service portal :
* Allows end users to order and deploy new infrastructure instances conforming to IT-prescribed policies and governance with approval processes, budget validation, and lifecycle management.
* With integrated service capabilities, delivers IaaS in shared service model (VPC Shared); users can provision virtual machines and applications from a pool of assigned resources by using predefined policies and workflow service requests.
* Extensible service platform creates new service offers with ability to catalog services.
* Offers metering of resource usage and consumption for showback reports that can be exported to third-party billing systems.
 
Broad heterogeneous support :
* Ability to automate a wide array of tasks and use cases across a broad variety of supported Cisco and third-party hardware and software data center components (refer to Table 2).
* Automated setup of VCE VxBlock, NetApp FlexPod, IBM VersaStack, and Pure Storage FlashStack converged infrastructure.
* Automation for Cisco Unified Computing System™ (Cisco UCS® ) servers, Cisco HyperFlex™ hyperconverged infrastructure, and Cisco Nexus® switches.
* Provisioning and management of physical and virtual switches and dynamic network technologies.
* Automated provisioning and management of storage virtual machines, filers, virtual filers, Logical Unit Numbers (LUNs), and volumes.
* Discovery, mapping, and monitoring of physical and logical data center topologies.
 
Optimized Multi-Node deployment for scaling :
* Multi-node configuration has been optimized to handle scale with one primary node and one database node.
 
Support for Cisco Application Centric Infrastructure (Cisco ACI® ) and ACI Anywhere :
* Workflows orchestrate the Application Policy Infrastructure Controller (APIC) configuration and management tasks.
* Support for multitenancy, which enables policy-based and shared use of the infrastructure, and the ability to define contracts between different container tiers allow you to apply rules between tiers.
* Cisco ACI Multi-Site Controller Automation enables centralized automation, configuration, and management, including Policy-Based Redirect (PBR).
 
Developer functions :
* Native Java editor builds custom tasks with Cisco UCS Director Java libraries that enable orchestration operations.
* Native execution of commands or scripts is performed on Microsoft’s PowerShell agent.
* More than 400 predefined and certified code samples are available from a free community site. 
HyperFlex systems combine software-defined storage and data services software with Cisco UCS (unified computing system), a converged infrastructure system that integrates computing, networking and storage resources to increase efficiency and enable centralized management.
1. None : No management IP address is assigned to the service profile. The management IP address is set based on the CIMC management IP address settings on the server.

2. Static : A static management IP address is assigned to the service profile, based on the information entered in this area.

3. Pooled : A management IP address is assigned to the service profile from the management IP address pool.
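The three assignment modes amount to a simple resolution rule, sketched here with illustrative names (the pool is just a list, not a UCS object):

```python
# Hedged sketch of management-IP resolution for the None/Static/Pooled
# policies described above.
def resolve_mgmt_ip(policy, static_ip=None, pool=None, cimc_ip=None):
    if policy == "none":
        return cimc_ip      # fall back to the server's own CIMC setting
    if policy == "static":
        return static_ip    # use the address entered in the profile
    if policy == "pooled":
        return pool.pop(0)  # take the next free address from the pool
    raise ValueError(f"unknown policy: {policy}")

addresses = ["10.0.0.10", "10.0.0.11"]
print(resolve_mgmt_ip("pooled", pool=addresses))        # → 10.0.0.10
print(resolve_mgmt_ip("static", static_ip="10.0.0.5"))  # → 10.0.0.5
print(resolve_mgmt_ip("none", cimc_ip="192.168.1.1"))   # → 192.168.1.1
```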
 
Step 1 :  In the Navigation pane, click the Equipment tab.
Step 2 :  On the Equipment tab, expand Equipment > Chassis > Chassis Number > Servers.
Step 3 :  Click the server for which you want to configure an IP address.
Step 4 :  In the Work pane, click the Inventory tab.
Step 5 :  Click the CIMC subtab.
Step 6  : In the Actions area, click Create/Modify Static Management IP
Process to failover

1. Login to the primary 6120 via the UCS cluster IP address.
2. Verify the current primary 6120 in the cluster.
3. Enter local management on the primary via “connect local-mgmt” command.
4. Issue the “cluster lead x” command to make the subordinate switch become the primary. Replace “x” with the correct switch letter.
5. Verify that the role has changed on the previously subordinate switch by SSH’ing into it and issuing the “show cluster state” command.
 The switch should now show up as the primary.
 
 1. From FI-B to FI-A (make FI-A the primary):
  UCS-B# connect local-mgmt b
  UCS-B(local-mgmt)# cluster lead a
 
 2. From FI-A to FI-B (make FI-B the primary):
  UCS-A# connect local-mgmt a
  UCS-A(local-mgmt)# cluster lead b
* A UUID is a 128-bit number (32 hexadecimal digits, conventionally written in five groups as 8-4-4-4-12). It is intended to uniquely identify a component worldwide.
* A UUID pool is a collection of such IDs; a UUID from the pool is assigned to each compute node on a network to identify that node globally.
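Python's standard library makes the format easy to verify for yourself:

```python
import uuid

# A UUID is 128 bits, rendered as 32 hex digits (grouped 8-4-4-4-12).
u = uuid.uuid4()
print(u)                          # e.g. 3f2504e0-4f89-41d3-9a0c-0305e82c3301
print(len(u.hex))                 # → 32 (hex digits)
print(u.int.bit_length() <= 128)  # → True (fits in 128 bits)
```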
No, Nexus 5000 is a general purpose L2 switch and UCS 6100 series is an integral part of the unified computing system. The software that runs on the UCS 6100 is different from the Nexus 5000, as the UCS 6100 has both control and management planes.
A service profile typically includes four types of information:
 
a) Server definition : It defines the resources (e.g. a specific server or a blade inserted to a specific chassis) that are required to apply to the profile.
 
b) Identity information : Identity information includes the UUID, MAC address for each virtual NIC (vNIC), and WWN specifications for each HBA.
 
c) Firmware revision specifications : These are used when a certain tested firmware revision is required to be installed or for some other reason a specific firmware is used.
 
d) Connectivity definition : It is used to configure network adapters, fabric extenders, and parent interconnects, however this information is abstract as it does not include the details of how each network component is configured.
 
A service profile is created by the UCS server administrator. This service profile leverages configuration policies that were created by the server, network, and storage administrators. Server administrators can also create a Service profile template which can be later used to create Service profiles in an easier way. A service template can be derived from a service profile, with server and I/O interface identity information abstracted. Instead of specifying exact UUID, MAC address, and WWN values, a service template specifies where to get these values. For example, a service profile template might specify the standard network connectivity for a web server and the pool from which its interface's MAC addresses can be obtained. Service profile templates can be used to provision many servers with the same simplicity as creating a single one.
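The template-to-profile relationship above can be sketched as follows; the dataclass and pool shapes are illustrative stand-ins, not UCS Manager objects:

```python
# Hedged sketch of instantiating service profiles from a template: the
# template names the pools, and each profile draws concrete IDs from them.
from dataclasses import dataclass

@dataclass
class ServiceProfile:
    name: str
    uuid: str
    mac: str

def instantiate(template_name, count, uuid_pool, mac_pool):
    """Create `count` profiles, pulling identities from the given pools."""
    return [
        ServiceProfile(f"{template_name}-{i + 1}", uuid_pool.pop(0), mac_pool.pop(0))
        for i in range(count)
    ]

uuids = ["uuid-0001", "uuid-0002"]
macs = ["00:25:b5:00:00:01", "00:25:b5:00:00:02"]
profiles = instantiate("web-server", 2, uuids, macs)
print(profiles[0].name, profiles[0].mac)  # → web-server-1 00:25:b5:00:00:01
```

The point the text makes is visible here: the template never hard-codes a UUID or MAC; every concrete identity comes from a pool at instantiation time.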
There are two types of service profiles in a UCS system:
 
a) Service profiles that inherit server identity : These service profiles are similar in concept to a rack-mounted server. They use the burned-in values (MAC addresses, WWN addresses, BIOS version and settings, etc.) of the hardware. Because they depend on these burned-in values, these profiles are not easily portable and cannot simply be moved from one server to another. In other words, these profiles exhibit a 1:1 mapping and thus require changes when moving from one server to another.
 
b) Service profiles that override server identity : These service profiles exhibit the stateless-computing nature of the UCS system. They take their resources (MAC addresses, WWN addresses, BIOS version, etc.) from resource pools already created in UCS Manager. The settings or values from these resource pools override the burned-in values of the hardware. Hence these profiles are very flexible and can be moved easily from one server to another, and this movement is transparent to the network. In other words, these profiles provide a 1:many mapping and require no changes when moving from one server to another.

Sources: Cisco, and more.