
Wednesday, 30 August 2017

PSC Configuration Maximums






I hope you enjoy these posts. Thanks for reading!
Be social and share this on social media if you feel it is worth sharing.

Sayed

PSC Deployment model

The Platform Services Controller (PSC) Deployment Models:
The deployment model you choose plays an important role in how your environment can be expanded later.

The PSC can be deployed in two ways:
  • Embedded PSC
  • External PSC
vCenter Server with Embedded PSC:
The embedded PSC is meant for standalone sites where vCenter Server will be the only SSO-integrated solution.



  • Sufficient for most environments; easiest to deploy and maintain
  • Aimed at minimizing fault domains; use in conjunction with only one VMware product or solution
  • Multiple standalone instances supported
  • Replication between embedded instances is not supported
  • Supports Windows & Appliance

vCenter Server with External PSC:
This configuration allows multiple vCenter Servers to link to a single, shared PSC.





  • Recommended if deploying or growing to multiple vCenter Server instances that need to be linked
  • Reduces footprint by sharing Platform Services Controller across several vCenter Servers
  • Deploy more than one PSC to provide resilience within the environment
  • Supports Windows & Appliance
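
Because multiple external PSCs can serve the same SSO domain, a vCenter Server can be repointed to a surviving PSC if its original one fails. A minimal sketch, assuming the VCSA 6.x appliance shell; psc02.lab.local is a placeholder FQDN for the replacement PSC, which must be in the same SSO domain:

# cmsso-util repoint --repoint-psc psc02.lab.local

After the repoint completes, this vCenter Server authenticates against the new PSC.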



We will discuss the PSC in more detail; our next post will cover PSC high availability. I hope you enjoy these posts.
Thanks for reading!

Be social and share this on social media if you feel it is worth sharing.


Sayed

Platform Services Controller (PSC) Walkthrough

Platform Services Controller (PSC) is a component of the VMware Cloud Infrastructure Suite. The PSC deals with identity management for administrators and applications that interact with the vSphere platform. It is available on both the Windows vCenter Server ISO and the vCenter Server Appliance (VCSA) ISO.

Key Features and Advantages:
  • PSC 6.0 remains a multi-master model, as introduced in vSphere 5.5 in the form of vCenter Single Sign-On.
  • It can be deployed either in an Appliance-based or Windows-based flavor, both able to participate in multi-master replication. (With vSphere 5.x, the vCenter Server Appliance's embedded SSO was not supported to replicate with other SSO nodes)
  • Both Appliance-based and Windows-based PSCs can interoperate with Appliance-based or Windows-based vCenter Servers.
  • Handles the storing and generation of the SSL certificates within your vSphere environment.
  • Handles the storing and replication of your VMware License Keys
  • Handles the storing and replication of your permissions via the Global Permissions layer.
  • It can handle the storing and replication of your Tags and Categories.
  • It has a built-in feature for automatic replication between different, logical SSO sites.
  • There is a single default domain for the identity sources.
The complete list of components installed along with the Platform Services Controller is mentioned below:
  • VMware Identity Management Service
  • VMware Appliance Management Service (only in Appliance-based PSC)
  • VMware License Service
  • VMware Component Manager
  • VMware HTTP Reverse Proxy
  • VMware Service Control Agent
  • VMware Security Token Service
  • VMware Common Logging Service
  • VMware Syslog Health Service
  • VMware Authentication Framework
  • VMware Certificate Service
  • VMware Directory Service
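
To verify these services on a deployed PSC node, you can list them from the appliance shell. A quick sketch, assuming a VCSA/PSC 6.x appliance:

# service-control --status --all

This shows every registered service and whether it is currently running or stopped.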
PSC 6.0 is supported with:
  • VMware vCenter Server
  • VMware vCenter Inventory Services
  • VMware vSphere Web Client
  • VMware Log Browser
  • VMware NSX for vSphere
  • VMware Site Recovery Manager
  • VMware vCloud Air
  • VMware vCloud Director
  • VMware vRealize Automation Center
  • VMware vRealize Orchestrator
  • VMware vSphere Data Protection
  • VMware vShield Manager
The PSC can be deployed in two ways:
  • Embedded PSC
  • External PSC
We will discuss the PSC deployment models in detail in a coming post.
I hope you enjoy these posts. Thanks for reading!
Be social and share this on social media if you feel it is worth sharing.


Sayed


Sunday, 28 May 2017

vCenter Log Files and Locations


For vCenter Server running on Windows, the log files are located in C:\ProgramData\VMware\VMware VirtualCenter\Logs.
For the vCenter Server Appliance, the log files are located in /var/log/vmware/vpx.

  • vpxd.log: The main vCenter Server log, consisting of all vSphere Client and WebServices connections, internal tasks and events, and communication with the vCenter Server Agent (vpxa) on managed ESXi/ESX hosts.
  • vpxd-profiler.log: Profiled metrics for operations performed in vCenter Server. Used by the VPX Operational Dashboard (VOD) accessible at https://VCHostnameOrIPAddress/vod/index.html.
  • vpxd-alert.log: Non-fatal information logged about the vpxd process.
  • cim-diag.log and vws.log: Common Information Model monitoring information, including communication between vCenter Server and managed hosts' CIM interface.
  • drmdump: Actions proposed and taken by VMware Distributed Resource Scheduler (DRS), grouped by the DRS-enabled cluster managed by vCenter Server. These logs are compressed.
  • ls.log: Health reports for the Licensing Services extension, connectivity logs to vCenter Server.
  • vimtool.log: Dump of strings used during the installation of vCenter Server, with hashed information for DNS, username, and output for JDBC creation.
  • stats.log: Provides information about the historical performance data collection from the ESXi/ESX hosts.
  • sms.log: Health reports for the Storage Monitoring Service extension, connectivity logs to vCenter Server, the vCenter Server database, and the xDB for vCenter Inventory Service.
  • eam.log: Health reports for the ESX Agent Monitor extension, connectivity logs to vCenter Server.
  • catalina.<date>.log: Connectivity information and status of the VMware Web Management Services.
  • jointool.log: Health status of the VMwareVCMSDS service and individual ADAM database objects, internal tasks and events, and replication logs between linked-mode vCenter Servers.
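
When troubleshooting, a quick first step is to watch the main vpxd log live and filter it for errors. A minimal sketch on the appliance, using the paths above:

# tail -f /var/log/vmware/vpx/vpxd.log
# grep -i error /var/log/vmware/vpx/vpxd.log | tail -20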


ESXi Log Files and Locations

For troubleshooting any issue it is very important to locate and analyse the logs. Let's have a look at each log, its location, and its use.

  • /var/log/auth.log: ESXi Shell authentication success and failure.
  • /var/log/dhclient.log: DHCP client service, including discovery, address lease requests and renewals.
  • /var/log/esxupdate.log: ESXi patch and update installation logs.
  • /var/log/lacp.log: Link Aggregation Control Protocol logs.
  • /var/log/hostd.log: Host management service logs, including virtual machine and host Tasks and Events, communication with the vSphere Client and vCenter Server vpxa agent, and SDK connections.
  • /var/log/hostd-probe.log: Host management service responsiveness checker.
  • /var/log/rhttpproxy.log: HTTP connections proxied on behalf of other ESXi host web services.
  • /var/log/shell.log: ESXi Shell usage logs, including enable/disable events and every command entered.
  • /var/log/sysboot.log: Early VMkernel startup and module loading.
  • /var/log/boot.gz: A compressed file that contains boot log information.
  • /var/log/syslog.log: Management service initialization, watchdogs, scheduled tasks and DCUI use.
  • /var/log/usb.log: USB device arbitration events, such as discovery and pass-through to virtual machines.
  • /var/log/vobd.log: VMkernel Observation events.
  • /var/log/vmkernel.log: Core VMkernel logs, including device discovery, storage and networking device and driver events, and virtual machine startup.
  • /var/log/vmkwarning.log: A summary of Warning and Alert log messages excerpted from the VMkernel logs.
  • /var/log/vmksummary.log: A summary of ESXi host startup and shutdown, and an hourly heartbeat with uptime, number of virtual machines running, and service resource consumption.
  • /var/log/Xorg.log: Video acceleration.
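
Because these logs live on the host itself, it is common to forward them to a remote syslog server so they survive reboots and host failures. A sketch, assuming ESXi 5.x/6.x; 192.168.1.10 is a placeholder collector address:

# esxcli system syslog config set --loghost='tcp://192.168.1.10:514'
# esxcli system syslog reload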


VMware NSX

                                

VMware NSX Overview

VMware NSX is the network virtualization and security platform for the Software-Defined Data Center (SDDC), delivering the operational model of a virtual machine for entire networks. With NSX, network functions including switching, routing, and firewalling are embedded in the hypervisor and distributed across the environment.
This effectively creates a “network hypervisor” that acts as a platform for virtual networking and security services. Similar to the operational model of virtual machines, virtual networks are programmatically provisioned and managed independently of underlying hardware. NSX reproduces the entire network model in software, enabling any network topology—from simple to complex multitier networks—to be created and provisioned in seconds. Users can create multiple virtual networks with diverse requirements, leveraging a combination of the services offered via NSX to build inherently more secure environments.

VMware NSX comprises the following components, which are deployed in the vSphere environment:
NSX vSwitch
The NSX vSwitch is the NSX data plane. On an ESXi host, the NSX vSwitch is based on the vSphere Distributed Switch, whilst on other hypervisors it is based on Open vSwitch. The NSX vSwitch is installed as a set of .vib files which update the ESXi kernel to allow for advanced network features such as distributed routing, distributed firewall and VXLAN capabilities, along with providing access-level switching within the hypervisor. The NSX vSwitch allows logical networks to be created independent of the underlying networking/VLANs, and as such is a core component of network virtualization.
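
On a prepared ESXi host you can confirm these kernel modules are installed by listing the VIBs. A quick sketch; the exact VIB names (for example esx-vxlan and esx-vsip) vary by NSX version, so treat them as examples:

# esxcli software vib list | grep esx-v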
NSX Controller
The NSX controller is deployed as a ‘cluster’ of highly available virtual appliances which are responsible for the programmatic deployment of virtual networks across the entire NSX architecture. The controller is essentially the ‘control plane’. Traffic doesn’t pass through the controller, instead the controller is responsible for providing configuration to other NSX components such as the NSX vSwitches and gateways. It’s worth noting that any failure in the control plane will not affect data plane operations.

NSX Manager
The NSX Manager is a web-based management tool which is used to interact with the NSX controllers using the NSX APIs. The NSX Manager allows you to configure, administer and troubleshoot NSX components and their configuration. NSX Manager integrates fully with vCenter, and provides a single point of administration for NSX.
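
Because the NSX APIs are REST-based, most of what the NSX Manager UI does can also be scripted. A minimal sketch, assuming NSX-v and a manager at the placeholder address nsxmgr.lab.local; the endpoint shown is the transport-zone listing path as documented for NSX-v of this era, so treat it as an assumption:

# curl -k -u 'admin:password' https://nsxmgr.lab.local/api/2.0/vdn/scopes

This returns the configured transport zones (scopes) as XML.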

NSX Gateways/Edge
NSX Edge services and gateways provide the path in and out of the NSX defined logical networks. NSX gateways are normally deployed as highly available pairs/clusters and provide services such as routing, tunnelling, firewall and load balancing at the edge of one or more virtual NSX defined networks. NSX gateways are managed by the NSX controller.


Additional Functional Components


VXLAN

VXLAN is an encapsulation protocol which runs an overlay (virtualized) network on top of an existing Layer 3 infrastructure. It creates tunnels between physical hosts using VTEPs (VXLAN Tunnel End Points). In simple terms, NSX creates VMkernel adapters on the vDS and uses them to terminate the tunnels. The number of VMkernel adapters is decided by the teaming/failover policy and the number of NICs. For example, assume we have two NICs per ESXi host assigned to the vDS; if we use the default policy, "Route based on originating virtual port", then each host will need two VTEP IPs.
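
Once a host is prepared for VXLAN, the VTEPs appear as ordinary VMkernel adapters. A quick way to see them on the host (a sketch; look for the vmk interfaces on the NSX-created port groups):

# esxcli network ip interface list
# esxcli network ip interface ipv4 get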

TRANSPORT ZONE

A transport zone controls which hosts a logical switch can reach. It can span one or more vSphere clusters, and therefore dictates which clusters, and which VMs, can participate in a particular network. Most commonly, people create a single transport zone for all clusters within vCenter to keep things simple.

SEGMENT ID  

A segment ID pool provides an ID for each VXLAN network. When a logical switch is created, it is assigned a segment ID from the pool, so the pool range decides how many logical switches we can create.


Wednesday, 24 May 2017

Raw Device Mapping (RDM) in detail

RDM is a mapping file used to map a LUN directly to a VM, bypassing the VMFS layer. The LUN can then be formatted with any file system (NTFS or FAT32, for example) without formatting it as VMFS and placing a VMDK file on top of it.

It is a method of giving a virtual machine direct access to a LUN on an iSCSI or Fibre Channel storage system. The RDM is essentially a mapping file, placed on a VMFS volume, that acts as a proxy for the raw physical storage device.

The RDM is located in the virtual machine's directory. Here is an example listing for a VM called VM-01 which has a pRDM LUN and a vRDM LUN (covered in detail below).


~ #  cd /vmfs/volumes/SAN-DATASTORE/VM-01

/vmfs/volumes/507fd84d-e9b583ac-b370-001b244d6cb2/VM-01 # ls -la
drwxr-xr-x    1 root     root          2800 Nov  3 12:18 .
drwxr-xr-t    1 root     root          2100 Nov  2 21:33 ..
-rw-r--r--    1 root     root            73 Oct 20 10:21 VM-01-e61f5754.hlog
-rw-------    1 root     root     2147483648 Nov  3 11:39 VM-01-e61f5754.vswp
-rw-------    1 root     root     42949672960 Nov  3 12:49 VM-01-flat.vmdk
-rw-------    1 root     root          8684 Nov  3 11:40 VM-01.nvram
-rw-------    1 root     root           541 Nov  3 11:38 VM-01.vmdk
-rw-r--r--    1 root     root             0 Oct 19 15:10 VM-01.vmsd
-rwxr-xr-x    1 root     root          3679 Nov  3 12:18 VM-01.vmx
-rw-------    1 root     root             0 Oct 20 19:04 VM-01.vmx.lck
-rw-r--r--    1 root     root           260 Nov  3 12:18 VM-01.vmxf
-rwxr-xr-x    1 root     root          3680 Nov  3 12:18 VM-01.vmx~
-rw-------    1 root     root     32212254720 Nov  3 12:18 VM-01_1-rdm.vmdk ---------------------> vRDM mapping file
-rw-------    1 root     root           482 Nov  3 12:18 VM-01_1.vmdk ----------------------> Descriptor File
-rw-------    1 root     root     32212254720 Nov  3 12:18 VM-01_2-rdmp.vmdk ---------------------> pRDM mapping file
-rw-------    1 root     root           494 Nov  3 12:18 VM-01_2.vmdk ----------------------> Descriptor File
-rw-r--r--    1 root     root        579252 Oct 20 10:21 vmware-1.log
-rw-r--r--    1 root     root        139522 Oct 20 19:03 vmware-2.log
-rw-r--r--    1 root     root        307364 Nov  3 12:19 vmware.log


Types of Raw Device Mapping
1. Virtual compatibility mode:
·        All SCSI commands from the VM to the LUN pass through the VMFS layer except READ and WRITE, which are passed directly to the LUN.

·        If you check the VM's device manager, you will see the RDM LUN listed as a VMware Virtual Disk SCSI Disk Device. This is exactly the same as a VMDK drive added to the VM. The physical characteristics of the LUN are hidden from the VM.

·        It preserves the ability to perform virtual machine snapshots.
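
To create a virtual compatibility mode RDM from the command line, vmkfstools uses the -r flag (contrast with -z for physical mode, shown later in this post). A sketch with placeholder device and datastore paths:

# vmkfstools -r /vmfs/devices/disks/diskname /vmfs/volumes/datastorename/vmfolder/vmname.vmdk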

2. Physical compatibility mode:
All SCSI commands are passed to the RDM LUN directly from the VM, except the REPORT LUNs command, which is virtualized by the VMkernel.

If you check the VM's device manager, you will see the RDM LUN listed with the physical name and characteristics of the storage array.

The advantage of the full exposure of the LUN characteristics to the VM can be seen when using SAN Management Tools and MSCS services. Also, RDM LUNs in physical mode can have a size greater than 2TB.

Another important feature inherited with RDM is Dynamic Name Resolution. Each RDM mapping file stores the unique ID of its LUN, which VMFS uses to identify the device. If the path to the LUN changes, after a failover for example, Dynamic Name Resolution automatically discovers the device under its new name and re-associates it with the RDM mapping file so operations can resume.


RDM Restrictions

1. RDM can't be used with an NFS datastore.
2. Storage vMotion (SVM) can't be used with NPIV.


Common task for the RDM

In case RDM is greyed out:

1. Make sure that you have a free LUN which supports RDM.
2. Make sure that the RDMFilter advanced parameter is unchecked.

Note: If a LUN is already in use by an ESXi host (even partially), you can't present it to a VM as an RDM, and vice versa. The reason is that the OS or hypervisor applies a lock on the LUN which prevents it from being used by another system.
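
To check whether a LUN is already backing a VMFS datastore, and is therefore not a candidate for RDM, you can list the VMFS extents on the host. A quick sketch:

# esxcli storage vmfs extent list

Any device that appears in this output is already in use by the hypervisor.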


If you have a suitable controller, you can configure a local device as an RDM. Configuring a local storage device as an RDM must be done using the command-line interface (CLI); it cannot be done through the vSphere Client.

To configure a local device as an RDM disk:
1.      Open an SSH session to the ESXi/ESX host.
2.      Run this command to list the disks that are attached to the ESXi host:

# ls -l /vmfs/devices/disks

3.      From the list, identify the local device you want to configure as an RDM and copy the device name.

Note: The device name is likely to be prefixed with t10. and will look similar to:

t10.F405E46494C4540046F455B64787D285941707D203F45765


4.      To configure the device as an RDM and output the RDM pointer file to your chosen destination, run this command:

# vmkfstools -z /vmfs/devices/disks/diskname /vmfs/volumes/datastorename/vmfolder/vmname.vmdk

For example:

# vmkfstools -z /vmfs/devices/disks/t10.F405E46494C4540046F455B64787D285941707D203F45765 /vmfs/volumes/Datastore2/localrdm1/localrdm1.vmdk

Note: The size of the newly created RDM pointer file appears to be the same as the raw device it is mapped to; this is a dummy file and is not consuming any storage space.

5.      When you have created the RDM pointer file, attach the RDM to a virtual machine using the vSphere Client:


a.      Right click the virtual machine you want to add an RDM disk to.
b.      Click Edit Settings.
c.      Click Add.
d.      Select Hard Disk.
e.      Select Use an existing virtual disk.
f.       Browse to the directory where you saved the RDM pointer file in step 4, select the RDM pointer file, and click Next.
g.      Select the virtual SCSI controller you want to attach the disk to and click Next.
h.      Click Finish.

6.      You should now see your new hard disk in the virtual machine inventory as Mapped Raw LUN.
  • Note: As this virtual machine now has an attached local disk, migration using vMotion is not possible.

To list all virtual disks pointing to an RDM device using PowerCLI:

This operation is generally time-consuming, as PowerCLI must iteratively inquire about the disk type of every VMDK file on the remote hosts. 
1.      Open the vSphere PowerCLI command-line. For more information, see the vSphere PowerCLI Documentation.

2.      Run the command:

Get-Datastore | Get-HardDisk -DiskType "RawPhysical","RawVirtual" | Select "Filename","CapacityKB" | fl

Example output:

Filename    : [DatastoreName] DirectoryName/virtualrdm.vmdk
CapacityKB  : 5760

To list all virtual disks pointing to an RDM device using the local console:

This operation is generally quick, as there is no overhead of network communication.
1.      Open a console to the ESX or ESXi host.
2.      Run the command:

# find /vmfs/volumes/ -type f -name '*.vmdk' -size -1024k -exec grep -l '^createType=.*RawDeviceMap' {} \; > /tmp/rdmsluns.txt

# for i in `cat /tmp/rdmsluns.txt`; do vmkfstools -q $i; done


Example output:


o   Virtual Mode RDM:

Disk /vmfs/volumes/.../virtualrdm.vmdk is a Non-passthrough Raw Device Mapping
Maps to: vml.02000000006006048000019030091953303030313253594d4d4554

o   Physical Mode RDM:

Disk /vmfs/volumes/.../physicalrdm.vmdk is a Passthrough Raw Device Mapping
Maps to: vml.02000000006006048000019030091953303030313253594d4d4554


Thanks for reading!