Specs:
Phenom II X4 965
8GB DDR3-1333 (soon to be 12-16GB)
Biostar TA970
2x Intel PCI gigabit NICs (adding PCIe cards later)
Right now I have Plex loaded in a Win7 64-bit VM with just two cores assigned to it.
This seems fine for my TV shows, but Blu-ray and DVD rips are left to my media server (in sig) with its Core i3 2100 (and its Intel Quick Sync). One Blu-ray rip casting to a Chromecast sometimes eats the CPU alive.
My streaming devices are 3 Chromecasts plus multiple Android tablets and phones using the Plex app and the Chrome browser itself; usually 2-3 streams max here. The biggie is the Chromecast in the living room.
My idea is to turn my media server into strictly a file host, upgrade the ESXi host to an 8-core AMD FX, and give Plex 8 cores of the FX CPU.
My question is: will I see an improvement going from the Core i3 2100 to 4 cores of the FX CPU inside of ESXi?
Hardware Acceleration with Plex
So I ran into the Using Hardware-Accelerated Streaming post from Plex and found it intriguing, but then I saw this section in the post:
Can I use Hardware-Accelerated Streaming inside of a virtual machine?
Hardware-Accelerated Streaming is not currently possible inside of virtual machines, as virtual machine hosts do not expose low-level video hardware to the guest operating system. While some virtual machines expose generic 3D acceleration to the guest OS as a virtual driver, this does not include support for accelerated video decoding or encoding.
But I wanted to find out what would happen if I just passed the Video Card through to the VM.
Enable Passthrough on Video Card/GPU in ESXi 6.5
The instructions for accomplishing this are laid out in:
I have the following Video Card on my ESXi Host:
Looking at other forums, it seems people didn't have much luck with that video card:
But I guess I got lucky. In the vSphere Web Client, I just went to Host -> Manage -> Hardware -> PCI Devices -> Toggle passthrough on the Video Card:
Then I rebooted the host, and under the Passthrough column I saw Active. I didn't have to do any extra configuration for that. After the reboot I thought ESXi was hung because it just showed the following during boot-up:
vmkapi_v2_2_0_vmkernel_shim successfully loaded
This is actually expected, since the hypervisor enables the passthrough at this point and you won't see the boot-up process any more. This was discussed in stuck after reboot: 'vmkapei loaded successfully'. I did have SSH enabled on the host, so I was able to SSH in and confirm it booted up fine and that the auto-start process was bringing up all the VMs.
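Since the console goes dark once passthrough takes effect, an SSH session is the easiest way to confirm the host is healthy. A minimal sketch using standard ESXi shell commands (output columns vary by ESXi version):

```shell
# List PCI devices; find your GPU and check its passthrough status
esxcli hardware pci list | less

# Confirm the registered VMs are present, then check one's power state
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/power.getstate 1   # the first argument is the Vmid from the list above
```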
Another FYI: with PCI passthrough you can't take snapshots of the VM (this could impact your backups). From VMware KB 2142307:
While VMDirectPath I/O can improve performance of a virtual machine, enabling it makes several important features of vSphere unavailable to the virtual machine, such as Suspend and Resume, Snapshots, Fault Tolerance, and vMotion.
Adding Video Card/GPU to VM
This was pretty easy as well. I just shut down the VM, then in the Web Client went to Virtual Machines -> VM -> Actions -> Edit Settings, added a new PCI Device, and selected the Video Card from the drop-down list:
After I powered on the Linux VM, I actually still saw the console through the vSphere Web Client, and the monitor connected to the ESXi host didn't show anything. I then realized the VM now had two Video Cards:
and here are the two PCI devices:
Disabling Primary Video Card on a VM
At this point I wanted to make sure the VM only had one Video Card. As I was looking for a way to accomplish this, I ran into a parameter in the vmx file called svga.present. I also saw some folks trying to use that parameter:
So I decided to try it out. Virtual Machines -> VM -> Actions -> Edit Settings -> VM Options -> Advanced -> Configuration Parameters -> Edit Configuration -> Set svga.present to FALSE:
Then after another reboot of the VM, I saw the VM boot up process on the monitor that is connected to the ESXi host and inside the VM only one VGA device showed up:
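For reference, after these two changes the VM's vmx file ends up with entries along these lines. This is a sketch: the PCI address under pciPassthru0.id is a placeholder for whatever your GPU reports, and svga.present = "FALSE" is the setting that removes the virtual SVGA adapter.

```
pciPassthru0.present = "TRUE"
pciPassthru0.id = "0000:01:00.0"
svga.present = "FALSE"
```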
And here is the information on the PCI device:
Very cool.
Enabling Hardware Acceleration in Plex
The original link has all the instructions:
To use Hardware-Accelerated Streaming in Plex Media Server, you need to enable it using the Plex Web App:
- Open the Plex Web app.
- Navigate to Settings > Server > Transcoder to access the server settings.
- Turn on Show Advanced in the upper-right corner to expose advanced settings.
- Turn on Use hardware acceleration when available.
- Click Save Changes at the bottom.
You do not need to restart Plex Media Server after saving the changes.
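If you prefer to confirm the setting from the shell, the toggle is persisted in Preferences.xml. Both the path (standard Linux package install) and the attribute name HardwareAcceleratedCodecs are assumptions that may vary between PMS versions:

```shell
# Show the hardware-transcoding preference as stored on disk
grep -o 'HardwareAcceleratedCodecs="[^"]*"' \
  "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Preferences.xml"
```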
Not too difficult.
Confirming Hardware Acceleration is Utilized
I ran into Hardware-Accelerated Streaming not working on 64-bit Ubuntu 16.04, i7-3770 and PMS 1.9.5.4339, which shows sample logs of HW acceleration. So when I checked my logs I saw this:
And also this:
Adding the Plex User to the Video Group
I then ran into a couple of posts which had similar logs about the hardware transcoding failing:
And they recommended adding the plex user to the video group. So I did that:
And restarted the plex service:
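The commands boil down to the following sketch, assuming the plex user from the standard Linux package and the video group owning /dev/dri on CentOS 7 (group names can differ by distro):

```shell
# Let the plex user open the render nodes under /dev/dri
sudo usermod -aG video plex

# Restart the service so the new group membership takes effect
sudo systemctl restart plexmediaserver

# Verify: plex's groups, and who owns the DRI devices
id -nG plex
ls -l /dev/dri
```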
And after that I saw the following in the logs:
And also this:
You will also see HW when a video is playing in the Plex Web App:
Confirming GPU Utilization
Initially I ran top to make sure the CPU wasn't being utilized as much, and I did see that while Plex was transcoding:
I then ran into a tool called intel_gpu_top, which is available in the intel-gpu-tools package on CentOS 7.
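Installing and running it looks like this (it needs root to read the GPU performance counters; package name as above):

```shell
# Install the Intel GPU tooling
sudo yum install -y intel-gpu-tools

# Watch live per-engine GPU utilization while a transcode runs; Ctrl-C to exit
sudo intel_gpu_top
```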
When I ran that tool I did see GPU utilization:
Lastly, I had a Zabbix agent collecting CPU information on the box; here are the results from before:
And here are the results watching the same thing after:
It looks pretty good.