VENDOR: We use an Access database for our backend, so we'll need MS Access installed on the cluster too.
And this is a 5+ billion dollar company's product. Sigh.
NIOC.
We share our uplinks between all services, and leverage NIOC to push vMotion to the bottom, VMs in the middle, and iSCSI on top.
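Roughly how the share-based carving works under contention, as a toy sketch: active traffic types split the uplink in proportion to their shares, so iSCSI on top / VMs in the middle / vMotion on the bottom is just high/medium/low share values. The share numbers below are made up for illustration, not the vSphere defaults.

```python
def nioc_allocation(link_mbps, shares):
    """Divide uplink bandwidth proportionally to NIOC-style shares
    among traffic types that are actively contending."""
    total = sum(shares.values())
    return {traffic: link_mbps * s / total for traffic, s in shares.items()}

# Illustrative share values (not vSphere defaults): iSCSI on top,
# guest VM traffic in the middle, vMotion on the bottom.
shares = {"iscsi": 100, "vm": 50, "vmotion": 25}
alloc = nioc_allocation(10_000, shares)  # a 10 GbE uplink
```

When there's no contention, any traffic type can burst past its proportional slice; shares only matter when the link is saturated.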
dvSwitches pretty much "Just work".
> That when I pull up another user's screen it automatically locks their PC. Makes troubleshooting an adventure.

What remote agent are you using? I know MS RDP does that unless it's a Windows remote help request. I remember Kaseya (from my time at an MSP) had the option to do so, but that wasn't the default.
> LACP is almost never a good idea with dvSwitches. Simply set them to "route based on physical NIC load", which is another dvSwitch-only option.

Is there a failure condition or any guidance from VMware on that? Just looking for examples. Since I can't get 10G gear in any reasonable time frame, anything I can do that removes a failure point while improving performance is worth looking into.
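For anyone wondering what "route based on physical NIC load" actually does: roughly, the host samples uplink utilization and moves a port off an uplink that stays above ~75% use. Here's a toy single-pass sketch of that rebalance idea (the real algorithm works on 30-second averages and is smarter about which port it moves; everything here except the 75% trigger is simplified).

```python
def rebalance(port_loads, assignment, capacity_mbps=10_000, threshold=0.75):
    """One pass of a load-based-teaming style rebalance: if an uplink is
    over the threshold, move its busiest port to the least-loaded uplink.
    Only uplinks that currently carry ports are considered."""
    # Current load per uplink.
    load = {}
    for port, uplink in assignment.items():
        load[uplink] = load.get(uplink, 0) + port_loads[port]
    for uplink, mbps in load.items():
        if mbps > threshold * capacity_mbps:
            # Busiest port on the hot uplink moves to the coolest uplink.
            hot_ports = [p for p, u in assignment.items() if u == uplink]
            mover = max(hot_ports, key=lambda p: port_loads[p])
            coolest = min(load, key=load.get)
            if coolest != uplink:
                assignment[mover] = coolest
            break
    return assignment

port_loads = {"vm1": 6000, "vm2": 2500, "vm3": 500}   # Mbps per port
assign = {"vm1": "uplink1", "vm2": "uplink1", "vm3": "uplink2"}
rebalance(port_loads, assign)  # uplink1 at 8500/10000 > 75%: vm1 moves to uplink2
```

The key difference from LACP: no hashing across links and nothing required on the physical switch side, since any one flow always rides a single uplink.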
Dedicated for our BU? 0, but it's under my umbrella. The parent org as a whole has ~3-4.
dvSwitches are exceedingly simple, and provide huge benefits over standard switches. There is very little reason to buy E+ and not use them.
Assuming you have some spare vmnics for the standard switch.

> Isn't a standard vSwitch for vCenter a simpler solution?

Pretty much. Non-ephemeral port groups need vCenter to be up in order to create a port, so if vCenter gets killed somehow and needs to be restarted on another host on the same dvSwitch, you're kinda boned.
Here is Chris Wahl's article about it:
http://wahlnetwork.com/2015/01/30/vds-ephemeral-binding/
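The ephemeral-vs-static distinction boils down to who is allowed to allocate a dvPort. A toy model of the failure mode described above (class and method names are mine for illustration, not the vSphere API):

```python
class PortGroup:
    """Toy model of dvSwitch port binding: static binding needs vCenter
    online to allocate a dvPort; ephemeral binding lets the ESXi host
    create the port on its own."""

    def __init__(self, binding):
        self.binding = binding  # "static" or "ephemeral"

    def connect_vm(self, vcenter_online):
        if self.binding == "static" and not vcenter_online:
            # This is the chicken-and-egg trap: you can't power on
            # vCenter because vCenter is needed to hand out the port.
            raise RuntimeError("vCenter down: cannot allocate a static dvPort")
        return "port-allocated"

# With vCenter down, only the ephemeral group can still connect a VM
# (e.g. the vCenter VM itself being restarted on another host):
ephemeral = PortGroup("ephemeral")
static = PortGroup("static")
```

Hence the usual advice: keep an ephemeral port group (or a small standard vSwitch) around for vCenter's own management traffic.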
It is.
Yep. We use a pair of the onboard NICs (1G) for the management interfaces, and then everything else (guest, iscsi, vmotion) goes via the 10G/40G dvSwitches.
TIL one of our conference rooms has an unmanaged switch in the leg and some idiot decided to loop two of the table ports. Awesome.
I had the ADD CEO of a company do exactly that in a training room. He knocked out C row, which was across from the room and on the same switch, but BPDU Guard kept it from taking down the rest of the company. Luckily the CPU wasn't yet massively overloaded when I consoled in, so I could ID the offending ports. We ended up putting labels on all the training desks saying to plug in laptops only (saving face for the CEO).
The table has 8 network jacks embedded in it, looks like someone got bored in a meeting.
https://i.imgur.com/Jy2eQur.jpg
Also in hunting this down I found out my linecards don't support broadcast suppression. *sigh*. I think we're going to try and budget for new 9k access switches next year.
Wow, that card is cool, both because of the port density (though the RJ21 card did the same, AFAIR) and because you can do the splitting at the wall plate. Being able to run two lines off of one switch port without active components and without running additional cable could be a major win in a lot of situations.

Yup, good guess. It's not really that big of an issue, but it's annoying when it happens. Our parent org is likely going to pay for or give us new Nexus 9Ks, so I'm not too worried about the current solution.
One of my 6500's has 2 x WS-X6148X2-RJ-45 which I didn't even know was a thing before this job. They go to 2U port expanders and give 96 x 100Mb + PoE in 1 card slot.
We've talked to the offending user, and have an SG300-20 on order to replace the Netgear.
Offending user? I see no fault in the non-IT person. It's really all on the IT org who left this in place when it didn't need to be. It may have made the IT folk happy to talk down to someone with no reason to know any different, but the failing is having a network deployed in that config.
TIL the key to a smooth transition from Windows vCenter to VCSA is making sure the Windows vCenter is solid and simple. Back before the upgrade to 6.0 (U3), I went through and cleaned up any/all extensions/plugins/WTF that were giving off error alerts in the GUI.
That included a non-removable Nexus 1000v install.
While the migration took some time, everything went by the book. Woulda been faster on faster storage, but that's always how it goes.
The Media Services (A/V) group is under a different division/VP. Their stuff is essentially crap and they haven't kept up on maintenance or equipment replacements, so they have a horrible rep.
One of their "brilliant" schemes was to use small unmanaged switches everywhere with their gear, which is fine if it's on their own private network within each room and not connected to the rest of the network.
And then they tried to push that as being the campus network standard.
Uh, no.
Same. I actually have to do NAT for a boardroom because all the gear is on 192.168.1.x behind its own dumb switch. The switch got removed, but we can't re-IP everything without blowing it up or paying a bunch for reprogramming.
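For anyone curious, that workaround is just a static 1:1 NAT table in front of the dumb-switch segment: each routable campus-side address maps to exactly one 192.168.1.x device, so nothing on the A/V gear itself has to be re-IP'd. A minimal sketch of the translation table (the campus addresses and device names are made up):

```python
# Static 1:1 NAT: campus-side address -> boardroom-side address.
# Addresses and device roles are illustrative only.
NAT_TABLE = {
    "10.20.30.11": "192.168.1.11",   # projector
    "10.20.30.12": "192.168.1.12",   # control panel
}

def translate_inbound(dst_ip):
    """Rewrite the destination of a packet arriving from the campus side.
    Addresses not in the table pass through unchanged."""
    return NAT_TABLE.get(dst_ip, dst_ip)
```

In practice this lives on the router/firewall in front of the room, but the mapping logic is exactly this: a fixed one-to-one table, no port translation needed.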
Do you have a link to a doc on how to purge the 1000v when it's been left behind as artifacts? Because I have one.