Last year I met some amazing people. While speaking in Australia I was able to meet Tim Carman and Matt Allford, and they introduced me to a project they had been working on called As Built Report, an automated way to document your environment. When I looked at it, I was hooked on its possibilities and amazed by the framework they had put into place. It was a great place to start, but it was lacking reports for the EUC side of the world. So the thought was: why don't I just start building them? And that is how the story started.
With that little bit of intro out of the way, I am able to announce that after a few months of work, some persistence, and a bunch of coding, I have completed the As Built Reports for VMware Horizon, AppVolumes, and UAGs. This has been a long road and a huge learning opportunity. On the As Built Report for Horizon I had the opportunity to work with Karl Newick on building the report. It was a mess of work and learning some new ways of doing things, but later on I broke out and started building the AppVolumes and Unified Access Gateway (UAG) reports.
The As Built Report for Horizon is a tool that will document your entire VMware Horizon site. It captures the VMware Cloud Pod information and all the joined sites, but only the details of the site it's run from. For complete details of each site, it's suggested you run this at each site to get a holistic view. The report pulls the info from the Horizon Connection Server using the legacy API. It gathers info in the following categories:
Users and Groups
Entitlements
Home Site Assignments
Unauthenticated Access
Inventory
Desktop
Applications
Farms
Machines
vCenter
RDSHosts
Others
Settings
Servers
vCenter Servers
vCenter
ESXi Hosts
Datastores
Composers
AD Domains
Instant Clone
Product Licensing
Global Settings
Registered Machines
Administrators
Administrators and Group
Role Privileges
Role Permissions
Access Group
Cloud Pod Architecture
Sites
Event Configuration
Global Policies
JMP Configuration
This gathers the info from the above categories and puts it into a Word document, so you have full documentation of your Horizon environment generated from a script. This allows you to keep documentation up to date on an everyday basis if you choose.
From there I took the framework of the Horizon report and built out the AppVolumes and UAG reports. Learning the API for AppVolumes was a fun experience; thankfully Chris Halstead created an amazing blog article that gave me a great head start. I started the AppVolumes report in the 2.x days, so as of this moment it only displays 2.x AppStacks. Over the next few months I will be adding support for 4.x AppStacks. The AppVolumes report is broken up into the following categories:
General
AppVolumes Managers
License
AppStacks
AD Users
AD Groups
Writeables
Applications
Storage Locations
Storage Groups
AD Domains
Admin Groups
Machine Managers
Storage
Settings
I also started work on the UAG (Unified Access Gateway) As Built Report at about the same time as the AppVolumes report. I had already spent a ton of time learning the API for the UAG, and it's pretty well documented. This report was kind of a pain to create: most people only use the parts of the UAG needed for their environments, so I had to build out the UAG in many random ways in order to get all the info needed for this report. The report is broken out into the following categories:
General Settings
Edge Service Settings
Authentication Settings
Advanced Settings
System Configuration
Network Settings
TLS Settings
JWT Settings
Account Settings
Identity Bridge Settings
Support Settings
Stats
Log Settings
So a huge thanks to Karl Newick for all his hard work and help in building out the As Built Report for Horizon, and to Tim and Matt for building the As Built Report framework. As of this moment the new Horizon Suite As Built Reports are available for download via the following:
Install-Module -Name AsBuiltReport
or if you already have As Built Report installed
Update-Module -Name AsBuiltReport
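Once the module is installed, generating a report is a single cmdlet call. Here is a hedged example of what that might look like; the report name and parameters follow the AsBuiltReport framework's documented usage, and the server, account, and paths are placeholders, so adjust them to your environment and installed module version.
# Example: generate a Word document for a Horizon Connection Server
New-AsBuiltReport -Report 'VMware.Horizon' -Target 'horizon01.corp.local' -Username 'administrator@corp.local' -Password 'VMware1!' -Format Word -OutputFolderPath 'C:\Reports'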
Those of you who have been in the technology world for a while may remember a product called WhatsUp Gold. Well, it never really died; it has been there this whole time and has improved tremendously. When I started my IT career I used WhatsUp Gold to monitor site data. We had small remote stores connected via MPLS and needed a way to monitor performance and uptime, and we also needed a way to monitor vendor websites to try to be proactive. It worked well for this. But wow, over the last 12 years it really has changed. I was asked to do a review of the product as it is now, so let's jump into this thing.
In the early days WhatsUp Gold was a basic monitoring tool with an amazing, self-explanatory user interface, but today many things have changed. It's an in-depth tool to help you with all your monitoring needs, and more, all while keeping an amazingly easy-to-use interface. WhatsUp Gold has an Interactive Demo that is great! It can be found here. Alternatively, if you want to install the product and take it for a whirl for 14 days, you can go here.
First impressions jumping back in: from the first look at the Interactive Demo, you are shown a decent, customizable "Home Dashboard" where most things are clickable. You can click on an object, say Down devices: click on the 86 below and it will show a network map of those devices.
But if you click on the "Device Role" pie chart, nothing happens other than the slice of the pie moving out a bit. This would be one nice thing to fix: make the chart clickable so it brings you into your Router devices when you click on it.
Moving on, that is the only little thing I have found as of this writing.
Key Features:
Application Monitoring
Automated Discovery
Bandwidth Monitoring
Cloud Monitoring
Configuration Management
Distributed Monitoring
Failover Manager
Flowmon NPMD/NDR
Log Management
MSP Monitoring
Network Mapping
Rest API Monitoring
Server Monitoring
Traffic Monitoring
Virtualization Management
Wireless Monitoring
Let’s touch on a few of the features.
Application Monitoring had me impressed from the beginning: not only are you monitoring the devices involved, but also the applications from end to end. In the example below, just picking the device in the network map shows the application status in the details card.
Cloud Monitoring: Like most of you out there, I am spending more and more time in different cloud environments. Sometimes just one, but for some reason I keep bouncing around amongst many. Well, good news: WhatsUp Gold can monitor the stats out of those too. If you are using AWS or Azure, any of the stats that can be monitored, can be!
REST API monitoring: This one really stands out to me, and it seems to be a growing trend. There are so many SaaS products out there that have APIs but no real way to monitor whether things are working, so you are relying on the SaaS vendor to let you know, "well, Application X died today." With REST API monitoring you can tell for sure when the application went down and for how long, not to mention gather additional metrics. A good example: if you are using Duo for MFA and it goes down, that could cause a real impact. Now you can monitor the Duo auth service with a REST API monitor and proactively alert users that there is an ongoing issue with Duo MFA.
Automated discovery: This has to be the simplest thing to do. Just give it an IP range, tell it to search connected networks, and it's off to find all the things. You just sit back, wait, and add some info to the discovered devices.
One thing WhatsUp Gold has always fallen a bit short on in the past was reporting. It was never really robust, but now things are quite a bit different. It's not completely customizable, but there are many out-of-the-box reports, from Asset Inventory to Software Update reports.
Conclusions: After taking a refreshed look at WhatsUp Gold, I'm quite impressed by their progress over the years. Yes, in the past I only had a small use case for it, but looking at it now, the product is elegant and has added tons of functionality. I'm quite impressed with its out-of-the-box solution.
If you are interested in more please follow the link below for an On-Demand demo! On Demand Demo
And for an interactive demo follow the link below: Interactive Demo
While the opinions in this article are my own and are not related to the company I currently work for, this blog post was sponsored.
What is Metallic? Let's start here, as I really had no idea when I started writing this article.
Metallic is data protection vendor Commvault's Data-Management-as-a-Service offering. It enables the protection of SaaS applications, and its portfolio currently has the broadest number of SaaS apps and cloud partners, and it's growing: Microsoft Office 365, Microsoft Dynamics 365, Salesforce, Azure, AWS, and Oracle. With each of these cloud partners it allows you to do many things, from backing up VMs and databases to backing up unstructured data and containers. Oh, did I mention this works on-premises also? It does!
Three years ago, in 2019, Commvault launched Metallic. This brought Commvault's industry-proven technology to the cloud, bringing them into the modern cloud age and giving them a DMaaS (Data Management as a Service) stack, in turn becoming the first service of its kind.
Why is this such a big deal? Well, as you all remember, back in 2020 we all decided to go home for no reason at all, and all the hardware we needed to support this work-from-home movement we could not find, could not source, and if we could source it, it took MANY months to get onsite. We all had to shift gears to moving things to the cloud.
And this cloud migration was done quickly, sometimes with little to no thought, racking up some additional technical debt due to shortcuts taken to speed up the process, in hopes that everyone would come back around later and fix the tech debt. Well, we have all been in IT long enough to know that day never comes, until something breaks or something worse happens: a breach, or corruption that requires a restore. Another challenge with moving to the cloud is that your traditional way of doing things, or your traditional software stack, may or may not work in the cloud. This is where Metallic comes into play, giving you a secure way to back up your data in the cloud and hybrid world, as most of us will never see a full cloud.
In today's modern world there are constant security vulnerabilities, hitting daily and, honestly, it seems like hourly anymore. Just on the systems I manage I seem to see a new one every other day! What does that mean for those of us who work on the business side of things? We need to get faster at remediation with the same resources we have now, and we must get better at detecting threats and defending our environments and data.
What makes Metallic DMaaS Different from other offerings in the space?
Metallic safeguards your data from threats and delivers a high level of business continuity, thanks to comprehensive and robust capabilities in the cloud such as its simple-to-configure, cloud-optimized strategy and tools built to deliver ransomware-resilient protection. If you are interested in taking a look at this, feel free to take it for a test drive! https://metallic.io/trial
So what's new with Metallic? ThreatWise! This new service, introduced on the 21st of September 2022, is Commvault's latest data security service and part of its Intelligent Data Services portfolio.
ThreatWise: What does Metallic ThreatWise actually do? It was really interesting to learn about this part. “Leveraging patented deception technology, Metallic® drops threat sensors around valuable assets (such as file servers, databases, VMs, and even backups etc…), creating decoys within customer environments and serving as an early warning system.”
Unlike traditional honeypots, ThreatWise decoy tactics actively engage the bad actor, giving them the true sense they are operating in the real environment. This gives you a unique way to combat zero-day threats without the bad actor knowing any different, and it gives you as an administrator or a business stakeholder peace of mind. When an attacker enters the network, they are looking for things to export, and when they find file shares it's the treasure trove they have been searching for. They will export the data, and usually the business never knows until they find their data on the internet, or until it's all been encrypted. This is where Metallic ThreatWise's early detection for critical business data comes into play, giving you an early warning system for data threats before the bad actor reaches your data.
Containing threats and data impact through early warning:
Mimic – Dilutes the attack surface by deploying indistinguishable fake decoys, at scale
Trip – Draws bad actors into compromising false customer resources
Alert – Exposes malicious activity with real-time, high-fidelity alerts
Respond – Works seamlessly with security technology to accelerate remediation and contain threats before leakage, encryption, or exfiltration
What Makes Metallic ThreatWise different?
Unlike most other data protection solutions, Metallic ThreatWise is focused on early detection, not after-the-fact recovery! It gives the organization an easy-to-deploy early detection system as a service. Also, its patented deception technology gives you a way to confuse or intercept the bad actors before they can harm the system, saving you time and money, as the point of the system is to not have to restore from backups. And because it is SaaS-based, it is easy to deploy. Below are a few screenshots of the product picking up an active attacker, and the coverage:
My thoughts: Jumping into this, I was not sure what to think. I was expecting just another backup product, but I was wrong. This has morphed into a next-gen product that gives you, as a customer, the ability to protect data inside and outside your borders, allowing you the freedom to run a hybrid cloud environment and use the same product across all clouds. Commvault has always been a huge player in the backup industry, and learning what they have done with Metallic and ThreatWise is great. The ability to enable an early detection system for your critical business data is huge.
A few years ago I ran across the issue of replicating applications from one site to another with AppVolumes. I spent some time figuring out how to do this with the 2.x API, but had to rewrite the configuration to support 4.x replication. As most of you are aware, there is no published API document for AppVolumes. A bunch of people told me, "well, you can't do this," and that more or less drove me to accomplish this stuff. So thank you for pushing me!
Problem: You have multiple sites using 4.x applications via VMware AppVolumes, and you want to replicate the Applications, Packages, Entitlements, Lifecycle, and Stage info from one site to another, or from one site to many. We were looking to use a hub and spoke method, with a central AppVolumes environment as the source of all truth that replicates its data to the other sites. The concept was that I did not want to have to package applications in each site, and did not want to have to do a manual copy of data from one site to the other.
Using a Hub and Spoke replication looking something like below:
Requirements: For this script to work you must have a Pure Storage array at both the source and target site, with async replication set up and replicating a "Replication" LUN from one site to the other via scheduled snapshots, looking something similar to:
So, did you ever want to replicate AppVolumes 4.x Applications, Packages, Assignments, and lifecycle status from one site to another with help of Pure storage?
Solution: I have created a script that will run through the process below, allowing you to start building your hub and spoke configuration. By adding a few ForEach loops you can have this replicate from one source site to many, allowing you to only make changes in the source site (see the sketch after the process list).
Process:
Connect to Target vCenter
Connect to target Pure storage array
Copy Replicated Snapshot to Replication LUN
Scan Storage
Mount Replication LUN
Resignature LUN
Scan Storage
Disconnect from vCenter
Connect to Source AppVolumes Server via API
Connect to Target AppVolumes Server via API
Force Rescan of AppVolumes Storage LUNs to discover Replication LUN on Target AppVolumes Server.
Mark Replication LUN as UnMountable on Target AppVolumes Server.
Find Storage Group for Replication LUN on Target AppVolumes Server.
Force Rescan of Storage Group on Target AppVolumes Server.
Force Replication of Storage Group on Target AppVolumes Server.
Force Import of Packages on Target AppVolumes Server.
Collect Source and Target Assignments
Collect Source and Target Packages
Collect Source and Target Products
Collect Source and Target Lifecycle Status
Unassign all Target Assignments not set in Source
Assign Source Assignments to Target, including Lifecycle status.
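To give a rough idea of the hub-and-spoke loop mentioned above, here is a minimal sketch. The site list and the Invoke-SiteReplication helper are hypothetical names used purely for illustration; the real script wraps the process steps listed above for each target site.
# Hub (source of truth) and spoke (target) sites - placeholder names only
$SourceAppVolServer = "appvol-hub.corp.local"
$TargetSites = @(
    @{ AppVolServer = "appvol-spoke1.corp.local"; vCenter = "vcenter-spoke1.corp.local"; PureArray = "pure-spoke1.corp.local" }
    @{ AppVolServer = "appvol-spoke2.corp.local"; vCenter = "vcenter-spoke2.corp.local"; PureArray = "pure-spoke2.corp.local" }
)
foreach ($Site in $TargetSites) {
    # Each pass runs the full process list above against one spoke:
    # copy/mount/resignature the replicated LUN in that site's vCenter, rescan,
    # replicate and import on that site's AppVolumes Manager, then sync
    # assignments and lifecycle status from the hub.
    Invoke-SiteReplication -SourceAppVolServer $SourceAppVolServer -TargetSite $Site
}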
I have been using this in a 2.x fashion for a few years now, and have now updated it and made some advances to publish this 4.x replication script. The core of the script is the same as the 2.x process; it's just a change in API calls for AppVolumes.
Understanding the VMware AppVolumes 4.x API using the Google Chrome or Microsoft Edge (Credge) developer tools, and using PowerShell to run the API calls.
Over the last few years I have been spending a ton of time with the VMware AppVolumes APIs in the 2.x and 4.x builds. An amazing article was written by Chris Halstead (follow him on Twitter: @chrisdhalstead) way back in 2015 (Link), and that is where I started working with the APIs. By no means do I think I'm some Jedi API person. It took me many FAILURES to get an understanding of things, and I still fail a ton working through this. I fail more than I succeed, but when it works, I don't touch it! Keep in mind, pretty much all I have done is come up with a problem (AKA: the need to automate AppVolumes stuff) and then figure this stuff out. I am the type of person who learns a ton more if you just point me in the right direction, give me some examples, and let me go. As I said, I failed a ton but stuck with it and managed to make some amazing things work. Along the way I did have to reach out to Chris Halstead and a few others for help on a few things, and he was able to point me in the right direction. So many thanks for the help, Chris. Here is my attempt at translating my learning into a blog post.
Now off to the VMware AppVolumes 4.x APIs. As I said, Chris did an amazing job covering the 2.x APIs, but the 4.x APIs are a bit of a hidden black bag of tricks you have to dig through. So here are the results of many hours of trial and error, a bunch of failures, and eventual success. Most of the credit for this flows from Chris Halstead's hard work; I am taking his layout and just adjusting it with PowerShell commands and the Chrome / Edge / Credge interface.
Assumptions: that you have an understanding of what APIs are, specifically REST (you can find more info here); that you have an understanding of what Chrome or Credge (Microsoft Edge) is and how to use it (if not, this blog post will not help you much); that you have an understanding of, and access to, a VMware AppVolumes environment; and that you have an understanding of how to use PowerShell.
For me, Google Chrome is the easiest thing to work with. Chris Halstead used Postman; maybe I just don't understand it enough to feel comfortable with it, but I prefer Chrome. Either works as long as you get the results you want. Chrome allows me to see the process as I do it in the web GUI, decode it, and then work backwards to figure out how to do it in PowerShell or any REST call.
Let's get started! If you open Chrome and browse to the AppVolumes server, you can click on the three dots in the upper right, go to More Tools, and then Developer Tools. Or press Ctrl+Shift+I.
Once you have Developer Tools open, find the Network tab. This is where I spend a ton of my time. It shows all the web calls of the webpage, which lets you decode how things are happening and learn the process. It should look like below:
The red button on the left is the Record button, and the one next to it is the Clear button. They will become your best friends. The other control to know is the Preserve Log check box. I like to have that on so I can see the whole chain; when things switch from frame to frame or page to page it clears the log, and you have a tendency to miss the thing you are wanting to see.
Now, on your AppVolumes logon screen, type your creds and hit log on with Developer Tools open and Preserve Log enabled. You will see all kinds of things. Really, this does not help you with the logon process itself, but it shows you some of the process.
A few things we should cover are the REST API methods, or verbs. There are 5 verbs, but I have found we only use 3 of them with AppVolumes. Here are the 5:
POST = Create
GET = Read Data
PUT = Update or replace
PATCH = Partial Update or Modify Data
DELETE = Voodoo / Delete (Don't use unless you are sure!)
Going back to 2015, Chris Halstead posted that in order to start a session with the AppVolumes Manager via the API you needed to follow a few things:
So in PowerShell you would use something like this
Invoke-RestMethod -SessionVariable AppVolSession -Method Post -Uri "https://(AppVolumesServerFQDN)/cv_api/sessions" -Body $AppVolRestCreds
Let's break this command up a bit. Invoke-RestMethod is a built-in cmdlet of PowerShell, introduced in 3.0, that allows you to run REST calls directly from PowerShell. More info here!
-SessionVariable = This is the parameter we are using to save the session cookie. Notice we are not using a defined variable; it creates the variable called "AppVolSession" for use going forward. In future commands you just pass the $AppVolSession variable to the -WebSession parameter.
As we dig into the API viewer in Chrome, you will see things displayed a bit differently in the URL, as it won't include cv_api. That is because by default the URL for sessions is https://(AppVolumesServerFQDN)/sessions, but that will do you no good for doing anything useful with the data, as it's being displayed as HTML code.
But if you add "cv_api" to it, the output changes from HTML to JSON/text, allowing you to read and parse the data for other automations. So the same URL, https://(AppVolumesServerFQDN)/cv_api/sessions, with the "cv_api" in it will give you JSON data.
-Body = Oh wait, I forgot to tell you about the creds. I found this out the hard way after a bit of trial and error: you must use a clear-text username and password (if someone knows of another way, please let me know), so this is what I typically do:
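Here is a minimal sketch of building that body. The username/password field names follow the sessions call Chris documented for 2.x; I'm assuming they are unchanged in 4.x, so verify against your environment.
# Prompt for credentials, then build the clear-text body the sessions API expects
$Creds = Get-Credential
$AppVolRestCreds = @{
    username = $Creds.UserName
    password = $Creds.GetNetworkCredential().Password
}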
This will form the credentials portion of the body correctly and allow you to authenticate the session.
Now you can authenticate like below. Once you establish a session cookie it will look like below, showing:
Success
OK
From this point forward you can start doing other things in the AppVolumes Manager, as you have an authenticated session cookie saved. If you check the session cookie by typing $AppVolSession (as that was the variable we used to save it), it will look something like below:
Getting the AppVolumes version with APIs and PowerShell
Here is my method. As explained, I like the Invoke-RestMethod function, so getting version data in PowerShell would look something like this:
Invoke-RestMethod -WebSession $AppVolSession -Method Get -Uri https://(AppVolumesServerFQDN)/cv_api/version
Notice this time we passed the variable $AppVolSession to -WebSession, as we want to use the session cookie, and we did not use the credentials body because we already have an authenticated session. The last thing you'll notice is that we switched the Method to "Get", as we are getting data from the API. For more info on REST verbs, look here! Your output should look like below:
Most of the stuff Chris called out in his blog still works today in 4.x; the core calls and the 2.x functions work, but for 4.x they decided to change a ton of this stuff. Working with PowerShell, you can just substitute the URL found in his blog into the Invoke-RestMethod command we used for getting version data. For example, to get the AD settings:
Invoke-RestMethod -Websession $AppVolSession -Method Get -Uri https://(AppVolumesServerFQDN)/cv_api/ad_settings
But as you see below, the format is JSON, so the easy way to deal with it is to save the contents to a variable.
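Something like this (same call as above, just captured in a variable):
# Save the AD settings output to a variable so you can work with the JSON as objects
$ADSettings = Invoke-RestMethod -WebSession $AppVolSession -Method Get -Uri "https://(AppVolumesServerFQDN)/cv_api/ad_settings"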
With the contents saved to the variable $ADSettings, you can now see the data below:
We are just changing the last portion of the URL to reach different things.
Directory Tab
# Get Online Entities
Invoke-RestMethod -Websession $AppVolSession -Method Get -Uri "https://(AppVolumesServerFQDN)/cv_api/online_entities"
You could just take my word that these records exist, but that is not what this post exists for. Now let's open your AppVolumes URL, go to Directory and then Online, and open your Developer tab. Then click on the Online tab to refresh the page. It should look something like below:
From here, on the Developer tab, on the right side, click on "Online_Entities" like below:
From here you can see the web details for the rendering of the webpage. Hmm, that URL looks a bit like what we are using for our API calls, except we are adding "/cv_api" in there.
By paying attention to this, you can use Get calls to pull the data from all the other tabs in the same fashion: just go to the web GUI, find the URL, and add the /cv_api path in front of it.
Below is the data in my variable $OnlineEntities.online.records
Now with the short lesson out of the way, let's get to figuring out what all the others are. Below is a good list, but not all, of the API calls.
Get Users
Invoke-RestMethod -Websession $AppVolSession -Method Get -Uri "https://(AppVolumesServerFQDN)/cv_api/users"
Get Computers
Invoke-RestMethod -Websession $AppVolSession -Method Get -Uri "https://(AppVolumesServerFQDN)/cv_api/computers"
Get Groups
Invoke-RestMethod -Websession $AppVolSession -Method Get -Uri "https://(AppVolumesServerFQDN)/cv_api/groups"
Get OUs
Invoke-RestMethod -Websession $AppVolSession -Method Get -Uri "https://(AppVolumesServerFQDN)/cv_api/org_units"
Infrastructure Tab
Get Storage
Invoke-RestMethod -Websession $AppVolSession -Method Get -Uri "https://(AppVolumesServerFQDN)/cv_api/storages"
Get Storage Groups
Invoke-RestMethod -Websession $AppVolSession -Method Get -Uri "https://(AppVolumesServerFQDN)/cv_api/storage_groups"
Get Managed Machines
Invoke-RestMethod -Websession $AppVolSession -Method Get -Uri "https://(AppVolumesServerFQDN)/cv_api/machines"
Inventory Tab
Get AppStack Applications (Products)
Invoke-RestMethod -WebSession $AppVolSession -Method get -Uri "https://(AppVolumesServerFQDN)/cv_api/app_volumes/app_products"
Get Packages
Invoke-RestMethod -WebSession $AppVolSession -Method get -Uri "https://(AppVolumesServerFQDN)/cv_api/packages"
Get Packages with all the data
Invoke-RestMethod -WebSession $AppVolSession -Method get -Uri "https://(AppVolumesServerFQDN)/app_packages?include=app_markers%2Clifecycle_stage%2Cbase_app_package%2Capp_product"
Get Lifecycle Data
Invoke-RestMethod -WebSession $AppVolSession -Method get -Uri "https://(AppVolumesServerFQDN)/cv_api/app_volumes/lifecycle_stages"
Get All 4.x Programs
Invoke-RestMethod -Websession $AppVolSession -Method Get -Uri "https://(AppVolumesServerFQDN)/cv_api/programs"
Get App Assignments
Invoke-RestMethod -Websession $AppVolSession -Method Get -Uri "https://(AppVolumesServerFQDN)/cv_api/app_assignments"
Get App Assignments All the details
Invoke-RestMethod -WebSession $AppVolSession -Method Get -Uri "https://(AppVolumesServerFQDN)/app_volumes/app_assignments?include=entities,filters,app_package,app_marker&"
Get App Attachments
Invoke-RestMethod -Websession $AppVolSession -Method Get -Uri "https://(AppVolumesServerFQDN)/cv_api/app_attachments"
Get Writable Volumes
Invoke-RestMethod -Websession $AppVolSession -Method Get -Uri "https://(AppVolumesServerFQDN)/cv_api/writeables"
Config Tab
Get License Usage
Invoke-RestMethod -Websession $AppVolSession -Method Get -Uri "https://(AppVolumesServerFQDN)/cv_api/license"
Get Ad Domains
Invoke-RestMethod -Websession $AppVolSession -Method Get -Uri "https://(AppVolumesServerFQDN)/cv_api/domains"
Get AD Domain More info
Invoke-RestMethod -Websession $AppVolSession -Method Get -Uri "https://(AppVolumesServerFQDN)/cv_api/ldap_domains/$DomainID"
Get Administrators
Invoke-RestMethod -Websession $AppVolSession -Method Get -Uri "https://(AppVolumesServerFQDN)/cv_api/administrators"
Get Machine Managers
Invoke-RestMethod -Websession $AppVolSession -Method Get -Uri "https://(AppVolumesServerFQDN)/cv_api/configuration/hypervisor"
Get Storage
Invoke-RestMethod -Websession $AppVolSession -Method Get -Uri "https://(AppVolumesServerFQDN)/cv_api/configuration/storage"
Get Managers
Invoke-RestMethod -Websession $AppVolSession -Method Get -Uri "https://(AppVolumesServerFQDN)/cv_api/configuration/manager_services"
Get Settings
Invoke-RestMethod -Websession $AppVolSession -Method Get -Uri "https://(AppVolumesServerFQDN)/cv_api/configuration/settings"
Okay, let's see if anyone ran into issues. I bet some of you ran into an issue like below:
If you look at the URL in the command, you'll see that "Users" is capitalized. That is the issue: these calls are case sensitive. I think they are all lower case, so when in doubt, make everything lower case. It will work much better for you. I made this mistake a ton!
Dive into Put and Post
Now that we have covered all the boring Get info stuff, let's go off the deep end and start causing damage. Let's head off to the Put and Post world. This is where things begin to get interesting.
The variable $GroupID is the ID number of the storage group you want to work with. If you use the command from above to get the storage groups and put the result into a variable, you can parse it, using something like below:
$StorageGroups = Invoke-RestMethod -Websession $AppVolSession -Method Get -Uri "https://(AppVolumesServerFQDN)/cv_api/storage_groups"
From there you can type $StorageGroups.Storage_Groups and it will show you all the storage groups with full details. From there you can use $StorageGroups.storage_groups.id and it will show you all the storage group IDs. And once you know the Group IDs, you can do things like Import, Replicate, and Rescan.
You can see the name and ID below. AppVolumes makes all the changes via Group ID. As you see, my Group ID is "1".
For the examples below $GroupID is the ID of the storage group you want to do something with.
Import from Storage Group
Invoke-RestMethod -Websession $AppVolSession -Method Post -Uri "https://(AppVolumesServerFQDN)/storage_groups/$GroupID/import"
Replicate Storage Group
Invoke-RestMethod -Websession $AppVolSession -Method Post -Uri "https://(AppVolumesServerFQDN)/storage_groups/$GroupID/replicate"
Rescan
Invoke-RestMethod -Websession $AppVolSession -Method Post -Uri "https://(AppVolumesServerFQDN)/cv_api/storage_groups/$GroupID/rescan"
And if you want to do something with all the storage groups, you can throw in my favorite thing, the foreach loop. For example:
foreach($GroupID in $StorageGroups.Storage_Groups.ID){
Invoke-RestMethod -WebSession $AppVolSession -Method Post -Uri "https://(AppVolumesServerFQDN)/storage_groups/$GroupID/import"
Invoke-RestMethod -WebSession $AppVolSession -Method Post -Uri "https://(AppVolumesServerFQDN)/storage_groups/$GroupID/replicate"
Invoke-RestMethod -WebSession $AppVolSession -Method Post -Uri "https://(AppVolumesServerFQDN)/cv_api/datastores/rescan"
}
This will Import, Replicate and Rescan each of the storage groups.
Working with Applications: when working with applications you need to gather a bunch of data before you can do things with them. Using what you learned above, you need to get things like the Package ID and Lifecycle ID.
Get Package Data from Source and Target
$Packages = (Invoke-RestMethod -WebSession $AppVolSession -Method get -Uri "https://(AppVolumesServerFQDN)/app_volumes/app_packages?include=app_markers%2Clifecycle_stage%2Cbase_app_package%2Capp_product").data
Notice I put "().data" around the command. That gets you right inside the data subset. If you look at the variable $Packages, it will hold all the data for your packages. It will look something like below; notice my package ID is "2".
The next thing you will need is the lifecycle data. Really, this is the lifecycle master list. Yes, it changes between versions and is growing.
$Lifecycle = (Invoke-RestMethod -WebSession $AppVolSession -Method get -Uri "https://(AppVolumesServerFQDN)/app_volumes/lifecycle_stages").data
Again, going right into the data and saving it to a variable.
Below is the base command to set the Lifecycle of any Package.
Set Lifecycle Data
Invoke-RestMethod -WebSession $AppVolSession -Method put -Uri "https://(AppVolumesServerFQDN)/app_volumes/app_packages/$($Package.id)?data%5Blifecycle_stage_id%5D=$($Lifecycle.id)"
But let's apply this to my example. I want to set my package "2" to Published. We can do that with variables or with just numbers; below is the example with variables. Variables work nicely when you want to do many of these things, and if you want to do a huge set, just throw it in a foreach loop.
Below we are setting the package "7zip", ID "2", to "Published", which equals "3" in the AppVolumes DB.
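As a sketch of what that could look like with variables, here is one way to resolve the IDs and make the call. The Where-Object filters and property names are my assumptions about the returned JSON, so adjust them to whatever you see in your own $Packages and $Lifecycle output.
# Grab the 7zip package (ID 2 in my lab) and the "Published" lifecycle stage (ID 3)
$Package = $Packages | Where-Object { $_.name -like "*7zip*" }
$Published = $Lifecycle | Where-Object { $_.name -eq "Published" }
# Same base command as above, just with the specific IDs resolved from the variables
Invoke-RestMethod -WebSession $AppVolSession -Method put -Uri "https://(AppVolumesServerFQDN)/app_volumes/app_packages/$($Package.id)?data%5Blifecycle_stage_id%5D=$($Published.id)"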
Now pull the packages again, and it will show the updated Lifecycle Stage ID, like below:
Now let's take the same package and set it as Current, since it's our production package. The command below will set it as Current.
Set Source Current Status
Invoke-RestMethod -WebSession $AppVolSession -Method put -Uri "https://(AppVolumesServerFQDN)/app_volumes/app_products/$($Package.app_product_id)/app_markers/CURRENT?data%5Bapp_package_id%5D=$($Package.id)"
Well, congrats, you just set the package as Current. Now if you get the package data again you will see something like below (the Current marker is hidden under "app_markers"):
Below, the "App Markers" data is broken out.
Now let's work on the fun mess of assigning users. This one threw me for a loop for a few hours, and then it just clicked. It did take some help from Chris. I think the email said something like "I'm so confused on assigning a user and can't make it work. HELP! Can you give me a hint on how to make this work, not the answer!" Something to that extent, as I can't find the email anymore. For assigning users there is some specific data you need. The key parts you need to assign a user are the following:
App Product ID
Entities
Path
Entity Type
App Package ID
App Marker ID
-App Product ID – This comes from the products. Below is the command to get the products.
Get Product Data from Source and Target
$Products = (Invoke-RestMethod -WebSession $AppVolSession -Method get -Uri "https://$AppVolServer/app_volumes/app_products").data
You will need to use a foreach loop to get this down to a single product, or you can just use the $Products.ID value. Below is an excerpt of my $Products variable.
-Entities (who you are assigning; it has to match a specific form). This is a fun one: in order to assign a user it HAS to be in this format:
CN=AD User,CN=Users,DC=corp,DC=local
Now if you have OUs, you need to add all that fun stuff in there too.
The easy way to see this is to do the assignment with the Developer tab open. Once you hit complete, find the call named "app_assignments", click on it, scroll all the way to the bottom, and expand all the stuff under "Request Payload".
It will look something like this:
-Path = Meaning is it a User or Group
-Entity Type – This is either “User” or “Group”
-App Package ID – You can use "null" if you are not assigning to a specific package ID and are just using Current. For 90% of the people out there, you are not assigning groups or users directly to a package; you are assigning them to the package marked as "Current". If that is your case, you can use "null".
-App Marker ID – This is the Application Marker ID. This comes from the Packages, and we are just using the value $Package.app_markers.id
Below we are building the body for the assignment. This is the part that messed me up the most, until I realized that the Developer tab gave me the answer. If you look at the payload breakdown, it gives you the answer; you just need to translate it a bit.
From there you build this:
# Assign User to AppStack
$AssignUserOrGroup = "CN=AD User,CN=Users,DC=corp,DC=local"
$EntityType = "Group"
$AssignmentJsonBody = "{""data"":[{""app_product_id"":$($Product.id),""entities"":[{""path"":""$AssignUserOrGroup"",""entity_type"":""$EntityType""}],""app_package_id"":null,""app_marker_id"":$($Package.app_markers.id)}]}"
Now that you have the body built, you can assign the user! That part is simple.
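A hedged sketch of that call: I'm assuming the assignment is a Post of the JSON body to the same app_assignments endpoint shown in the Get and batch-delete calls, so verify the exact URL in your own Developer tab.
# Post the assignment body we just built; ContentType tells the API it is JSON
Invoke-RestMethod -WebSession $AppVolSession -Method Post -Uri "https://(AppVolumesServerFQDN)/app_volumes/app_assignments" -Body $AssignmentJsonBody -ContentType "application/json"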
Then to unassign, you just find the Assignment ID from the commands we covered earlier and boom, the user is unassigned.
Un-assign user to AppStack
Invoke-RestMethod -WebSession $AppVolSession -Method Post -Uri "https://(AppVolumesServerFQDN)/app_volumes/app_assignments/delete_batch?ids%5B%5D=$($Assignment.id)"
There are many more things that can be accomplished with this, but this is just the start. I hope this helps you understand how I figured out the API, and how the little things help you going forward. Look for some more blog posts to do with this.
A coworker of mine and I were chatting about VCSA replication agreements, and we agreed that there should be a fling for mapping replication agreements. That night I thought about it: really, there is nothing stopping me from doing this. I pulled out the computer, and in a few lines of code I had it working. I took the time over the next few nights to make it more robust and to get it to accept a seed node, find all its VCSA replication partners, and report back the replication status. It turns out the seeding loop was a little more than I had bargained for. I reached out to the community and talked with my friend Joe Houges (@jhoughes), and we worked out the looping issues. He also helped by reminding me about SSH shell streams.
Below is an example of the report the script will return. It shows the source vCenter, replication partners, and replication sequence numbers, and validates that the numbers are not off.
Source                     Partner                     Host_Available  Status_Available  Last_Change_Number  Partner_Change_Number  Replication_Status
TestvCenter1.chrislab.com  TestvCenter2.chrislab.com   Yes             Yes               171728              171728                 Good
TestvCenter1.chrislab.com  TestvCenter3.chrislab.com   Yes             Yes               171728              171728                 Good
TestvCenter2.chrislab.com  TestvCenter1.chrislab.com   Yes             Yes               166648              166648                 Good
TestvCenter2.chrislab.com  TestvCenter3.chrislab.com   Yes             Yes               166648              166648                 Good
TestvCenter3.chrislab.com  TestvCenter2.chrislab.com   Yes             Yes               168975              168975                 Good
TestvCenter3.chrislab.com  TestvCenter1.chrislab.com   Yes             Yes               168975              168975                 Good
The same data as above, just as an image, as it seemed easier to see.
The script will log into the VCSA, enable bash, pull the replication status, clean up the data, report the info, export it to a CSV, and then disable bash. The code lives here:
Over the last two-plus years we have transformed our business over to AppVolumes for publishing applications for VDI deployments. But there are a few things you take at face value.
Like when you read the VMware docs here that tell you to use the URL https://AppVolServerFQDN/health_check for gathering health check info, and that if that health check URL shows anything other than a 200 OK status, something is broken. Well, this is where I break open the fact that it does not actually work the way you were led to believe.
Below is a screenshot of a common issue with an AppVolumes server during a patch cycle, where the AppVolumes server starts up and has no connectivity because the DB is down. No big deal: restart the AppVolumes service and you are good to go. But you would hope your health monitor would tell you this, as you have your monitoring solution set to send alerts when the status is anything other than 200. Well, you would be wrong; it won't send you anything.
As you see above, the status shows 304. According to the HTTP standards, a 304 is labeled as "304 Not Modified", and "If a 304 response indicates an entity not currently cached, then the cache MUST disregard the response and repeat the request without the conditional." This also means a 300-level status code is not seen as something being broken. But as you also see above, the AppVolumes server is indeed BROKEN. Pull the same HTTP status code via PowerShell and you get this:
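For reference, pulling that status in PowerShell is just a web request; a minimal sketch (certificate errors may need extra handling in your environment):
# Request the health check page and look at the returned HTTP status code
$Response = Invoke-WebRequest -Uri "https://AppVolServerFQDN/health_check" -UseBasicParsing
$Response.StatusCode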
As you see here, it's returning 200 OK. Hmm. That is expected, because 300-level status codes are treated pretty much like another 200. But again, it's broken, and it says it's not.
Below is another example, where the AppVolumes server really is broken, but the status code still shows 200 OK.
And again it's showing 200 OK while it's no doubt broken. The only difference here is that it hasn't reached the point where the AppVolumes service has loaded the cert, so you will get a cert error, but it still reports 200 OK. Below is the same thing with PowerShell used to get the web request.
Now I bet you are thinking, well, just monitor the AppVolumes service on the OS. Yes, you should, but it will not help in these two cases: the service is in a running state, so it does no good.
I have in fact opened a VMware case on this and am still waiting on a plan of action, but I'm sure it will be the next build update before it's fixed. So in the meantime I have dug through every page to try to figure out a way to monitor these services so we can show true up and down status. I have come up with this solution: monitor the status code of this URL:
If it reports back 404, then things are good. If it reports anything other than 404, then something is broken. This does not fit all use cases, as the URL will also respond with a 404 if the service is not started, so it's a double-edged sword. For me, we are still monitoring the same health check URL, https://AppVolServerFQDN/health_check, but also monitoring this one. With the combination of both, you can see whether it's up or down, just not with one URL like you hoped.
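As a rough sketch of combining the two checks into a single up/down result: $SecondaryUrl below is a placeholder for the URL described above (substitute your own), and the expected-status logic simply follows the behavior described in this post.
$HealthUrl    = "https://AppVolServerFQDN/health_check"
$SecondaryUrl = "https://AppVolServerFQDN/(the URL described above)"   # placeholder - substitute the actual URL
# Helper to return the HTTP status code even when Invoke-WebRequest throws on a non-2xx response
function Get-HttpStatusCode ($Uri) {
    try   { (Invoke-WebRequest -Uri $Uri -UseBasicParsing).StatusCode }
    catch { $_.Exception.Response.StatusCode.value__ }
}
$HealthStatus    = Get-HttpStatusCode $HealthUrl
$SecondaryStatus = Get-HttpStatusCode $SecondaryUrl
# health_check should return 200 and the secondary URL should return 404 when AppVolumes is healthy
if ($HealthStatus -eq 200 -and $SecondaryStatus -eq 404) { "AppVolumes looks up" }
else { "AppVolumes looks broken (health_check: $HealthStatus, secondary: $SecondaryStatus)" }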
Best of luck to everyone on monitoring these till we get a full solution. I will update the blog once I receive feedback on the status.
As a bunch of us are starting to convert our environments over to SharePoint Online and Teams, we are having to adjust how we do things, now that Teams uses SharePoint Online as its back-end file structure.
Problem: Need to find a way to add files to a teams site from an automated script.
Doing some research, I found there was a simple way to do this. The simplest way to add files to a Teams site is just to send an email to the team's email address. Odds are you are already emailing the report or task results to yourself or a team, so why not just add the Teams site address? How do you find the email address? Inside your Teams site, on the channel you want to send to, click on the three dots. It will look something like below.
Click on the Get email address button and you will now have the email address. This works if you are just sending email results to Teams. But what if you have an attachment? That also works, but it's listed as an email attachment.
What if I just want to send a file on its own with no email? This is where things got fun, as Teams has no real API that you can use for stuff like this. (I may be wrong, as I could not find one.) The only way I could figure out to do this was to add the files directly to the back-end SharePoint site.
This requires you to install the PowerShell module SharePointPnPPowerShellOnline. It gives you the functions to connect to the SharePoint site and upload files directly to where you want them to go.
If you are trying to drop a single file into a teams file share you can run the command below:
$SharepointURL = "https://youteamssite.sharepoint.com/sites/TeamsName"
$OutPath = "C:\Temp\ToSharepoint\Test.txt"
# Connect to Sharepoint
Connect-PnPOnline $SharepointURL -Credentials $(Get-Credential)
# Send File to Teams Sharepoint site
Add-PnPFile -Folder "Shared Documents/Reports/FolderName" -Path $OutPath
If you want to add the contents of a folder, you can just add a loop and copy all the files from that folder to the Teams site.
$SharepointURL = "https://youteamssite.sharepoint.com/sites/TeamsName"
$OutPath = "C:\Temp\ToSharepoint"
# Connect to Sharepoint
Connect-PnPOnline $SharepointURL -Credentials $(Get-Credential)
# Send Files to Teams Sharepoint site
$Files = Get-ChildItem "$OutPath"
foreach($File in $Files){
Add-PnPFile -Folder "Shared Documents/Reports/FolderName" -Path $File.FullName
}
Add one of the two snippets above to any of your reporting scripts, and you will have reports dumped directly into your Teams site's "Files" directory.
I have been having trouble for a while dealing with credentials and how to store them for use in scripts. I have done the whole "just do Get-Credential and enter it every time I run the script" thing, but that is great for one-off scripts, not for scheduled tasks. In the past I had just been using Import-Clixml and importing the creds saved in a txt file. This works well, but now you have to deal with actual txt files. I ran across an article somewhere, while reading about something else, and remember someone mentioning saving credentials to the Windows Credential Manager. After doing some research, digging, and reading, I found this gem of a PowerShell module. The CredentialManager module is an easy module to use, and simplistic, with only 4 commands.
With these 4 commands you can now save credentials to, and call credentials from, the Credential Manager. This is a huge win for me. No more having to deal with cred files, trying to remember what account created the txt file, and fighting that mess.
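As a hedged example of how this looks in practice (the cmdlet names come from the CredentialManager module, the target name and account are placeholders, and parameter names may vary slightly by module version):
# Store a credential once, under a friendly target name, in the Windows Credential Manager
New-StoredCredential -Target "LabvCenter" -UserName "corp\svc_scripts" -Password "SuperSecret1!" -Persist LocalMachine
# Later (for example in a scheduled task) pull it back as a PSCredential object
$Creds = Get-StoredCredential -Target "LabvCenter"
# And remove it when it is no longer needed
Remove-StoredCredential -Target "LabvCenter"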
The command will get you a password that is 20 characters long with 4 special characters. This is a quick way to generate a password for your needs.
This little bit of info has saved me a huge amount of time. I am not claiming that Credential Manager is the most secure method, but it's way better than saving passwords in clear text in the script, and much more manageable than having to deal with txt files.
If you have set up your Logon Monitor, that is great, but it's lacking a ton. How do you look at this on a holistic basis? How do you start to look at trends? Well, there is nothing out of the box for you; you pretty much have to build the solution on your own. Well, you are in luck. I had some time on a flight to Dallas to throw something together pretty quick.
What I built was a tool that will query the remote Logon Monitor folder, look through each of the log files and collect the following:
Logon Date
Logon Time Stamp
Session Users
Session FQDN
Logon Total Time
Logon Start Hive
Logon Class Hive
Profile Sync Time
Windows Folder Redirection
Shell Load Time
Total Logon Script
User Policy Apply Time
Machine Policy Apply Time
Group Policy Software Install Time
Free Disk Space Avail
I pull this from each of the log files, put it in a table view, and export it to a CSV. Yes, this is nothing too fancy, but from here you can publish the results to a SQL database instead, create a web front end to show fancy graphs, and if you are lucky you can put it behind Microsoft's Power BI.
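To show the general shape of the approach, here is a stripped-down sketch. The log path, file filter, and regex pattern are assumptions for illustration only, not the actual patterns my script uses; the full version on GitHub parses all of the fields listed above.
# Placeholder paths - point these at your remote Logon Monitor share and output CSV
$LogPath = "\\fileserver\LogonMonitor\Logs"
$CsvPath = "C:\Temp\LogonStats.csv"
$Results = foreach ($LogFile in Get-ChildItem -Path $LogPath -Filter *.txt) {
    $Content = Get-Content -Path $LogFile.FullName -Raw
    # Example pattern only: pull the total logon time out of the log summary
    $LogonTime = if ($Content -match "Logon Time:\s*([\d\.]+)") { $Matches[1] } else { $null }
    [PSCustomObject]@{
        LogFile        = $LogFile.Name
        LogonTotalTime = $LogonTime
        # ...the real script extracts the rest of the fields listed above in the same way
    }
}
$Results | Export-Csv -Path $CsvPath -NoTypeInformation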
To use this, you need to follow my previous post to set up Horizon Logon Monitor and configure the remote Logon Monitor path.
Once you have this set up, you can run the script as a scheduled task to collect log data. The script is set up more as a framework, and I will continue to add to it as I have time.
You can access the script here, or it can be found on my GitHub site.
Download it, fill in the remote log path, and set where you want the CSV saved and what you want to name it. When you run the script you will get a CSV like below.
I have completed some major updates to this script. I have added the ability to turn features on and off, and also added the ability to clean up old log files so you are not filling up drives.
I have incorporated an email function that attaches the day's CSV file with the performance stats, and it also includes a bar graph of the average logon times for the last 14 days, organized by day. The chart will look like below: it highlights the lowest time in green and the highest in red. The email also has a breakdown of the averages for the day.
I have also added SQL functions so you can export the data to a SQL database. As the script runs, it exports the data to a SQL table in a SQL server you have already stood up. Inside the Git repo is the SQL script to create the table, and also a script to run for de-duplication of the data; you should not run into duplicate records, but in my testing I ran into a ton.
From here the possibilities are pretty limitless: you can build a Power BI site, you could build your own webpage graphing the stats, or many other options.