What next, MDOP?

Microsoft Desktop Optimization Pack (MDOP) is a subscription add-on used by many enterprises with Microsoft Enterprise Agreements (EA) that provides a number of software benefits. The software has recently been released on a twice-a-year basis, making the next version due sometime soon. Microsoft typically does not pre-announce release dates, so we can only guess that, based on the last release being November 1, we should be due for a release around May 1.

In addition to providing upgrade benefits, meaning you can upgrade PCs to the latest version of Windows as long as the MDOP subscription is in place, MDOP provides additional software to help with enterprise deployments of Windows desktops. In each release of MDOP, Microsoft typically updates a subset of these applications, and sometimes adds new ones. The last released version of MDOP (2013 R2) included updates to support Windows 8.1, plus a few extra things. The biggest was a major update to the Application Virtualization product (App-V 5.0 SP2 and 4.6 SP3), along with smaller updates to Microsoft BitLocker Administration and Monitoring (MBAM 2.0 SP1), Advanced Group Policy Management (AGPM 4.0 SP2), and Diagnostics and Recovery Toolset (DaRT 8.1). Other components in MDOP, including the User Environment product (UEV 2.0) and MED-V, were apparently unchanged from the previous MDOP release.

Given that Windows XP is no longer under support, I am guessing that the next version of MDOP will drop MED-V. MED-V is the managed version of Windows 7 "XP Mode", in which a copy of Windows XP is virtualized on top of Windows 7 using the older Virtual PC hypervisor. If XP goes away, I can't see Microsoft continuing to support MED-V as a product. Probably more important than the virtualization piece of that product, what Microsoft should find a way to re-use is the innovative user interface integration that came with the acquisition that created MED-V. In a world where people are using virtual machines, whether local or remote, integration of the user experience on a per-application basis remains interesting. Citrix has long had such capabilities, and other than RemoteApp, Microsoft does not. The value of having a single user interface experience, start menu, file associations, and "seamless application windows" on a single desktop is the end-user's nirvana. This might or might not be an MDOP thing, but I hope they don't forget about it if MED-V is dropped.

But if MED-V is removed from MDOP in the next release, Microsoft will feel a little pressure from customers to add something more into MDOP. Which opens up the fun guessing game of what Microsoft could or should add!

I think that everyone's top item, a license allowing you to host VDI images on a shared hosting provider's infrastructure, ain't gonna happen. MDOP might not be the right vehicle for that anyway, but any way we can get it would be better than the current situation.

Next on my list would be things to improve image management. Something like the capabilities of the Microsoft Deployment Toolkit (MDT) but with a better user interface would be great. I love MDT, but the UI is something out of the 1980s. It can be a pain to do things like make a copy of a task sequence to tweak it.

Or maybe attack image management from the other end. Something along the lines of what FsLogix is doing. I like their idea of a single image tweaked at runtime by policy, in essence enabling apps on the fly, so much that last year I joined their advisory board. You still need App-V to handle application conflict, but a combination of something like FsLogix, App-V, and a better User Environment product would be a solid combination. Microsoft might be better off buying the company and adding it to MDOP rather than developing this on their own, but one way or another the capability would be a great addition to MDOP.

Further down my list is something to help the user and IT organize all of their remote "stuff". A single interface where remote machines, remote apps, storage repositories, and even website credentials are managed. This could be simply a UI that accesses things, or more of a secure repository that also holds the credentials (safely) with centralized backup. This would require Microsoft to move their thinking beyond "just buy Office 365", which might be too much to ask, however.

Given the lack of noise, I doubt any of this is happening soon. What would be on your list for MDOP?

Disclaimer: Although I am a Microsoft MVP, this article contains no “inside knowledge” or NDA material. My contacts at Microsoft are not talking about this and clam up if asked.

Streaming Theory – Should you Launch?

A small comment by Microsoft’s Thamin Karim at the European AppV User Group event in Amsterdam last week caused me to re-think streaming in App-V 5. It was one of those “duh!” moments that occur when something so obvious hits you and you realize that you hadn’t really thought about it and were operating on wrong assumptions.

The comment was, in essence, that you get slightly better performance by not preparing the App-V package for streaming and instead relying on "fault streaming" [in certain situations] with App-V 5. Here are my thoughts on this.

Streaming and Performance
When stuff needs to be transmitted over a network, we use the term streaming to indicate a difference in how it is transmitted and consumed. Rather than transmitting the entirety of the content and then operating on it, the operation is allowed to start before the entirety has been received.

Streaming in App-V is a major feature that allows apps to run without being fully present locally. The native operating system is built with the assumption that the entirety of a file is present locally on disk, or on a remote share and loaded fully into memory, before operations on it are enabled – meaning before the application is allowed to run as a process. Streaming is used in App-V both to improve performance (allowing the application to start sooner or use fewer local resources) and to save on local resources (by avoiding local caching of things not really needed).

When an application starts without App-V, a detailed analysis of the start-up period shows activity bouncing back and forth between CPU and disk I/O. Initially, there is disk I/O to read the PE header, and other portions, of the exe. This is followed by some processing, then more I/O. And this bouncing back and forth continues for some time.

Side-note: App Pre-Fetch

To explain this, let me start with an example using another Microsoft performance boosting technique: "App Pre-Fetch". Microsoft uses a method it calls "application pre-fetch" to improve the performance of this critical period somewhat. App Pre-Fetch pays attention to the loading order of dlls by an application during the first 10 seconds the process is running and writes the locations of these to a ".pf" file stored in a subfolder under the Windows folder. This .pf file is automatically read by the system whenever a new process is spawned to run that exe, and in the background these dlls are queued to be read even before the exe gets around to requesting them. This speeds up application startup by eliminating wait time that occurs without it. In the depiction below (illustrative, not actual measurements) you can see how, without App Pre-Fetch, these operations are serialized and take longer than when pre-fetch is used.

This pre-fetch affects only dll file reading and not additional files, but you can measure the impact by locating and deleting the .pf file on a system and re-running the app.
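For example (a hedged sketch; MYAPP.EXE is a placeholder for whatever exe you are testing), the pre-fetch files live under the Windows Prefetch folder and are named after the exe plus a hash:

# List the pre-fetch file for a given exe; delete it to measure a "no pre-fetch" launch.
Get-ChildItem "$env:WINDIR\Prefetch" -Filter "MYAPP.EXE-*.pf"
# Remove-Item "$env:WINDIR\Prefetch\MYAPP.EXE-*.pf"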

Note that App-V does not prevent this pre-fetch from improving launch performance; however, this effect is in place only for the second and subsequent launches at the client.

App-V Stream Training
When we perform stream training inside the sequencer by launching the application in the Streaming Configuration phase, we are trying to achieve a similar effect for the first launch of the application. The difference here is that we consider ALL file I/O activity, including the exe file pages, dlls, and other asset files read in.

A big difference, however, is that by design App-V reads all of those files in first, and only then is the app allowed to start consuming CPU.

In App-V 4.x, these portions of the files are placed contiguously in the .SFT file and streamed using a single request from the client. Assuming the actual disk placement of the SFT file on the server is not fragmented, this produces an optimal stream transfer time to get the files into the local cache (and on first launch, as a byproduct of streaming, into memory). Once the pieces are in place, the small amount of startup CPU work completes quickly.

In App-V 5.x, the "jigsaw file system" (SFT) is no longer used, and files are stored as complete files inside the App-V file, unrelated to stream training. Stream training affects only an XML metadata file that dictates which portions are needed. The App-V client and drivers use this metadata to make multiple requests to stream over the required content. While ensuring that the AppV file is not fragmented will speed up the reading of a single file, it tends not to improve the multiple-file requests as much.

In the depiction below (as before, this is illustrative and not taken from any real measurements) you can see how the launch performance with training could be slightly different.

In the App-V 5.x with Stream Launch Training example, the small gaps shown in file transfer represent a small delay that occurs as the client processes additional requests in the list. This delay is probably small enough that it shouldn’t be visible on a drawing of this scale, so I exaggerated it.

The case for not launching during training
In App-V 5, when Shared Content Store mode is in use, all launches appear similar to the “first launch” scenario.
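(For reference, Shared Content Store mode is a client-wide setting, typically enabled via PowerShell; a minimal sketch:)

# Enable Shared Content Store mode on the App-V 5 client.
Set-AppvClientConfiguration -SharedContentStoreMode 1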

While I do not have any test numbers to prove this, it is reasonable to assume that with App-V 5, in a scenario where Shared Content Store mode is in use, performing a launch during training could actually slow down the launch by requiring that no CPU is expended until everything is in place. This is especially the case when the training portion becomes larger.

I expect the difference in the amount of time before the application is ready for user interaction to be quite small. Without launch training, the user would certainly start to see some application UI elements earlier than when launch training is performed. And while I am not convinced it matters much whether you launch or not in this scenario, the point is that it probably doesn't affect performance much, so you can probably stop performing the extra work of training the stream.

But keep in mind that if you let the user see the streaming indicator (a configuration option added back in SP2 that adds a streaming percentage progress bar above the icon tray), then performing stream training can be useful: not for performance, but just to provide the user some feedback that something is happening before the application GUI displays. Under different scenarios, such as an SCCM distribution using HTTP streaming off of the DP without SCS mode enabled, this can be quite useful.

So I'm not going to answer the question "should you launch?" I haven't even touched on some of the other considerations involved in this post. The answer can only be "it depends". It is a complicated question that must take into consideration how you distribute, how you configure the clients, and whether or not clients can go offline. But please stop doing it just because we told you to do it in App-V 4.x.

Request for new VDI Term: “Semi-Persistent”

VDI is often categorized as either Non-Persistent or Persistent.

Non-persistent VDI is where you use a shared common image. Only one image to maintain. When the user logs off, the image is destroyed and the next time the user logs on they get the original image.

Persistent VDI is where the complete image is retained upon logout, and the next time the user logs on they get the exact same image they had when they logged off.

The reality is that a Non-Persistent VDI implementation usually brings along some user data from the prior session. This is handled by Roaming Profiles at a minimum, but may also include folder redirection or a user environment add-on product to manage the user-related data, either app related (UEV, AppSense, RES, TriCerat, Norskale, etc.) or layering (Unidesk, Citrix PvD, 2012 R2 "User Layer").

I think we need a different term for this, segregating it from Non-persistent. I’m going to start calling this “Semi-Persistent”. What do you think?

The Paint.Net App-V 5 connection group solution

OK. So a simple solution for every connection group situation isn't going to happen any time soon. See the recent posts in this series: "Who Are You"? and App-V 5 Script Error 534, and A collection (so far) of #APPV 5 Client file visibility and blocking information. But here is one situation that I solved, and maybe it might help you with others as well.

The requirement
You want to provide Paint.Net as a virtualized application package using App-V 5. You package it up and out it goes. Paint.Net supports plug-ins, and let’s say that you start getting requests for the plug-ins. You could update the original package in the sequencer and give everyone the plug-in, but this is App-V so connection groups are a possibility.

There are hundreds of these plug-ins, some having the same name, so you can't have both at once. And they are independently released, so you never know when someone is going to need a new one. All of which makes this a good candidate for using connection groups, so that you don't impact users that don't need the new or updated plug-in.

The problem
Except that Paint.Net doesn’t see the plug-ins when you use connection groups.

Some background on plug-ins in general
Sometimes apps work together as separate processes, but I don't call those plug-ins. I use the term plug-in to indicate a scenario where a product's exe process actively supports the addition of extra code (or sometimes data), independently developed and released, typically by a third party, that is loaded by the application directly into its own Windows process.

Note: Wikipedia uses the term "add-on" as a category that includes "plug-in", "skinning", and "theme" customization of an app. It also defines a separate category for "extensions", but those are pretty much plug-ins with a more complicated integration.

The extra code (or sometimes data) usually is in the form of a dll, but possibly other PE-format files such as a tlb.

The app supporting the plug-in can locate plug-ins dynamically using a scheme that the developer chooses. And sometimes the developer thinks he/she is smarter than he/she really is. The two most common choices are:

  • The app reads an app specific registry key where plugins are registered. The plug-in registration may be a simple REG_SZ entry added to this key, or may be a sub-key with additional data, but ultimately the plug-in installation must add the dll component to the file system and then register it in the registry so that the app loads it in. In this registration, you might find just the name of the dll, or a full path to it.
  • The app looks to a specific folder and just loads all of the dlls it finds. How the app locates that folder is unimportant to the developer, but how he writes that code to find them has a huge impact on connection groups. There might be a registry key containing the location of the plug-in folder. There might be a registry key containing the location of the install folder of the app, and the plug-in loading logic runs relative to that location. The app might use a location relative to the current working directory. The app might use a location relative to the folder from which the primary exe was actually loaded into memory, as read from the process block. The app might use a hard-coded folder (I mean, who wouldn't install to C:\Program Files\Paint.Net?).

Or the app might do something else bizarre that I haven’t run into yet.

Oh, and when an app loads a dll, it can provide a full path to the dll, or just give the name of the dll and let Windows find it. Most of the time, the latter method is used. When an exe is built and references are added to its import table to automatically load when the exe is loaded, it must use the latter method, and we are used to seeing this behavior in procmon traces: Windows looks in the folder the exe was loaded from, then the system folders, then the current working directory, and finally walks the PATH variable. A really good developer would register an App Paths entry for the executable in the registry to prepend the additional folders, but apparently there are no good developers out there. A developer writing an app to load in plug-in dlls after launch could choose to just ask for the dll by name, or could choose to provide the path.
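As an aside, that App Paths registration looks roughly like this (a hypothetical sketch; the exe and folder names are made up for illustration):

# Hypothetical App Paths entry: the "Path" value holds extra folders that get added to the
# search path when this particular exe is launched via the shell, which affects dll-by-name loads.
$key = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\PaintDotNet.exe"
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name "(Default)" -Value "C:\Program Files\Paint.NET\PaintDotNet.exe"
Set-ItemProperty -Path $key -Name "Path" -Value "C:\Program Files\Paint.NET\Effects"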

What the developer tells us about Paint.Net and Plug-ins
For Paint.Net, plug-ins sometimes have actual installers, but more often you just get a dll file.

The developer of Paint.Net tells plug-in developers to just drop these dlls into a specific sub-folder where you installed the product. Most go into a sub-folder called “Effects”, but there are also plugins that should drop files into the “FileTypes” or “Resources” sub-folders. Paint.Net has no registry setting for the location, or for where it was installed, so we are talking about a situation more like choice 2 above.

Things that don’t work for Paint.Net and Plug-ins via Connection Groups

  • Original package is installed to the designated PVAD (whether the expected location in Program Files or something more creative). Create the plug-in package by declaring a different PVAD, Expand-to-local-system and drop the files under the original package’s folders, causing these files to be VFS’d.
  • Original package is installed to the designated PVAD (whether the expected location in Program Files or something more creative). Create the plug-in package by declaring the same PVAD as the original package, Expand-to-local-system and drop the files under the original package’s folders, causing these files to be PVAD’d.
  • Original package is installed to a folder other than the designated PVAD (whether the expected location in Program Files or something more creative), causing the files to be VFS'd. Create the plug-in package by declaring a different PVAD, Expand-to-local-system and drop the files under the original package's folders, causing these files to be VFS'd.
  • Original package is installed to a folder other than the designated PVAD (whether the expected location in Program Files or something more creative), causing the files to be VFS'd. Create the plug-in package by declaring the same PVAD as the original package, Expand-to-local-system and drop the files under the original package's folders, causing these files to be VFS'd.
  • Any tricks involving Pellucidity (merge-with-local or override-local) settings in any of the packages in any of those scenarios.
  • Any of those scenarios, using a trick to modify the shortcut current working directory. The shortcuts for apps reference the target exe under the C:\ProgramData\App-V\guid\guid\Root folder (or Root\VFS\… under that when a VFS style install was used in the main package), and by default use the containing folder as the current working directory. We can't alter the target location, as it must point to the real exe, but we can make an edit in the DeploymentConfig file to change the current working directory. Unfortunately, the developer does not seem to use the current working directory to locate the plug-in folder. And even if it had worked, it would only solve launch by shortcut and not by FTA.

To the best of my knowledge (because the developer doesn't provide the specifics or the source code), the developer seems to be looking for the dlls using a location relative to the folder from which the primary exe was actually loaded into memory, as read from the process block (or, in more modern .Net developer speak, the loaded assembly information obtained using reflection), and then providing the full path to each dll to load it.

The exe will always be loaded from the C:\ProgramData location and (currently) there is nothing we can do to change that. Each of the attempts above fails to overlay the plug-ins in a way that will be seen using that detection technique.

Something that did work
I had given up on connection groups with Paint.Net after struggling with it on and off over a period of months.

But then one day, while I was researching something else, I hit upon an idea. What if I sequence the plug-ins not like the sequencer environment, but like the client environment under App-V? Here is what works.

  1. Sequence Paint.Net. You can use either a PVAD or VFS style install of the main package, but to keep things simple let’s assume a PVAD style installation.
  2. Revert the Sequencer to a clean snapshot.
  3. Prior to sequencing, determine the package and package version GUIDs of the main package. This information is located in the internal AppXManifest.xml file. I typically use the AppV_Manage tool to pull this information out for me: on the Publishing tab, select the package and expand the manifest in the lower window; the GUIDs are under the identity property of the manifest. (A PowerShell alternative is sketched just after this list.)
  4. Natively install Paint.Net to the C:\ProgramData\App-V\guid\guid\Root folder (or the actual VFS subfolder under that folder). In other words, make the sequencer look exactly like the client will look when the package is deployed!
  5. Now start sequencing, but as a new application and not as a plug-in. Set the PVAD to a C:\uniquename (not that it really needs to be unique). Install/drop the plug-in dlls under that ProgramData location, which will then be VFS'd.
  6. Create your connection group and you are good to go!
    • The only problem with this technique is that you can never update the main application without having to redo the plug-ins. So maybe you just ask the users what plug-ins they need and stick with a single package, plug-ins included, that you only update once a year.
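As an alternative to pulling the GUIDs out by hand in step 3, here is a hedged PowerShell sketch, run on a machine where the main package has already been added ("Paint.Net" stands in for whatever name your main package was given):

# Read the package and version GUIDs of the already-added main package.
Get-AppvClientPackage -Name "Paint.Net" -All | Format-List Name, PackageId, VersionId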

A collection (so far) of #APPV 5 Client file visibility and blocking information

While I continue to try to get a better grip on connection groups, let's continue documenting some of the side issues I have come across. Today, I'll focus on a single package where you either want to block visibility of a potential local installation or need to allow visibility of something local, like a license file.

In attempting to work solutions to these challenges, I noticed a few things in 5.0 SP2 that need to be written down somewhere. Here is a collection of things involving the file system.

Single package oddities of the File System kind:

  • In the sequencer editor, you cannot change the Pellucidity setting (“merge with local” versus “override local”) on folders that are under the Root (aka “Primary Virtual Application Directory”, or PVAD) folder, only those that are in the VFS area.
  • The visual indication of this setting as shown in the sequencer for folders under the PVAD (the ones where you can't change the setting anyway) is sometimes wrong. They may appear as grey (normally meaning "Merge with Local") or yellow (normally meaning "Override Local"), however the color does not appear to affect the implementation at the client for those PVAD folders. In a typical package (where all folders are created while monitoring) it seems that the Root is grey, direct subfolders are yellow, and subsequent subfolders are grey. But the implementation at the client depends on how the reference is made and has nothing to do with the color of the folders above.
  • By contrast, you can change the Pellucidity settings when VFS style installs are used, and the display is accurate.
  • By default, an executable in the package with a shortcut is launched at the client using a current working directory under the ProgramData/AppV folder so the merge/override setting is not applicable as the client cannot have any local files in those locations. But if the program looks for files using a reference of the original PVAD folder, the client has a virtual junction point pointing to the ProgramData location, allowing local files under a real local PVAD folder to be seen, and all of the PVAD folders then act like merge-with-local (no matter what color is showing).
  • Pre-creating the PVAD folders in advance of sequencing affects the display, but not the client implementation one iota.
  • In the file system tab of the sequencer editor, you can only add a file, not a folder.
  • In the file system tab, you can only add a file that is currently on the same disk partition as your package, and it cannot be added into the PVAD area, which means it ends up in the VFS area.
  • Once a file is added this way, you can then edit the file mapping to change the mapping to a PVAD location to get the placement you want, including naming a new sub-folder not currently present in the PVAD. If desired, you can then remove the added file from the tab, leaving the folder you created. [EDIT: I always recommend making such a change by adding the file during monitoring mode of the sequencer rather than in the editor whenever possible; we tend to have issues with "losing" these kinds of edits if the package is upgraded in the future.] In the image below, FolderUpdateC and file FileUCA.txt were added this way.
  • For the file system, you can add deletion markers, which hide visibility of a potential client entity. But it only works at the folder level, not the file level. Furthermore, these markers do not show in the Sequencer Editor. (To mark a folder as deleted, pre-create the folder prior to sequencing, then delete the folder while in installation monitoring mode).
  • Folders in the VFS have default settings just like App-V 4.*: if created new during monitoring, they are marked override local; otherwise, they are marked merge with local. Client testing results, however, are affected by how the app references the folder. The default publishing for the shortcut will actually be to a ProgramData/App-V/guid/guid/Root/VFS folder, meaning that the effect is override local. If the app references the original location (like C:\Program Files\vendor) then the results will either be merge or override depending on the package setting.
  • When you have multiple level folders, walking down from the root, as soon as a folder is marked for Override, all subsequent sub-folders act as override (even when marked merge) because the app can’t see client files past the first Override. This is nothing new, but everyone needs a reminder now and then.
  • For shortcuts, you can change the working directory. This is done by editing the deploymentconfig file. By changing the working directory back to where the app was installed to, you can gain merge with local capability.
  • Unfortunately, there does not seem to be a way to get FTA and protocol handler launches to have an altered working directory.
  • Junction Points may be included in your package. This makes another way to pop in an external license file.
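For example, a hedged sketch of that junction-point idea, run from a cmd prompt while monitoring in the sequencer (both paths are made-up placeholders):

rem Create a junction inside the package that points to a folder expected to exist at the client.
mklink /J "C:\PackageName\License" "C:\ProgramData\ExampleVendor\License"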

Multi-package oddities of the File System kind:

  • If you have several packages in a group that reference the same VFS folder, all must mark the folder as merge-with-local if you want the app to be able to see any additional files and sub-folders that are present at the client.
  • Conversely, if any one or more of the group packages have the VFS folder marked with the override-local setting enabled, you will not be able to see the client files/subfolders. It does not matter which of the packages has this setting (primary or plug-in), nor in what position the override set package is placed in the group ordering.
  • And to finish off the connection-group-with-one-VFS-folder-marked-override-local scenario: setting override-local on any of the connection group packages has NO effect on the ability to see all of the plug-in files from the other packages under that folder, no matter what order is used in the group. This means that you can always see all of the files from every package in the group, as long as the folder is in the VFS and is being referenced by the original path reference.

As I said, not like 4.* at all. And that last item on the list gives me some hope that it might be possible to get more plug-ins to work in connection groups. But it isn’t simple. Controlling the reference used by the app is a hack that can be applied in some cases and not possible in others. And VFS installs sometimes don’t work for some packages at all. Shortcuts seem to only be able to point to real exe files, not to other shortcuts or symlinks. You can affect the current working directory of a shortcut, but not for file type associations. And I have some PVAD cases where the presence of a client folder seems to affect plug-in visibility that I have yet to document.

So while I continue to pound away at some of those, let’s just stop here and summarize what we can do with a single package and potentially stuff at the client layer.

Blocking Visibility

When you want to block visibility of a potentially locally installed native version of an app, you need to consider both the registry and the file system.

The file system is most easily handled by making sure that you don't install to the same place that the native app will be. Whether you install to a unique folder in the PVAD or have it VFS'd, just don't specify the PVAD as being under Program Files and don't install to Program Files when you sequence it. This eliminates all forms of file conflict, and it also eliminates visibility of natively installed files for the same app no matter how the app references the file locations (assuming it doesn't pick up an old registry reference pointing it there).

The Registry lacks some of the File System complications; when sequencing, it acts like a VFS in that Pellucidity is correctly marked based on new versus pre-existing keys, and you can always change the setting on the keys. However, it does not allow you to install to a different registry location, so it is important to ensure that the top-level vendor key (in both HKLM and HKCU) is marked "override local" when you need to block.

There can be other issues that require you to do more to fully block. Java is a great example, and Aaron Parker (stealthpuppy.com) has a blog post that shows the basic technique of adding Registry Key deletion markers to the package (the post is for App-V 4.5, but other than needing to add newer Java version references it should be sufficient for you). And Dan Gough (packageology.com) is in the process of posting a three-part series on Java and App-V 5 as I write this.

Blocking Visibility with (limited) Client file access

Let's say you have a package that needs to reference a license file, but you want to block visibility of the other, locally installed, version.

Start by installing the software in your package to somewhere other than the normal Program Files area. I like to use a PVAD of a folder directly under C:\ named for the package, so install to there. Or if you want a VFS style installation, name the PVAD as C:\PackageName.PVAD and install to C:\PackageName. The latter, while not recommended by Microsoft, gives you the flexibility to customize all of the folder Pellucidity settings (and usually works anyway).

The situation with the registry is the same as before. The right keys should be set to override automatically, but if not you may change them.

To make the license file visible, you might try one of several approaches:

  1. Get the file copied to inside the virtual environment at the client. This would entail a StartVE script to copy the license file to the PVAD-referenced folder location, which would then be redirected to the user's application-related data area. Currently, this only works if the package was a PVAD installation and not a VFS one (hopefully this might change some day). You probably want to copy a file from the user's home drive share folder, so you may need my ScriptLauncher tool to make this work.
  2. Add a junction point inside the package for the license file, pointing to somewhere outside the package. The target location should probably be on the client machine (to avoid having to enable local-to-remote link following), ideally in the user’s appdata roaming folder. Because the reference will be done under the user credentials, this can work.
  3. Change DynamicConfig.xml as follows.
    a) Locate and modify the current working directory of the application shortcut (an illustrative before/after is sketched after this list). Note that you change the working directory to reference the folder as C:\PackageName\…. You don't want to set it as [{AppVPackageDrive}]\PackageName but actually as C:\PackageName (the former would become C:\ProgramData\App-V\guid\guid\Root\VFS\AppVPackageDrive). This is another reason not to install the application to C:\Program Files\vendorname in your package; you would have to hard code a reference to either C:\Program Files or C:\Program Files (x86) for the working directory, and your package would only work on one architecture.
    b) Add an AddPackage script to create the base folder on the client system as an external folder (see the example at the bottom of this post).
    c) Create an SCCM job to copy the file locally, or add a StartVE script to copy the file (and don't let your users become admins).
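To illustrate the working directory change from step a) above, here is a trimmed, hedged sketch of a Shortcut extension as it might appear in the Dynamic Config file (the exe name and surrounding elements are examples only):

<Extension Category="AppV.Shortcut">
  <Shortcut>
    <Target>[{AppVPackageRoot}]\PackageName.exe</Target>
    <!-- before: <WorkingDirectory>[{AppVPackageRoot}]</WorkingDirectory> -->
    <!-- after: hard-code the original install folder so the app can also see local files there -->
    <WorkingDirectory>C:\PackageName</WorkingDirectory>
  </Shortcut>
</Extension>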

“Who Are You”? and App-V 5 Script Error 534

One of the frustrations that I have in working with App-V 5 virtual apps is remembering to forget everything that I know about how application virtualization worked before version 5. Quite often, the same concept is implemented just so slightly differently that I make assumptions about an app problem that are incorrect, simply because I "know" more than I should. Even worse is when things act differently in some cases but not others, due to test cases that I had not considered being important distinctions.

Lately I have been digging into issues we have seen with Connection Groups. I had hoped to have an article that would look at some problems and solve them, but it seems I just keep getting deeper and deeper into side issues that distract me when I discover that at the detail level things don’t work exactly like I thought. Even worse is when they sometimes act differently due to those additional conditions that I had not considered.

Ultimately, I think that my research papers on Pellucidity, Deletion Objects, and Connection Groups in App-V 5, as well as Connection Group Fun with App-V 5, may turn out to be rather naïve regarding the file system and only really cover certain situations. Eventually I'll have to update those, but for now let me document one of the side issues that has come up and is slowing me down in getting the complete picture of how things work now.

Scripts and Error 534 (hex)

Last year Microsoft's Josh Davis wrote about adding scripts to Dynamic Config files. But that doesn't mean I don't still struggle with them. Here is an example.

One side issue that came up recently was in attempting to run a simple script as a Publishing Script in the UserConfiguration section of the DynamicConfig.xml file. I simply wanted to create a folder, external to the virtual application, at publishing time.
Here is an example of that script:
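(A minimal sketch of the idea, placed in the UserScripts element of the UserConfiguration section; C:\BaseVFS is just an example folder name.)

<UserScripts>
  <PublishPackage>
    <Path>cmd.exe</Path>
    <Arguments>/c mkdir C:\BaseVFS</Arguments>
    <Wait RollbackOnError="true" Timeout="30"/>
  </PublishPackage>
</UserScripts>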

Let’s not worry about why I wanted to do this, because it turns out that it doesn’t matter what you try to do in the script. When you add the package with that config file, and then publish to the user, you get this neat little error:

And there is little information about error 534 out there, other than what is already displayed in the error above: some sort of context error. I had previously considered the wording in point #3 of the "gotchas" section of the Davis blog to be a typo of some kind:

“User scripts cannot be placed in the UserConfiguration section of the deployment policy. User scripts run in a user context and will only be invoked for user Publish and Unpublish as well as the corresponding runtime events (Start/Stop Process/VirtualEnvironment). If you do this, the package add will fail as this will invalidate the deployment config schema.”

I mean, why would Microsoft add a publishing script sample in the UserConfiguration section, but not an add sample, if you could never use it? I had interpreted that paragraph to mean that AddPackage scripts could not be placed in the UserConfiguration section. But maybe not? Still, I swear that I have done this in the past.

The exact same PublishPackage script placed in the MachineConfiguration section works great (assuming you publish to the machine using -Global). I also tried it using PowerShell instead of cmd, and even just gave it a benign command. All fail with the same error.

We believe that the scripts indicated in the Davis blog as running in the user context, including a UserConfiguration PublishPackage script, are executed by a system process that impersonates the user (which is why user profile environment variables are not available to the script). So it looks like some kind of SID issue. Enabling all of the App-V debug logs also provided no new insights.

After a bit of testing, I learned a new small detail that affects scripts: what kind of account you are logged in as makes a huge difference!

Here is a summary of some test cases that I ran on 5.0 SP2:

                   UserConfiguration              MachineConfiguration
Log In As          PublishPackage    StartVE      AddPackage    PublishPackage
Administrator      Error 534         Error 534    OK            OK
Standard User      OK                OK           OK            OK

The 534 error appears differently when it occurs in a StartVirtualEnvironment script (and, I assume, also in a StartProcess script):

I usually test first logged in as an Admin, and when it looks OK I then test as a standard user, so that was the issue. Unless, of course, you have users that normally log in with Admin rights; it looks like they will need either AddPackage scripts or global publishing if they need scripts.

Side note for those using my AppV_Manage tool: log in as a standard user. You need to launch AppV_Manage using Run As Administrator in order to perform actions such as AddPackage. But you need to launch the tool without Run As Administrator if you want to perform other actions, like publishing to the user instead of the machine. This advice is based on the App-V client PowerShell requirements and is unrelated to the permission issue in the use of scripts documented above, but it is necessary to know if you want to test as a standard user.

For extra credit

Ultimately, I realized that although I want the script to run when the app is published to a user account, in reality it would be equally reasonable to run it at AddPackage time. So moving the script to the MachineConfiguration section and changing it to an AddPackage script made it work for all users. But, of course, this is not going to help if I need to perform an act that is user specific, such as dropping in a license key, and the user is an admin.

An additional issue I noticed is that the script must return 0 when rollback is set. I want it to roll back if the directory creation fails, but not if it fails because the directory already exists. This is easily handled by a modification of the script to first check for the existence of the directory, and to keep it simple without adding a multi-line script file, I modified the AddPackage script to use PowerShell:


<AddPackage>
  <Path>powershell.exe</Path>
  <Arguments>-NonInteractive -Command "if (test-path -path C:\BaseVFS) {return 0} else {mkdir C:\BaseVFS}"</Arguments>
  <Wait RollbackOnError="true" Timeout="30"/>
</AddPackage>

Microsoft blogs about AppV_Manage

Maybe there is something in AppV_Manage you'll find useful. This free tool from the App-V 5 Tools section of the TMurgent website was recently covered as a great source for troubleshooting App-V in a Microsoft support blog post on TechNet by Microsoft's own John Behneman!

Check it out here: http://blogs.technet.com/b/appv/archive/2014/01/30/how-to-troubleshoot-app-v-5-0-deploymentconfig-amp-userconfig-script-deployment-failures-using-appv-manage.aspx

Scripting Restriction in App-V 5: error 0x8AD

In our training class on App-V 5 SP2 two weeks ago (yes, I did do the class using the Service pack less than a week after the release), we had a problem in one of the labs and saw a new client error 0x8AD. I won’t name names, but the student causing all the chaos is visible in the photo below.

Photo from App-V December 2013 Training Class
Or at least the error seemed new to me. Maybe it was in the earlier versions and I hadn't run into it, or maybe it is a different error return from an otherwise known (but easily forgotten) problem.

The problem threw us for a loop, and with multiple people all trying different things it got a little crazy, to the point where I thought perhaps we had a situation where sometimes it worked and sometimes it didn't. In the end, being able to test in a calmer environment with my own hands on the keyboard, I was able to see it for what it was. Here is the story.

We wanted to use DeploymentConfig file scripting to modify the way an application acts after sequencing was complete. Often, this is something you might do if something small is found in UAT and you don't want to crack the package back open. In our case, we wanted to add the ability for the end user to select the app as the default handler for a number of graphical file types (FTAs) used by the application. The vendor did not register these capabilities (Application Capabilities publishing), but we can add them ourselves. Normally, I would perform this action inside the package, but we wanted to work with the scripting in the Config files.
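For context, the plumbing behind Application Capabilities (the Default Programs registration) boils down to a handful of registry entries. Here is a hedged sketch in PowerShell form, rather than the .reg files we actually used, with placeholder vendor names, extensions, and ProgIDs:

# Hypothetical Application Capabilities registration so the app appears in Default Programs.
$cap = "HKLM:\SOFTWARE\ExampleVendor\ExampleApp\Capabilities"
New-Item -Path "$cap\FileAssociations" -Force | Out-Null
Set-ItemProperty -Path $cap -Name "ApplicationName" -Value "Example App"
Set-ItemProperty -Path $cap -Name "ApplicationDescription" -Value "Example graphics application"
Set-ItemProperty -Path "$cap\FileAssociations" -Name ".png" -Value "ExampleApp.Image"
# Point Windows at the Capabilities key.
Set-ItemProperty -Path "HKLM:\SOFTWARE\RegisteredApplications" -Name "Example App" -Value "SOFTWARE\ExampleVendor\ExampleApp\Capabilities"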

We created a set of reg import files with entries along those lines to use when publishing and unpublishing the package. But some students got an error, 0x8AD, when publishing at the client. Others did not. It took some time to figure out what happened, and we jumped to a number of wrong conclusions and issues along the way.


PS C:\Users\Admin> Publish-AppvClientPackage -PackageId 7cf15471-58b2-4753-a5d4-79c4e7446a0d -VersionId 691357a2-96c2-4278-86c5-8bb418b8a0a2

Publish-AppvClientPackage : Application Virtualization Service failed to complete requested operation.
Operation attempted: Publish AppV Package.
Windows Error: 0x8AD – The user name could not be found
Error module: Shared Component. Internal error detail: 0DF02625000008AD.
Please consult AppV Client Event Log for more details.

Issue #1: Disabled scripting at the client (Result=error 0x0D)
Scripting is not enabled by default when you install the client. I leave it off in the class VMs I provide so that students run into this problem in the lab. When scripting is disabled, the error occurs only when the script is run, not when you perform an action using a Config file that includes scripting. So if you Add-AppvClientPackage with a DeploymentConfig file that has scripts, the cmdlet will complete without error as long as you didn't have an AddPackage script. If it contains a PublishPackage script, you would see the error at publish time; a StartVirtualEnvironment or StartApplication script error would occur when you later launch the app.
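(Enabling scripting at the client is a one-liner, by the way:)

# Allow App-V packages to run the scripts defined in their manifest or dynamic config files.
Set-AppvClientConfiguration -EnablePackageScripts 1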

Issue #2: Wrong place in the file. (Result=script didn't run)
The PublishPackage script has two locations inside the DeploymentConfig file, in the MachineConfiguration section and in the UserConfiguration section.
If you intend to publish using the -Global flag, you must place the PublishPackage script in the MachineConfiguration section of the file; if you intend to publish to the current user, you must place the PublishPackage script in the UserConfiguration section of the file.

If you put a script under only one section and publish using the other mode, the script never runs.
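In PowerShell terms (a quick sketch; "MyApp" is a placeholder package name):

# Publish globally: only scripts in the MachineConfiguration section will run.
Publish-AppvClientPackage -Name "MyApp" -Global
# Publish to the current user: only scripts in the UserConfiguration section will run.
Publish-AppvClientPackage -Name "MyApp"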

Issue #3: Testing with a cmd prompt to “see the script”. (Result=Nothing, then Rollback with error 0x0A)
Because of the above, people often want to replace the script with a cmd prompt, so that they can try to run the action themselves. So they replace the script command and parameters with “cmd.exe” and “/k”.

This might work great in a StartVirtualEnvironment or StartApplication script that is running under the user context, but not for those running under the system context (Add, Remove, Publish, and Unpublish, for example). If you do anything in a script running under the system context that causes a user-interface interaction, the script will get stuck, since there isn't a valid desktop to display or prompt on. They seem to just sit there waiting for input. Eventually, the script timeout (default 300 seconds) kicks in and kills the script. If you kept the default rollback action of true on the add/publish scripts, you'll get an error, but only if you wait around 5 minutes to see it.

Some students also hit this problem not with a cmd prompt, but because they forgot the “/s” option on the regedit command to silently perform the import. By the way, regedit and regedt32 both accept either “/s” or “-s”.
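For the record, the silent form of the script element we intended looks like this (a sketch; the path to the .reg file is a placeholder):

<PublishPackage>
  <Path>regedit.exe</Path>
  <Arguments>/s "\\server\share\AddCapabilities.reg"</Arguments>
  <Wait RollbackOnError="true" Timeout="30"/>
</PublishPackage>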

Issue #4: Understanding what you can/cannot do in each section.
You can, and probably should, put the publish script (or a version of it) in each section. The reason for potentially two versions of the script is that you might want different behavior depending on whether the publish is to all users or only to the current user. For example, our scripts should affect HKLM locations when publishing globally and HKCU locations when publishing to a specific user, since the app they relate to has the rest of its extensions published that way.

One piece of overly simplified advice from Microsoft that you might hear is that if you want to change HKLM locations, the script must be placed in the MachineConfiguration section, and if you want to change HKCU locations, it must be placed in the UserConfiguration section. But it turns out that this is true only if you are placing entries in the "Registry" element of the xml file. A script is allowed to modify either registry hive. You might not want to affect locations that apply to all users during a user publishing action, but it will work. Not understanding this can also cause confusion when troubleshooting, as you can jump to wrong conclusions about the results you just saw.

Issue #5: Failing to remove and re-add the package when changing the DeploymentConfig file.
Typically, the script runs a command against a file. While Microsoft recommends putting the file inside the package, often we prefer to put the file in an external location, such as the folder that holds the AppV package itself.

If your first test didn’t produce the results you wanted, you might be able to just edit the external file. Then, all you have to do is unpublish/publish to get the updated script to run. But if you had to edit the DeploymentConfig file itself, then you must also remove the package, and then re-add it with the updated config file first. There are at least three ways to mess that up and in a room of 10 students at least 2 will do it.

And if it had been an AddPackage script, you always have to remove and re-add with the DeploymentConfig file, whether you edited the DeploymentConfig file or the external script file.
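The full remove and re-add cycle looks something like this (a sketch; the package name and paths are placeholders):

# Re-reading an edited DeploymentConfig file requires removing and re-adding the package.
Unpublish-AppvClientPackage -Name "MyApp"
Remove-AppvClientPackage -Name "MyApp"
Add-AppvClientPackage -Path "\\server\share\MyApp.appv" -DynamicDeploymentConfiguration "\\server\share\MyApp_DeploymentConfig.xml"
Publish-AppvClientPackage -Name "MyApp"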

Issue #6: Clients never forget
The theory that you can just unpublish and remove the package and re-test is invalid. Sometimes it will work, but sometimes not. Our mystery 0x8AD error made this painfully clear. The client does not clear everything out. And if you had any unbalanced bad scripting, it gets worse.

When I add something in a publish-time script, I always add a script to remove it at unpublish. Same with Add and Remove. But if one of those scripts has an error, you are now out of balance (often unnoticed when it is the undo script, as the default there is Rollback=false). And this will affect subsequent tests. So remember to revert the client VM now and then to get rid of ghosts of the past.

Issue #7: Always edit a copy
To the best of my knowledge, the 0x8AD error we had in the class happened due to an editing mishap that placed something in the XML that the client didn't like when we published. What that was, I don't know. We looked and looked. We copied and pasted. Sometimes we saw the error, sometimes we didn't. We left believing that the new release probably contained a new scripting bug that only appeared at random.

In the calm of my own lab, I found that wasn't true. I still don't know exactly what was wrong with the DeploymentConfig file. But the student had followed my advice and made a backup copy of the original file before starting the edit. I was able to wipe out the badly edited version causing the problem with a pristine copy of the original, edit it, and get everything to work perfectly.

The only problem with making a copy is where to put it. If you deploy using SCCM, you definitely don't want it in the same folder as the App-V package, so I'd recommend dropping it in a subfolder in that case; otherwise, I just copy and paste directly in the same folder.

Conclusions
When scripting, make the backup of the Config file before editing. If you have a problem, don't try to quick-fix it. Be methodical and try not to jump to conclusions, or you'll just make things worse. When in doubt, revert everything and start over.

Maybe someday the App-V team will give us a real editor for scripting and prevent this nonsense. I know a dozen people that agree with me on this now.