Active Directory, Powershell

PowerShell, ActiveDirectory and the elusive Filter parameter

When searching for users in Active Directory using PowerShell, the ActiveDirectory module is often one of the first things that comes to mind. The module has been around for quite some time now, but there is one thing many users still stumble over: the Filter parameter. There are basically three methods for searching for a user with Get-ADUser.

tl;dr: This article explains how to use the -Filter parameter when searching AD. If you just want the answer, skip down to the Solution.

Using the Identity parameter

When searching for a specific user where a key property is known, the Identity parameter works well. It accepts the value for one of the following properties:

  • DistinguishedName
  • GUID (objectGUID)
  • SID (objectSID)
  • SAM AccountName (sAMAccountName)
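
For example, a lookup by sAMAccountName could look like this (the account name below is made up):

Get-ADUser -Identity 'jdoe'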

Using the LDAPFilter parameter

The LDAPFilter parameter accepts an LDAP query string. The syntax for LDAP queries is quite comprehensive and well documented. However, getting started with LDAP filters can be a bit of a struggle and involves understanding bitflags to filter on some properties, e.g. enabled users.

The example below shows how to find all enabled users with an LDAP filter. Not something that is easy to understand for someone not used to LDAP filters.

Get-ADUser -LDAPFilter '(!userAccountControl:1.2.840.113556.1.4.803:=2)'

Using the Filter parameter

The third option when searching AD with PowerShell is to use the Filter parameter. This parameter accepts a filter written in the PowerShell Expression Language, a syntax most PowerShell users should be very comfortable with. But nonetheless, this is something I often see people struggle with.

When using a new command for the first time, most users turn to Get-Help. Let’s have a look:
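
Get-Help -Name Get-ADUser -Full   # the full help includes the parameter details and the examples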

So far so good, but let's have a look at one of the examples a bit further down:

Get-ADUser -Filter {mail -like "*"}

This is where it usually goes wrong. The Filter parameter accepts only a string, yet in this example we're giving it a script block? In this specific case that works, but we need to really understand why it works before we can use this technique. Let's start from the beginning.

Parameter type

Each parameter on a command has a specific type, which means that PowerShell will convert the input given to the parameter to that object type. This conversion is called casting. Let's have a look at the Filter parameter again:

The help shows that the Filter parameter is of type String, meaning that before Get-ADUser even gets the filter we have specified, PowerShell will convert it to a string (cast it to a string). We can actually perform that conversion ourselves to see what Get-ADUser will see. This is how:

[string] {mail -like "*"}
No surprises there. We get a string that contains 'mail -like "*"'. But quite often we want to search for a specific mail address, an address stored in a variable. Let's try again:

$mail = 'simon@domain.com'
[string] {mail -like $mail}
This is where things usually go wrong. If we convert a script block to a string, variables won't be replaced with their values, so we will search for a user that has the mail address $mail, which isn't even a valid email address! We probably won't find anything.

Solution

Even if it sometimes works to use a script block as the value for the -Filter parameter, make a habit of always using a string. It will be converted to a string anyway. If you are uncertain how to build the string, here are a few starting tips:

  • Use double quoted strings. Using double quotes (") around the filter will expand any variable typed inside the string.
  • Use single quotes (') around any value in the filter. Inside a double quoted string, single quotes can be used just like any other character.
  • If in doubt, save the filter in a variable and write it to the screen.

Let’s have a look at an example using these tips.

$Name = 'Simon'
$Filter = " Name -Like '$Name*' "
Write-Verbose -Message "Using filter: [$Filter]" -Verbose
$User = Get-ADUser -Filter $Filter
Here I have the variable $Name containing a name I want to search for. In a real world scenario this variable is probably assigned a value somewhere higher up in the script.

On the second line I create a filter string and assign it to the variable $Filter. I've added an extra space at both the start and the end of the string just to make it easier to read; these won't make a difference. Note the single quotes around the value.

Line three writes a verbose message showing my filter in clear text; this way I can make sure that the filter looks as I intended. Once I'm done with my script I can remove -Verbose from Write-Verbose and the message will only show if my script is run with -Verbose.

Line four runs Get-ADUser with my filter and assigns the result to $User. Job done!

 

Searching for objects in Active Directory using PowerShell is really simple, but unfortunately there are some examples out there that might throw you onto the wrong path. If in doubt, stay with the three simple tips and you're off to a good start.

AzureAD, O365, Powershell

Managing licenses with AzureAD V2 PowerShell module

On November 17th, a new version of the AzureAD PowerShell module was released to the gallery. It can be found here: https://www.powershellgallery.com/packages/AzureAD/2.0.0.30

In the old MSOnline module there were two commands used to change the licenses assigned to a user. First we had Set-MsolUserLicense, where the parameter ObjectID or UserPrincipalName could be combined with AddLicenses, RemoveLicenses and LicenseOptions. Second, we had New-MsolLicenseOptions, which would create a license options object.

In the new AzureAD module there is still a command called Set-AzureADUserLicense, but it has only two parameters, ObjectId and AssignedLicenses. There is no command to create license options.

Now this can be a bit confusing for the inexperienced PowerShell user, but fear not, we'll get to the bottom of this!

Let’s start by looking in the help document:

[screenshot: help for the AssignedLicenses parameter of Set-AzureADUserLicense]

This gives us a hint to use New-Object to create an AssignedLicense object. Let's try that out!
In my test I used Get-AzureADSubscribedSku to get a list of licenses available to me and found the SkuID of POWER_BI_STANDARD to be 'a403ebcc-fae0-4ca2-8c8c-7a907fd6c235'; replace that with the SkuID of your license.
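
A first attempt might look like this (the user identity below is a placeholder):

$license = New-Object -TypeName Microsoft.Open.AzureAD.Model.AssignedLicense
$license.SkuId = 'a403ebcc-fae0-4ca2-8c8c-7a907fd6c235'
Set-AzureADUserLicense -ObjectId 'user@domain.com' -AssignedLicenses $license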

This throws an error complaining about the type of the value we passed to AssignedLicenses.

The error tells us that the object expected by the parameter AssignedLicenses is not actually an AssignedLicense object but an AssignedLicenses object. Let's create one of those and see how it looks:
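
New-Object -TypeName Microsoft.Open.AzureAD.Model.AssignedLicenses | Get-Member -MemberType Property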

This tells us that we have two properties, AddLicenses and RemoveLicenses. AddLicenses is a list of AssignedLicense objects; this is where our object goes. Let's try this again:
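
# Wrap the AssignedLicense object in an AssignedLicenses object this time
$license = New-Object -TypeName Microsoft.Open.AzureAD.Model.AssignedLicense
$license.SkuId = 'a403ebcc-fae0-4ca2-8c8c-7a907fd6c235'
$licenses = New-Object -TypeName Microsoft.Open.AzureAD.Model.AssignedLicenses
$licenses.AddLicenses = $license
Set-AzureADUserLicense -ObjectId 'user@domain.com' -AssignedLicenses $licenses   # placeholder user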

This seems to be working! My user now has a PowerBI Standard license. But I also had a PowerBI Pro license from before; let's see if we can update the code to also remove that one. Note that the property RemoveLicenses on our AssignedLicenses object above is a list of strings. My PowerBI Pro license has a SkuID of 'f8a1db68-be16-40ed-86d5-cb42ce701560'.
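
Updated along these lines (again with a placeholder user):

$license = New-Object -TypeName Microsoft.Open.AzureAD.Model.AssignedLicense
$license.SkuId = 'a403ebcc-fae0-4ca2-8c8c-7a907fd6c235'
$licenses = New-Object -TypeName Microsoft.Open.AzureAD.Model.AssignedLicenses
$licenses.AddLicenses = $license
$licenses.RemoveLicenses = 'f8a1db68-be16-40ed-86d5-cb42ce701560'   # SkuID string, not an object
Set-AzureADUserLicense -ObjectId 'user@domain.com' -AssignedLicenses $licenses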

Works like a charm! And my Office 365 E1 license that was assigned from the beginning is still there and wasn't touched at all. By using Get-Help and Get-Member we have now figured out how to use Set-AzureADUserLicense in the new AzureAD PowerShell module, and we've learned a few things:

  • Parameter AssignedLicenses takes an object of type Microsoft.Open.AzureAD.Model.AssignedLicenses
  • This object has two properties, AddLicenses and RemoveLicenses
  • AddLicenses takes a list of Microsoft.Open.AzureAD.Model.AssignedLicense objects.
  • RemoveLicenses takes a list of strings containing SkuIds
  • Already assigned licenses that are not part of either AddLicenses or RemoveLicenses are left untouched

 

Active Directory, Powershell

Test GPO read permission with PowerShell (MS16-072 – KB3159398)

Last Patch Tuesday, Microsoft released Security Bulletin MS16-072. This update changes how Group Policies are downloaded to a computer, which might cause GPOs to fail to apply.

A common symptom for this is that users no longer get their drives mapped by GPO.

To quote the KB-article KB3159398:

This issue may occur if the Group Policy Object is missing the Read permissions for the Authenticated Users group or if you are using security filtering and are missing Read permissions for the domain computers group.

Group Policies not working can be a big problem, but fear not, there is of course a solution. Once again quoting the KB article:

To resolve this issue, use the Group Policy Management Console (GPMC.MSC) and follow one of the following steps:

– Add the Authenticated Users group with Read Permissions on the Group Policy Object (GPO).

– If you are using security filtering, add the Domain Computers group with read permission.

-“But what if I have hundreds of GPOs? That might take me all day!”

-“Not if you use PowerShell!”

Get permissions on a GPO with PowerShell

First off, we need to get a list of all GPOs in our domain. To do this we need the module GroupPolicy, which comes with the RSAT tools for Group Policy. I make sure I have the module by running the following command:

Get-Module -Name GroupPolicy -ListAvailable

Now I can list all GPOs in the domain with the command Get-GPO and the parameter -All.

To get the permissions of a GPO I use the command Get-GPPermission with the parameters Guid (the Id of the GPO), TargetType (get permissions for a User, Computer or Group) and TargetName (the name of the principal to get permissions for). In this scenario I want to make sure that the group Authenticated Users has read access to all my GPOs, so that will be my target, and since I want the permissions from all GPOs I will have to loop through them, running Get-GPPermission once per GPO. A sketch of that loop:
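
Get-GPO -All | ForEach-Object {
    Get-GPPermission -Guid $_.Id -TargetName 'Authenticated Users' -TargetType Group
}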

This would give me the permissions for Authenticated Users on all GPOs, and on some occasions I might get an error stating that the GPO does not have the security principal that I asked for, meaning that Authenticated Users does not have access to that GPO. Aha! This is what I'm looking for, isn't it?

If I wrap my Get-GPPermission command in a try block and add -ErrorAction Stop to make all errors terminating, and thereby catchable, I have a good start.

Once I have the permissions I want to make sure that Authenticated Users actually has either Read or Apply rights (Apply also implies Read according to the documentation on MSDN).

If the command Get-GPPermission throws an error, or the permissions do not contain Read or Apply, we will want to make sure the GPO gets the right permissions set, so we return the GPO object.

This resulted in code looking something like this (a sketch):
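
$GPOs = Get-GPO -All
foreach ($GPO in $GPOs) {
    try {
        $Permission = Get-GPPermission -Guid $GPO.Id -TargetName 'Authenticated Users' -TargetType Group -ErrorAction Stop
        if ($Permission.Permission -notmatch 'GpoRead|GpoApply') {
            # Authenticated Users has an entry, but neither Read nor Apply
            $GPO
        }
    }
    catch {
        # Authenticated Users has no permission entry at all on this GPO
        $GPO
    }
}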

Now I realize that this is far from perfect; it will give me a false positive on all GPOs where Authenticated Users has Custom permissions, but it will give me a list of GPOs with a potential problem.

You could do this a lot better by using the ActiveDirectory module and Get-Acl to get the actual ACL of the GPO, check for more than one group (you could potentially want to give the group Domain Computers access instead) and even use Set-GPPermission to automatically modify the permissions on any faulty GPO. If you do any of this, write a blog post about how you did it and let me know!

Powershell

Keeping my GitHub Forks up to date

In a previous post I wrote about how I updated the command Update-AzureRM to only update modules that have a newer version available, instead of downloading and overwriting modules that are already up to date. Unless I use the -Force parameter, of course.

Once I had made the change and was satisfied, I thought that maybe someone else might benefit from this update, and since the AzureRM module is available as an open source project on GitHub I requested to have my changes included in the official version of the module. This is how I did it.

To perform the following steps you need to have an account on GitHub and a git client installed. The fastest way to achieve this is to register a new account and install “GitHub for Desktop” from: https://desktop.github.com/ It also requires some basic knowledge of git; for an introduction you can look at my session from PowerShell Summit EU 2015, which is available on YouTube.

Now let’s dive right in!

Fork – Creating my own copy of a repository

The first thing I have to do when I want to contribute to a repository on GitHub which I don’t have write access to is to create a copy of the repository on my GitHub account. Such a copy is called a Fork. In my case I browsed to the Azure-PowerShell project on GitHub (https://github.com/Azure/azure-powershell) and clicked on the Fork button in the upper right corner of the screen:

[screenshot: the Fork button on GitHub]

Clicking on the Fork button will create a copy of the repository on my account and take me to this copy.

Clone – Downloading a copy

Once I have my own copy of the project where I can make changes, I need to download the project to my local machine. The process of downloading a copy of a repository is called cloning. To clone a repository I first need to find the URL to clone from. This can be found in a textbox in the upper part of the page, together with a clipboard button. Click the button once to mark the text and once more to copy it to the clipboard.

[screenshot: the clone URL textbox and clipboard button]

When I have the URL in my clipboard I open my Git Shell and navigate to the folder where I want the module to be downloaded. Then I clone the repository by typing:
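
git clone https://github.com/MyUser/azure-powershell.git   # MyUser is a placeholder for your own account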

Add upstream – Get updates from original repository

Now that I have a local copy of my fork I want to make sure that my copy stays up to date with the original repo. This is extra important if I intend to work on something over a longer period of time, but I try to make this a good habit. To make sure that I can get updates from the original repository I need to link my local clone to it. To do this I add something called a remote to my local repository. I can view all remotes currently configured by running the following command:
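
git remote -v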

Running this command shows me that I have one remote called origin that is set up for both fetch and push. Origin is the default name used for the repository I cloned from (in this case my fork on GitHub). Now I want to add another remote, and I'm going to call that remote upstream. To do this I take the URL of the original repository and simply use this command to add that URL as a remote:
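
git remote add upstream https://github.com/Azure/azure-powershell.git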

Update my local copy

The local copy of the repository (the copy on my machine) now has two links (remotes): one referring to the copy on my GitHub account, called origin, and one referring to the original version on GitHub, called upstream. To download updates made to the original version I use the command fetch, telling it to fetch from the remote named upstream.
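
git fetch upstream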

This will update the information my computer has about the remote but will not do anything with my local copy. Now I can compare changes between my local copy and the upstream remote using git diff.
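
git diff upstream/master   # assuming the original repository's default branch is master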

Git diff is used to compare differences between files and by default it opens a text viewer looking very much like “less” on Linux. Here I can use the arrow keys and PageUp/PageDown to navigate the text and hit ‘q’ to quit. I can also do some more advanced operations like searching; for full help, press ‘h’. If I want to see the changes for a single file, that is easily done by appending the file path to the previous command, for example:
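
git diff upstream/master -- README.md   # README.md is just an example path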

Once I'm done comparing the upstream remote to my local repository I can bring all the changes from upstream into my current branch by using the merge command.
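
git merge upstream/master   # again assuming the branch is master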

Update my copy on GitHub

Now my local repository should be ahead of my origin (my copy on GitHub). To send the new updates (if there were any) to my copy on GitHub I simply do a push.
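
git push origin master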

If origin is my default remote I don’t need to name it here, but I guess it doesn’t hurt to be extra clear about where I want to push. This is the process I use to keep my forks up to date. Please feel free to leave a comment if you do it in any other way.

 

General, Powershell

Real world DevOps training


I'm an Ops guy. I have a history of working in IT operations and I think of myself as a “technician” or “ITPro”. To be honest, I don't know what to call it, but my point is that I do not have a background as a programmer. I've been talking with other Ops people about integrating and collaborating with developer teams for quite some time now, trying to make others understand the benefits they can both give and receive when collaborating with developers, and trying to convince operations teams that we have lots and lots to learn from developers. And that is getting more true every day. We are suddenly not just given the tools to use code, but also expected to manage a bunch of things “as code”. PowerShell, Desired State Configuration and cross platform infrastructure all make us more or less forced to embrace the mindset of programmers.

For me that's what DevOps is all about: trying to build a better understanding between Dev and Ops in a better-together kind of mindset. A few months ago a developer friend of mine told me about a conference called Swetugg, a conference for .NET developers in Stockholm, Sweden. This friend didn't just talk me into attending the conference, he suggested I submit a session abstract about PowerShell. Said and done, I went straight home to write an abstract, and it was accepted. I was about to give a talk at a conference for developers!

The first slide in my presentation.

I've just gotten home from that very conference and it has been a great learning experience. I can willingly admit that a lot of the content from the sessions passed far above my head, but I learned something new in every session I went to. And that was just the start of it! Spending two full days with developers, making new contacts, learning about the obstacles they encountered and how they made their way past them was a great experience! Not to mention being able to share stories about obstacles I've encountered and getting their view of the problems. I learned even more just by speaking to the attendees than I did in the actual sessions. And as a third bonus, I got to meet and talk with a whole bunch of great speakers.

If there is such a thing as DevOps training, I would say this is just that. If you are an “Ops”, try to attend a “Dev” conference, a dev meetup or a user group meeting! And if you are a “Dev”, try to get in contact with a few “Ops” in the same way. I think we have a great deal to learn from each other.

Oh, my presentation? Well, no one threw rotten tomatoes, some attendees asked me questions afterwards, and I learned a great deal about presenting. Now I can't wait for PowerShell Conference Europe in April, which will be the next time I give a talk.

Powershell

Updating AzureRM only when needed

I stumbled upon a great post by Ian Farr the other day about how to Automagically Keep the Azure PowerShell Module Up-To-Date. In the post Ian tells us how he keeps his help files and Azure modules up to date by starting a background job from his profile script. At the end, Ian mentions that he recently added the command Update-AzureRM to his job and that it updates the AzureRM modules every time, even if he already has the latest version.

I've run the Update-AzureRM command a few times and noticed the same frustrating fact: it takes almost 40 minutes to run even if all my modules are up to date!

[screenshot: running Update-AzureRM takes 38 minutes]

My general solution to this has been to run Update-Module -Name Azure* and just update all modules with a name beginning with Azure (this also keeps AzureAutomationAuthoringToolkit up to date). But this time I got curious and thought 'Wonder if I can speed things up with Update-AzureRM?'

Examining the command

The first thing I do is examine the command using Get-Command, like this:
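
Get-Command -Name Update-AzureRM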

This tells me that the command is actually a function that is part of the module AzureRM. Let's have a look at that module:
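
Get-Module -Name AzureRM -ListAvailable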

It turns out this is a script module written in PowerShell, so I might be able to do something about it.

Update-AzureRM

Opening up the function I can see that Update-AzureRM changes the repository policy, calls Install-ModuleWithVersionCheck once for each module that should be updated, and then sets my repository policy back to whatever it was before. This leads me to believe that the problem with updating lies within Install-ModuleWithVersionCheck, which happens to reside in the same psm1 file.

Install-ModuleWithVersionCheck

This function searches for the specified module on my computer. If it is installed, it sets the variable $ModuleAction to “updated” and runs Install-Module using the parameter -Force; if it is not already installed on my computer, it sets $ModuleAction to “installed” and runs Install-Module without -Force.

Problem found!

I think I've found the problem here. Let's look up the parameter Force in the help for Install-Module:

Get-Help -Name Install-Module -Parameter Force
[screenshot: help for parameter -Force on Install-Module]

It seems like the Force parameter is only necessary if I want to overwrite a module of the same name and version. My own testing with the parameter also shows that using -Force will override the confirmation message when the InstallationPolicy for a repository is set to Untrusted (as reported on UserVoice). This leads me to conclude that removing -Force from Install-Module would prevent it from downloading and reinstalling any module that is already up to date. I only want to forcefully download and install a module if I use the parameter -Force on Update-AzureRM.

To change the behavior I first add a Force parameter to Install-ModuleWithVersionCheck and call it from Update-AzureRM using -Force:$Force. This passes the Force parameter through to Install-ModuleWithVersionCheck.

The second step is to do the same thing for Install-Module within Install-ModuleWithVersionCheck.
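
In sketch form, the pattern looks something like this (simplified; the real function has more parameters and its version-check logic):

function Install-ModuleWithVersionCheck {
    param(
        [string]$Name,
        [switch]$Force   # new parameter, passed through from Update-AzureRM
    )
    # ... existing version check logic ...
    Install-Module -Name $Name -Force:$Force
}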

To make sure it works as intended I run the command Update-AzureRM again, and now it takes 2 minutes instead of 38. And all I had to do was make a small change to five lines! While I was at it I also added a help description for -Force on Update-AzureRM.

[screenshot: Update-AzureRM now only takes two minutes]

Since the AzureRM PowerShell module is open source and published on GitHub, I submitted an issue describing the problem with forced updates and my suggested change to the code. Hopefully you will all have this new functionality in the next release.

In the next post I will describe the process of submitting suggested changes to an open source project.

 

Uncategorized

Lumia 950 XL – My first impression

First in line

Last Wednesday, I attended the Microsoft Lumia release event in Stockholm, Sweden. I showed up roughly an hour before the event opened and happened to arrive at the exact same time as the first Lumia fans. Except for some light rain, we had a great time sharing experiences of using Windows Phone and Windows 10 Mobile as Insiders. Just after 7 PM the doors were opened and the store quickly filled with people buying Lumia phones. There were also some wraps, beers and sodas, but who had time for that when new hardware was being handed out?

I ordered a white Lumia 950 XL, paid the SEK 6995 and was handed two precious little parcels, one bearing the text Microsoft Lumia 950 XL and the other Microsoft Display Dock (HD-500). Now there was no time to lose, I had to get out of there and find somewhere to unbox my new devices!

Lumia 950 and Display Dock boxes

Setting the stage – my expectations

But before I tell you about my experience with the new Lumia flagship phone, I want to set the stage by telling you a little bit about my previous phones and how I use them. I'm not really a phone geek, but I spend a lot of time using my phone each day. I commute roughly an hour in each direction every day and I try to be as productive as possible. I rarely miss an opportunity to start my laptop and do some work while on the bus, but it's often too crowded for me to be comfortable using my laptop, so I often end up using my phone instead. I read up on all things that interest me, listen to podcasts, watch videos on Microsoft Virtual Academy or YouTube, read and write email, stay up to date with Twitter and also read (and sometimes comment on, or at least take notes on) documents for work. One of the most used apps on my phone is OneNote; it is just great for typing down abstracts and notes for blog topics, PowerShell script ideas or just planning my day. That being said, I use my phone a lot, but I'm not a heavy app user. As long as I have the Office suite, a decent browser and some audio/video streaming apps for YouTube and the like, I'm happy. I was a loyal iPhone user from the 3G up to the 4S but replaced my iPhone with a Nokia Lumia 925 in 2013 and haven't looked back since. Sadly I accidentally crushed my 925 early this summer and replaced it with a Lumia 735 while waiting for the next high-end model from Microsoft.

Newly unboxed Lumia 950, photo taken on a Lumia 735

Build quality

The first thing I noticed when picking up the phone is how light and well balanced it is. After being accustomed to the quite small 735, I was worried that the extra inch of screen size would require me to hold the phone with both hands. So far I have not had any problems with that at all. The next thing I noticed is how well built the phone feels. I've read about the cheap plastic feel some people experience and I do not share their experience at all. The back of the phone comes off easily and clicks back in place with a perfect fit. The phone feels as well built as any.

New keyboard features

During the setup process I made good use of a nifty little new feature in Windows 10 Mobile: the arrow keys integrated into the keyboard. Just in between Z and X is a little blue dot. Touching the dot reveals four arrows that work in a joystick-like manner, letting me navigate through text. This is a feature I've really been missing in Windows Phone 8.1! Also note the graphics on the SPACE key below. Swiping on the space key lets me switch between my keyboard layouts, and pressing down on the space key and moving my finger up and down lets me move the keyboard around on the screen. Both are quite small details that really make a difference to the overall impression.

Pictures or it didn’t happen

Once set up, the first thing I tried out was the camera. The physical camera button is back again, and I didn't really realize how much I'd missed it until now. I pressed the camera button and, being used to the Lumia 735 where it takes around five seconds to start the camera and another few seconds to autofocus and take a blurry picture, I was amazed! I do realize I'm used to really low standards, but this camera is fast! Clicking the camera button twice will start the camera, autofocus and fire off a shot in under a second. OK, I didn't actually time it, but it really happens in no time at all. And the pictures look really good! I'm not an experienced photographer in any way, but this is a camera that can absolutely compete with my Sony NEX-6 for everyday photos.

Windows Hello with Iris scanner

Once I was satisfied with the camera I set out to explore the new Windows Hello feature that lets you unlock the phone by letting it scan your eyes. Sounds like science fiction? It's not. The first time I set up the iris scanner it asked me to hold the front facing camera up in front of my eyes. I had to hold it there for a few seconds while feeling mildly uncomfortable. The optimal distance seems to be just close enough to not really be able to focus on the screen without straining the eyes. Once the process was complete I tried to unlock my phone a couple of times, with varying results. After giving up, I went back into the menus for sign-in options and clicked the button that said “Improve recognition”. This let me scan my eyes over and over again in different light conditions and from different angles. After doing five or six extra scans I was satisfied and tried to unlock the phone again. Worked like a charm! And it kept working all day, it even worked in total darkness! I'm once again amazed! It took between one and three seconds for the phone to recognize me and unlock the screen. I probably unlocked it 50 times without any problems.

Then I got home and wanted to demonstrate my cool new phone to my wife. Suddenly it just wouldn't recognize me. I tried adding more scans but it didn't help. Tired and a little bit disappointed, I gave up for the night. The next day I still had the same problem, until I got my morning coffee. I'm not sure if it was my mood, the effect a strong cup of coffee has on the eyes in the morning, or just plain luck, but that day the phone kept recognizing me all day. I added a new eye scan every second hour or so until I went to bed. After that it has been working every time for the last four days. Once this started working I realized it's a feature I absolutely need on all my phones in the future!

Other details

Other details in Windows 10 Mobile include the ability to search in All Settings as well as in All Apps, a feature I've been missing in Windows Phone 8.1. The Office suite works as expected and the new Outlook mail and calendar apps are great! One problem in the calendar, however: when I enable week numbers, the header for months with long names pushes the Today button off screen, which is a bit annoying. The option to group tiles on the Start screen, just like I could on my old iPhone, is also a nice addition that I didn't realize I missed until now.

Continuum!

I've written this long post, which started with me buying a new phone with a display dock, without even mentioning the dock or the ability to connect the phone to an external monitor for a more computer-like experience. Shame on me! Of course I've tried the docking station! First, the unboxing. The box that the display dock came in is exactly the same size as the box for the phone. If I was surprised by how light the phone felt, I was really surprised when I realized how small and heavy the dock is. It fits in the palm of my hand and reminds me of a smaller version of an Apple TV. The dock is made of metal and is really heavy for its size. It has three USB ports, an HDMI port and a DisplayPort, in addition to the two USB Type-C ports, one for power input and one to connect your phone.

I connected a monitor using HDMI and a keyboard and mouse using USB. It should also work to connect a keyboard and mouse using Bluetooth, but I didn't have any to test with. Once connected to the dock, the phone opens an app that works as a touchpad, and the monitor looks much like it would on a laptop. Only universal apps can run on the monitor; everything else has to run on the phone. There are not many universal apps available at the moment, mostly the built-in apps, the Edge browser and the Office apps. The experience in Continuum is nice. I can't wait for universal apps for Citrix, VMware Horizon View and Remote Desktop; that will make my phone the ultimate thin client.

I've mainly been using Edge and OneNote in Continuum. Watching Netflix works like a charm in the browser, but HBO Nordic doesn't work since it requires Silverlight. OneNote works great; I've actually written this whole review switching between typing on the phone, typing on a USB keyboard in Continuum and dictating to the phone.

There are some things I do miss in Continuum, however. One is a virtual keyboard; not being able to type without connecting a physical keyboard is a problem if I want to use my phone to present at meetings, for example. Also, all apps in Continuum run in full screen, and I did not find a way to put two apps side by side. A last thing I dislike is that if I turn the screen off on my phone, the monitor also goes black. I don't know how high the risk of screen burn-in on the phone is, but I can hardly imagine that keeping it lit up with a static image for hours is healthy.

Browsing the Edge of the internet

Windows 10 Mobile also includes the Edge browser, which works quite all right. I've found one page that makes my phone reboot each time I visit it, which is quite annoying, and sometimes when I enter a page it takes a few extra seconds after the page has loaded until the keyboard appears when I click a text box. Otherwise I haven't really found any problems.

Battery

I've been using my phone quite heavily today: probably three hours of streaming podcasts, 90 minutes of watching video, syncing email all day, roughly an hour of talking in total, and keeping myself up to date on social media. I got a warning that the battery was down to 10% fifteen hours after I removed it from the charger. After charging for 40 minutes the battery indicator says 60% and the phone is burning hot. Quick Charge, or whatever they call the new USB Type-C feature, is really quick!

Summary

This ended up being a quite long post. All in all I'm very satisfied with my new Lumia 950 XL; it really is a great phone.

General, Powershell

Problem with Pending Reboot when using Desired State Configuration

At Knowledge Factory, the company I work for, everyone gets their own lab server. Nothing fancy, but it helps a lot when I want to test something in a controlled environment.

I've been playing around a bit with Desired State Configuration on my lab server lately, especially with the great module VirtualEngineLab, which I've been using to automatically build various scenarios. Each time I start a new build, the module uses the DSC resource xPendingReboot to check for pending reboots.

I got constant warnings about pending reboots and was advised to reboot my host before trying again. I didn't understand why I would need to reboot, so I looked into the resource and realized that my server had “PendingFileRename” actions in the folder C:\Windows\system32\spool. Now that's strange, my lab server doesn't even have a printer. Or does it?

It turns out that I had been sloppy and brought my client's local printers with me into the RDP session. Each time new printer drivers were installed, a pending file rename operation was created. To make sure this won't happen again, I'll disable the possibility to bring local printers to the server, also known as client printer redirection.

This can be done in one of two ways, either by GPO or by a setting in the registry. I chose to use the registry, and of course I used PowerShell to change the setting with a one-liner:
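
# fDisableCpm is the policy value behind "Do not allow client printer redirection";
# the key and value name below are an assumption, verify against the GPO reference
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services' -Name 'fDisableCpm' -Value 1 -Type DWord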

Setting a registry value is not really a valid option if you have hundreds of servers in a domain; then the GPO approach is the way to go. In Windows Server 2012 R2 the setting can be found in: Computer Configuration -> Administrative Templates -> Windows Components -> Remote Desktop Services -> Remote Desktop Session Host -> Printer Redirection -> Do not allow client printer redirection

If you're interested in knowing more about the VirtualEngineLab module and how to use it, Iain held a great session at PowerShell Summit Europe which is available on YouTube: https://www.youtube.com/watch?v=jefhLaJsG3E

The code Iain is using in the session is available on GitHub: https://github.com/iainbrighton/PSHSummit-Man-vs-Testlab

Deployment, General, Powershell

Running legacy VBScripts from PowerShell

VBScript can feel like a thing of the past, but the truth is a lot of companies have invested heavily in VBScript over many years. It can't all simply be translated to PowerShell overnight. One way to get started with translating VBScripts to PowerShell is to break the VBScripts up into usable parts. This way we can start translating the control scripts to PowerShell while keeping the VBScripts as they are, and then replace the VBScripts part by part.

The first step is to create a reliable and reusable way to invoke VBScripts from PowerShell. For this task I wrote Invoke-VBScript. The full script is linked at the end of this post; a minimal sketch of the idea (running cscript.exe in a background job) could look like this:
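
function Invoke-VBScript {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory, ValueFromPipeline, ValueFromPipelineByPropertyName)]
        [string]$Path,

        [Parameter(ValueFromPipelineByPropertyName)]
        [string[]]$Argument,

        [switch]$Wait
    )
    process {
        $script = (Resolve-Path -Path $Path).ProviderPath
        # Run the VBScript with cscript.exe in a background job
        $job = Start-Job -ScriptBlock {
            param($script, $arguments)
            & cscript.exe //NoLogo $script @arguments | Out-String
        } -ArgumentList $script, $Argument
        if ($Wait) {
            # Wait for the script to finish and return its output as a single string
            $job | Wait-Job | Receive-Job
            Remove-Job -Job $job
        }
        else {
            # Return a reference to the running job
            $job
        }
    }
}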

Invoke-VBScript can be run in three different ways:

  • With regular parameters
  • Accepting script path from pipeline (ByValue)
  • Accepting script path and argument from pipeline (ByPropertyName)

Regular parameters

Invoke-VBScript -Path .\vbscript.vbs -Argument 'Arg 1', 'Arg 2' -Wait

Value from pipeline (ByValue)

.\vbscript.vbs | Invoke-VBScript -Argument 'Arg 1', 'Arg 2' -Wait

Value from pipeline (ByPropertyName)

$VBScript = [pscustomobject]@{Path = '.\vbscript.vbs'; Argument = 'Arg 1', 'Arg 2'}
$VBScript | Invoke-VBScript -Wait

All three examples give the same output. As long as the Wait parameter is specified, the function waits for the script to complete and returns a single string containing the output from the VBScript. If the Wait parameter is left out, the function instead returns a reference to the started job.

The script is available for download from the TechNet Gallery.

Powershell

PowerShell functions and Parameter Sets

A PowerShell function can have different parameters depending on how it is called. This is called parameter sets. For example, Get-Process has a non-mandatory parameter called Name which specifies which processes to get by name. But it also has a parameter called ID which likewise specifies which processes to get, this time by ID. Both parameters exist but are mutually exclusive; you cannot use them both at the same time, since they are defined in two different parameter sets.

First some basics

A parameter set is defined in the [Parameter()] block of a Parameter.

For example:
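
function Test-MultiParamSets {
    [CmdletBinding()]
    param(
        [Parameter(ParameterSetName = 'Name')]
        [string]$Name,

        [Parameter(ParameterSetName = 'ID')]
        [int]$ID
    )
    # Write the name of the parameter set that was used
    $PSCmdlet.ParameterSetName
}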

This defines a function with two parameter sets, Name and ID. The parameter set Name has one parameter called Name, and the parameter set ID has one parameter called ID. This means that the parameters can't be used at the same time.

The function itself will write only the name of the current set to the pipeline. This is accessed by the property ParameterSetName on the automatic variable $PSCmdlet.

There are a few ways to investigate a command and see which parameter sets are available. The easiest way is to call Get-Command with the parameter -Syntax, like this:
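
Get-Command -Name Test-MultiParamSets -Syntax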

[screenshot: Get-Command -Syntax output showing the two parameter sets]

This shows each parameter set that can be used. The first set has one non-mandatory named parameter, Name, and the second has one non-mandatory named parameter, ID.

All is good as long as one of the parameters is specified, but if we try to run the command without specifying any parameter we will get an error stating:
Parameter set cannot be resolved using the specified named parameters.
This means that PowerShell did not know which parameter set to use and threw an error.

There are two ways to get around this. One is to make one, and only one, of the parameters mandatory. If Name is mandatory and the function is called without parameters, PowerShell will assume that the set is ID, since that is the only set without any mandatory parameters.

BUT there is a better way! By setting one parameter set as the default, PowerShell will know which one to use if more than one matches. This is done by simply adding this:

[CmdletBinding(DefaultParameterSetName = 'Name')]

One Parameter in multiple Sets

All right, so far things are fairly straightforward. What if I want one parameter to be a member of more than one set?
Then give it more than one [Parameter()] block!

Let's say that I want to be able to call the function using either only Name, or both ID and Name, but Name should be mandatory only in the first scenario.

Here is a new example:
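
function Test-MultiParamSets {
    [CmdletBinding(DefaultParameterSetName = 'Name')]
    param(
        [Parameter(ParameterSetName = 'Name', Mandatory = $true)]
        [Parameter(ParameterSetName = 'ID')]
        [string]$Name,

        [Parameter(ParameterSetName = 'ID')]
        [int]$ID
    )
    $PSCmdlet.ParameterSetName
}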

Now if I run just Test-MultiParamSets without parameters, PowerShell will use the default set I specified, which is Name, and prompt me for a Name since it's mandatory.

If I on the other hand run Test-MultiParamSets using the parameter ID, I will not be prompted for a name since it is not mandatory in this parameter set.

[screenshot: Test-MultiParamSets run with -ID, no prompt for Name]

Running Get-Command -Syntax again will now show this:

[screenshot: Get-Command -Syntax output, Name mandatory only in the first set]

We can clearly see that the parameter Name is only mandatory in the first set.

Are you still awake? Awesome! Then let's go one sprint further down the rabbit hole!

Using AllParameterSets

Now I'm starting to get quite satisfied with my function, but after using it regularly I get tired of always typing -Name. I want the parameter Name to always have position 0, meaning that the first unnamed parameter value will always belong to the parameter Name (as long as the parameter hasn't already been assigned a value using -Name).

One way to do this is to add a new [Parameter()] block with Position = 0 in it. Since this block doesn't have any ParameterSetName in it, it will apply to all the sets.
Here is the example code again, updated once more:
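
function Test-MultiParamSets {
    [CmdletBinding(DefaultParameterSetName = 'Name')]
    param(
        [Parameter(ParameterSetName = 'Name', Mandatory = $true)]
        [Parameter(ParameterSetName = 'ID')]
        [Parameter(Position = 0)]
        [string]$Name,

        [Parameter(ParameterSetName = 'ID')]
        [int]$ID
    )
    $PSCmdlet.ParameterSetName
    "Name: [$Name]"
    "ID: [$ID]"
}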

Now I can run Test-MultiParamSets and it will accept Name as a positional parameter. To make it more clear which parameter has which value, the function now also prints the value of each parameter.

[screenshot: Test-MultiParamSets accepting Name as a positional value]

Let’s get back and have a look at the Syntax from Get-Command:

[screenshot: Get-Command -Syntax output, Name still shown as a named parameter]

Here is where things get a little bit messy. Get-Command still shows the parameter Name as named and not positional!

The caveat here is that the properties in my third [Parameter()] block are actually not added to any of my named parameter sets but to a new set called __AllParameterSets.

To see this we can use the object returned by Get-Command. It has a property called Parameters, which is a hashtable with one key for each parameter. If we look only at the parameters Name and ID (the function also has a bunch of common parameters):
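
# Each parameter's metadata lists the sets it belongs to
$cmd = Get-Command -Name Test-MultiParamSets
$cmd.Parameters['Name'].ParameterSets.Keys
$cmd.Parameters['ID'].ParameterSets.Keys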

[screenshot: the parameter sets of the parameters Name and ID]

This shows that the parameter Name is part of three parameter sets, even though only two are shown by Get-Command -Syntax.

The parameter set __AllParameterSets is a hidden parameter set whose properties are merged into all the other sets. My experience is that if a property is set in a named parameter set, that property wins; otherwise the property from __AllParameterSets is used. However, setting a property to $false counts as it not being set, meaning that if Mandatory is True in __AllParameterSets and False in both the set Name and the set ID, True wins.

A screenshot to summarize:

[screenshot: the parameter sets Name, ID and __AllParameterSets side by side]

Note here that the parameter Name is named (Position is not set) in the sets Name and ID, but positional in __AllParameterSets, meaning that __AllParameterSets wins and the parameter is positional.

Any property applied to a parameter set from __AllParameterSets is not shown when using Get-Command -Syntax. Name shows up as a named parameter for both sets in the screenshot above.

Workaround

To make Get-Command show the correct syntax, don't use __AllParameterSets; make it a habit to add every property you want in a set to that set, even if you have to add it multiple times. I also find the code much more readable this way.

Here is the code once more, written the way I think is best practice:
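
function Test-MultiParamSets {
    [CmdletBinding(DefaultParameterSetName = 'Name')]
    param(
        [Parameter(ParameterSetName = 'Name', Mandatory = $true, Position = 0)]
        [Parameter(ParameterSetName = 'ID', Position = 0)]
        [string]$Name,

        [Parameter(ParameterSetName = 'ID')]
        [int]$ID
    )
    $PSCmdlet.ParameterSetName
    "Name: [$Name]"
    "ID: [$ID]"
}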

Now each set has the property Position = 0.

Running Get-Command -Syntax again will show the correct syntax:

[screenshot: Get-Command -Syntax output, Name now positional in both sets]

If you read this far, thank you! And please leave a comment.