Aug 13 2012
 

When building a machine via a task sequence using either SCCM or MDT, there is no easy way to tell whether a program has installed successfully. There is the option to halt the task sequence when an error occurs, but this isn't always desirable; you don't always want a minor application halting the whole build.

Let's say that you want to make sure application A is installed before applications B and C get installed. We can always look at the file system to check whether the files or folders are there, but this doesn't guarantee that the application installed correctly; it might have copied the files but not added any registry settings or properly registered DLLs.

A better way is to look at the exit state, which the task sequence engine records in the registry. For this we can turn to PowerShell. The check can be done with a script that is pretty straightforward, with only one tricky bit thrown in for fun.

The property that contains the status of a package is the _State value, which resides in this location:

HKLM\Software\WoW6432Node\Microsoft\SMS\Mobile Client\Software Distribution\Execution History\System\s0100100\GUIDLIKEKEY\_state

For some reason the _State value resides in a key that has a random GUID-like name… not sure why, but it does. This means the key name cannot simply be hard-coded into a script. Thankfully PowerShell gets around this with two cmdlets, Get-ChildItem and Get-ItemProperty. Using these we can find out what the GUID key name is at run time, which then lets us check the value of _State.
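
The script itself isn't reproduced here, but a minimal sketch of the idea, assuming a package ID parameter and simple exit codes (both my own additions), looks something like this:

param($PackageID)

# Execution history for packages run as the System account (path as shown above)
$base = "HKLM:\Software\WoW6432Node\Microsoft\SMS\Mobile Client\Software Distribution\Execution History\System\$PackageID"

# The history sits under a randomly named GUID-like subkey, so discover it at run time
$guidKey = Get-ChildItem $base | Select-Object -First 1
$state   = (Get-ItemProperty -Path $guidKey.PSPath -Name '_State').'_State'

# 'Success' is the value I would expect for a successful install; adjust the exit codes to taste
if ($state -eq 'Success') { exit 0 } else { exit 1 }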

This script can be run from a task sequence command line using the following arguments to supply the PackageID.

Powershell.exe Registrycheck.ps1 'S0100100'

 

Now that we have this information we can use it to create task sequence variables and perform additional checks while the task sequence is running. For information on how this works, and a script that can be modified, check out this post: http://www.industrialarcservices.com.au/2012/07/03/script-to-clean-up-statestore-folder-only-if-load-state-was-successful/

If you have any issues please feel free to comment or send me an email – Martin @Industrial Arc Services com au

 

 

NOTE:

Only packages create these registry keys. Anything executed from a command-line step will not create them.

Jul 03 2012
 

First and foremost I want to say thank you to Dylan at Virtual Technology Solutions for helping me with all my PowerShell questions. Browse his site and use his very cool PowerShell scripts, but maybe not the evil ones :)

I re-image machines virtually all day, every day, but these are development and testing machines, and most of the time they are virtual machines. This means there is no data on them to lose, so I throw caution to the wind and re-image without care.

This carefree attitude comes to a grinding halt when actual users become involved. Whether they are testing, pilot or standard users, I get very paranoid about losing anything, which makes me double and triple check before deleting anything. The paranoia gets even worse when I have to rebuild the machine of my wife, who is a professional photographer; let's not go down that rabbit hole…

This is where USMT comes in really handy. If you use USMT with the /nocompress switch it offers an easy fail-safe in case the loadstate process doesn't run successfully. This fail-safe takes the shape of the UserStateStore folder (or, if you are using the MDT toolkit, OSDStateStore). This unassuming folder has saved me quite a few times from having to say the dreaded sentence 'Sorry, but everything is gone'.

One of the other switches you can use with USMT 4.0 is /hardlink, which will dramatically reduce your scanstate and loadstate times. One downside, however, is that you will need to remove the state store directory before you can re-run a build; if the folder is present, the build will fail the next time scanstate runs. The solution is to delete the folder, but this isn't straightforward, as the aforementioned hard links can cause files to remain locked, and if you just delete the folder as normal you also run the risk of corrupting the user's data. This is why USMTUtils.exe has to be used to delete the folder without any adverse impact on the user's data.

USMTUtils.exe is a command-line program included as part of USMT 4.0. There are many ways USMTUtils can be used to remove the state store folder:

  1. Manually, via an administrator's command prompt.
  2. Post-build, using a DCM and a batch file.
  3. As part of the build task sequence, as a pre-build task.
  4. As part of the build task sequence, run after loadstate.

This post will cover the 4th option: running USMTUtils.exe after, and only if, loadstate has been successful. The easiest way to run USMTUtils.exe is to set your task sequence step to download and execute, and run a command line of:

Echo Y | usmtutils.exe” /rd %OSDSTATESTORE%

For me this has one downside and one MASSIVE problem: the step cannot be set to run from the distribution point, which is my preferred setting, and, more importantly, there are no checks to ensure that loadstate ran successfully and that all the user's data is where it should be.

This is where a PowerShell script, a batch file, task sequence folders and a variable come to the rescue.

First the PowerShell script.

The script is pretty straightforward: it looks at the last line of the loadstate progress log for the words "Successful run". If they are present it creates a task sequence variable, USMTLoadStatSuccess, and sets it to 'True'; otherwise it sets it to 'False'.
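
The script isn't shown here, but a minimal sketch of that logic, assuming the loadstate progress log lands in the location below (adjust the path to wherever your task sequence writes it), looks like this:

# Hypothetical log location; point this at wherever loadstate writes its /progress log
$logFile  = 'C:\Windows\Temp\loadstateprogress.log'
$lastLine = Get-Content $logFile | Select-Object -Last 1

# Task sequence environment COM object, available while the task sequence is running
$tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment

if ($lastLine -match 'Successful run') {
    $tsenv.Value('USMTLoadStatSuccess') = 'True'
} else {
    $tsenv.Value('USMTLoadStatSuccess') = 'False'
}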

Now the batch file.

This is a simple copy and execute affair.
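
Something along these lines, assuming USMTUtils.exe sits next to the batch file in the package and using the %OSDSTATESTORE% variable from the command line above:

REM Copy USMTUtils.exe locally, then use it to delete the state store folder
xcopy "%~dp0usmtutils.exe" "%TEMP%\" /y
echo Y | "%TEMP%\usmtutils.exe" /rd "%OSDSTATESTORE%"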

Now to bring these scripts together into something useful.

The thing that ties these two steps together is the creation of a variable called 'USMTLoadStatSuccess'. This variable gets a value of True or False depending on whether loadstate successfully restored the user's data.

This is what the steps look like.

  1. Set PowerShell execution policy to unrestricted
  2. PowerShell Script to create variables
  3. Batch file to copy and execute USMTutils.exe
  4. Create conditions for running USMTUtils.exe
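
The condition on the USMTUtils step is then just a Task Sequence Variable condition, for example USMTLoadStatSuccess equals 'True'. If loadstate did not report a successful run, the variable is 'False', the delete step is skipped and the state store folder is left intact as the fail-safe.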

 

May 25 2012
 

Sometimes, I wish the world was flat and that we all spoke the same language!!!

This would eliminate the need for time zones, as the sun would rise and set at the same time for everyone, and having a single language would make OS deployment so much easier. Granted, the lack of language diversity would also mean a lack of awesome cultures, and a lack of time zones would mean no more jet lag and no more getting up in the middle of the night to talk to people in the antipodes… Actually, that might not be such a bad thing.

ANNNNYWAY…

My current client has offices all over the world. I use the term 'offices' very loosely, as some are nothing more than a relocatable building on the cold, wind-swept plains of Mongolia, deep in the heart of Chile or in the remote outback of Australia. This makes localization critical; there isn't much point having English installed as the default language when the users' primary, and sometimes only, language is Spanish. It just causes unnecessary support calls asking how to change everything back to Spanish.

Localization of Windows 7 deployments can be done in a single image by using the Microsoft Deployment Toolkit (MDT) and its customsettings.ini file. This one little ini file has more power than most people realise, and learning how to harness it will make any deployment so much easier.

The problem I had was that the default language for Chile wasn't being set. I was getting the keyboard, time zone and user location, but not the correct language. This meant that every time a user logged in everything was in English; great for me, not so great for my hot dog loving Chilean friends. The cause, and therefore the solution, was quite obvious really; it just took me longer to work it out than I would have hoped.

What my problem was… Chilean isn't a language, but Spanish is!

I was trying to set everything to ES-CL, which worked for UserLocale but not for UILanguage and SystemLocale.

Once I changed customsettings.ini to have the following values for Chile

UserLocale=ES-CL
UILanguage=ES-ES
SystemLocale=ES-CL

Everything worked!

This is what my final customsettings.ini file looks like.

For testing different sites' settings I change 10.0.168.190=Brisbane to 10.0.168.190=Santiago.
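
The full file isn't reproduced here, but a cut-down sketch of the Chile-relevant pieces, assuming a DefaultGateway-based mapping like the one above (the Santiago gateway address, keyboard locale and time zone values are illustrative), would look something like this:

[Settings]
Priority=DefaultGateway, Default

[DefaultGateway]
10.0.168.190=Brisbane
10.1.168.190=Santiago

[Santiago]
UserLocale=ES-CL
UILanguage=ES-ES
SystemLocale=ES-CL
KeyboardLocale=es-CL
TimeZoneName=Pacific SA Standard Time

[Default]
OSInstall=Y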

Now that Chile has its preferred language set as the default, I can get back to searching for the elusive Babel fish that will make this problem disappear forever.

Mar 05 2012
 

I came across this strange issue yesterday, something I have never seen before, and after some searching of the pipes (internet) it looks like no one else has either.

When I install SCCM I never use service accounts; I prefer to use the computer (system) account of the SCCM server instead, as service accounts can have their passwords changed and can get locked out. The computer account can do everything you need it to: add it to the local admins on your target machines for client push, and add it to the local admin group on your secondary site servers to enable the install of the secondary site. Since the server's computer account is a local admin, you don't have to worry about permissions during the install. This also works for domain controllers; I know they don't have local groups as such, but if you add the server's computer account to the BUILTIN Administrators group in AD it has the same effect and allows you to install the agent.
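
For example, adding the site server's computer account to the local Administrators group on a target machine is a one-liner (the server name SVR-CM01 here is hypothetical; computer accounts are referenced with a trailing $):

net localgroup Administrators "OPTIMUSPRIME\SVR-CM01$" /add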

Software update points can use the system account as long as the proxy is configured correctly, and senders are configured by default to use the site server's computer account for communication between servers.

You get the idea: the computer account can do everything that you need. That is why I like to use it, and I had never had any issues using it. That is, until I got an issue with senders to secondary sites.

The issue raised its head when we noticed that no distribution points were updating. What made this strange was that everything had been fine the day before.

TO THE LOGS!!!

As normal with any SCCM troubleshooting, the first thing to do was head to the log files, which in this situation were distmgr.log and sender.log. Distmgr.log looked fine: not much activity, but nothing to suggest any issues. However, once I opened sender.log I saw that the world was not in order.

Cannot connect to server SVR-CM02.OptimusPrime.ltd at remote site S06, won’t try send requests going to site S06 for an hour or until there are no active send requests. SMS_LAN_SENDER 01/03/12 2:35:11 PM 11184 (0x2BB0)

There is no existing connection, Win32 error = 5 SMS_LAN_SENDER 01/03/12 2:35:11 PM 11400 (0x2C88)

Error during connection to \\SVR-CM02.OptimusPrime.ltd\SMS_SITE (5). SMS_LAN_SENDER 01/03/12 2:35:11 PM 11400 (0x2C88)

Error is considered fatal. SMS_LAN_SENDER 01/03/12 2:35:11 PM 11400 (0x2C88)

Cannot connect to server SVR-CM02.OptimusPrime.ltd at remote site S06, won’t try send requests going to site S06 for an hour or until there are no active send requests. SMS_LAN_SENDER 01/03/12 2:35:11 PM 11400 (0x2C88)

There is no existing connection, Win32 error = 5 SMS_LAN_SENDER 01/03/12 2:35:12 PM 11184 (0x2BB0)


One thing I noticed was that nothing was showing up in red like it normally does in the trace viewer. Error 5 is access denied, so I started looking at the secondary site to ensure that the primary server had permission: I checked the local admin group and the SMSPKGD$ share, and everything looked normal. After some head scratching I thought I would delete the sender, wait a bit (time to get a coffee), then recreate it. Unfortunately, no success. For shits and giggles I decided to add my admin user account to the sender to see if that made a difference, and what do you know, I was presented with more giggles than shits. IT WORKED. So I changed it to another service account, Transform.ltd\Serviceaccount, which has the required rights, and it is still happy.

I have no idea why the computer account stopped working, but now that it is working with a service account I am happy to leave it that way.

Feb 28 2012
 

Deploying software is Config Mgr's bread and butter: create a package, program and advertisement, point it at the 'All Systems' collection and hit the go button. However, most of the time a little more control and finesse is required. This control and finesse comes from collections; they enable you to be more selective about which machines or users you target for deployment.

There are times when you want to deploy software to all machines, but most of the time you just want to hit a selection of them. Let's take Office 2010 for example. You might need it to go to all Win7 machines, but not to the developers', servers or control systems machines. There are many ways you can achieve this. You can create an AD group of machines which you want to get the software and base the query on that, or, on the flip side, a group of machines which you do not want to have the software and use that as your criteria, or you could use direct membership… as I said, there are many, many ways.

My personal favourite is to use targeted collections. A targeted collection is 'all machines which don't have X software installed'. To learn how to create these collections, follow this post: http://scug.be/blogs/sccm/archive/2010/08/19/configmgr-query-not-installed-software-amp-subselect-query-s.aspx

This all works great until the following happens: the development team complains that they don't want Office 2010 and want to stay with 2003 (not sure why, as they only develop internal apps and everyone will soon have Office 2010, but you can't argue with a dev, it's a waste of time :) ). You still need to exclude servers, as there isn't much call for Office to be installed on a domain controller.

So what you now need to do is exclude specific machines. Again, there are many, many ways to do this, from creating an AD group or another collection, to doing it directly in the query.
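
As a sketch of the query approach (the machine names here are purely illustrative), the collection query rule might exclude machines by name like this:

select SMS_R_System.ResourceID, SMS_R_System.ResourceType, SMS_R_System.Name, SMS_R_System.SMSUniqueIdentifier, SMS_R_System.ResourceDomainORWorkgroup, SMS_R_System.Client
from SMS_R_System
where SMS_R_System.OperatingSystemNameandVersion like "%Workstation 6.1%"
and SMS_R_System.Name not in ("DEV-PC01", "DEV-PC02", "CONTROL-PC01")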

Dec 14 2011
 

This is an update to a previous post on the same topic; however, this one is aimed at people who have Windows 7 or have deployed PowerShell to their XP machines.

As the title describes, this checks to see whether you are on a wireless network before continuing the build, or any task sequence for that matter.
It goes without saying that this script won't run in WinPE, as WinPE doesn't have PowerShell :( Hopefully this will change in the near future.

Remember that this can be run without having to modify the execution policies, as outlined in this post.
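
The script isn't reproduced here, but a minimal sketch of the kind of check it performs (the adapter name matching and the exit codes are my own assumptions) looks like this:

# Look for a connected network adapter that appears to be wireless
$wireless = Get-WmiObject -Class Win32_NetworkAdapter -Filter "NetConnectionStatus = 2" |
    Where-Object { $_.Name -match 'Wireless|WiFi|802\.11' }

if ($wireless) {
    Write-Host "Wireless connection detected - stopping."
    exit 1    # a non-zero exit code fails the task sequence step
}
exit 0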

Nov 09 2011
 

PowerShell has been designed with security in mind: you can't just double-click a script to run it, and even when you right-click and execute a script you first need to sort out signing and the execution policy. This is a good thing; considering the power of PowerShell and the evil it could do, I like it that way.

So this is why the following seems strange; I am not sure whether it is an oversight or by design. What it allows you to do is run a script by passing it in line by line. This could be useful if you want a scheduled task on a server without changing the execution policy for the user, something which might not be a great idea when the scheduled task runs as the system account.

You can use this to pass the script line by line:

Powershell.exe Get-Content 'C:\Scripts\Pigs.ps1' | powershell.exe -noprofile -

The trick is the "-" on the end, which tells the second PowerShell instance to read its commands from the pipeline. Another way to achieve a similar result is to run the script with powershell.exe -NoProfile -ExecutionPolicy Bypass -File. However, using 'Bypass' suppresses any prompts and warnings, so IMO it is best avoided.

Running in a Task Sequence

The other situation in which this could be useful is running PowerShell scripts in a task sequence: instead of adding a step that runs powershell.exe -Command "Set-ExecutionPolicy Unrestricted", you can pipe the script in the same way.

When using this in a task sequence you will need to supply the location of the script. This can be done in the following ways.

If the script is in a package, then:

Powershell.exe Get-Content '.\Pigs.ps1' | powershell.exe -noprofile -

Or, if you add it to the MDT scripts folder, you can use:

Powershell.exe Get-Content '%deployroot%\scripts\Pigs.ps1' | powershell.exe -noprofile -

You will also need to add

"%systemroot%\system32\WindowsPowerShell\v1.0" as the Start in location.

Nov 03 2011
 

Let me first start by saying that I am not great at VBScript. I prefer PowerShell; I won't get into the reasons why, but I do.

So when a question was recently posted on myITforum about how to change the prefix of a computer name from XP to W7, I knew a script would be the answer. Not sure what made me think 'why not, I'll give it a go', but I am glad I did.

In terms of scripts it is very simple: if the first two characters equal 'XP', replace them with 'W7'. As I said, simple, but there is something about producing a script yourself which is very satisfying.

Like most scripts the logging makes up the bulk of it.
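
The VBScript itself isn't reproduced here, but for illustration the core logic (minus the logging) looks something like this in PowerShell, the language I would normally reach for; the rename call is a sketch and needs admin rights plus a reboot to take effect:

$name = $env:COMPUTERNAME
if ($name.Substring(0,2) -eq 'XP') {
    $newName = 'W7' + $name.Substring(2)
    # Rename the machine via WMI; the change applies after a restart
    (Get-WmiObject -Class Win32_ComputerSystem).Rename($newName) | Out-Null
}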

Oct 20 2011
 

Making WinPE boot images is one of those things I need to do on a semi-regular basis, but unfortunately not regularly enough to remember the syntax for DISM. To overcome this I created a PowerShell script to do it for me.

The script updates both the x86 and x64 WinPE WIM files.

  • It checks to make sure that the Windows Automated Installation Kit (Windows AIK) is installed.
  • Creates folders to mount the WIM files to.
  • Takes a copy of the original WIM files.
  • Mounts the WIMs, then adds the following components: Scripting.cab and WMI.cab.
  • Adds Trace32.exe and Trace64.exe into the respective WIM files.
  • Un-mounts the WIMs.

Other components, such as HTA and ODBC support, can be added, but since I didn't need them I have commented them out.

To add Trace32.exe and Trace64.exe into WinPE, create a folder called Trace in 'C:\Program Files\Windows AIK\Tools\Servicing'.

Both Trace32.exe and Trace64.exe are available here

http://blog.esmnetworks.com/operating-system-deployment/sms-trace64-and-trace32-for-winpe/
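
The full script isn't reproduced here, but a minimal sketch of the x86 servicing steps, assuming default Windows AIK paths (the WIM and mount folder locations are my own assumptions), looks something like this:

$aik      = 'C:\Program Files\Windows AIK'
$fpDir    = "$aik\Tools\PETools\x86\WinPE_FPs"
$mountDir = 'C:\Mount\x86'
$wim      = 'C:\WinPE\winpe_x86.wim'

New-Item -ItemType Directory -Path $mountDir -Force | Out-Null
Copy-Item $wim "$wim.bak"                              # keep a copy of the original WIM

dism /Mount-Wim /WimFile:$wim /Index:1 /MountDir:$mountDir
dism /Image:$mountDir /Add-Package /PackagePath:"$fpDir\winpe-scripting.cab"
dism /Image:$mountDir /Add-Package /PackagePath:"$fpDir\winpe-wmi.cab"
# dism /Image:$mountDir /Add-Package /PackagePath:"$fpDir\winpe-hta.cab"   # optional, commented out

Copy-Item "$aik\Tools\Servicing\Trace\Trace32.exe" "$mountDir\Windows\System32"
dism /Unmount-Wim /MountDir:$mountDir /Commit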

Oct 13 2011
 

Everyone loves a good back end, but now there is something for people who love a good front end…

http://www.davitools.com/fepstools/fepstools.aspx

This handy utility helps you get the right command line for the PsTools, which include the most useful of them all, PsExec. I am not going to explain the benefits of the PsTools; read that here.

Since there is already good documentation, I will pass on re-writing the benefits of, or how to use, Fepstools; you can read about it here:

http://www.davitools.com/fepstools/documentation.aspx