
NTLMv2 or not NTLMv2, that is the question.


Enabling NTLMv2 is a project always fraught with challenges, mostly due to the lack of visibility into exactly which authentication protocol each client machine is using.  Management is often not gung-ho about the "try it and see what breaks" methodology of identifying systems that cannot support NTLMv2.  As such, many administrators have asked how to deploy NTLMv2 to the enterprise with minimal impact to client systems.

Up until now, sniffing network traffic was the only option available, and not a very good one.  With the release of Windows Vista and 2008, this becomes dramatically easier: event filtering is improved AND security auditing now records far more detail.  And, since there is plenty of documentation on how to deploy NTLMv2, this post will just tell you how to identify which systems are not using NTLMv2.

  1. On a Windows Vista or 2008 machine use the command line to enable auditing for Logon Events.
    "auditpol /set /subcategory:logon /success:enable /failure:enable"
  2. Create a custom view or filter the security log using the following syntax (copy/paste the content between the quotes):
    "<QueryList> <Query Id="0" Path="Security"> <Select Path="Security">*[System[Provider[@Name='Microsoft-Windows-Security-Auditing'] and (EventID=4624)] and EventData[Data[@Name='LmPackageName']!='-'] and EventData[Data[@Name='LmPackageName']!='NTLM V2']]</Select> </Query> </QueryList>"

If auditing is enabled on the DCs, all the domain accounts being used anywhere in the enterprise will be caught.

Check out Eric Fitzgerald's blog for how to script wevtutil.  If used with the above filter, you can easily automate pulling the data you want out of the security log.  Also, my thanks to Eric for the insight into the fact that we now audit the hash used during authentication.
http://blogs.msdn.com/ericfitz/archive/2008/07/16/wevtutil-scripting.aspx
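For example, here is a minimal sketch of how that automation could look.  It assumes the QueryList XML from step 2 has been saved to a file named ntlmv1-query.xml and that dclist.txt is a text file with one DC name per line (both file names are mine):

  :: Pull matching events from each DC into a per-DC text file (use %d instead of %%d outside a batch file)
  for /f %%d in (dclist.txt) do wevtutil qe ntlmv1-query.xml /sq:true /r:%%d /f:text > %%d-ntlmv1.txt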

 

Here is a sample event for reference:
Log Name:      Security
Source:        Microsoft-Windows-Security-Auditing
Date:          5/28/2008 9:51:11 AM
Event ID:      4624
Task Category: Logon
Level:         Information
Keywords:      Audit Success
User:          N/A
Computer:      computer.contoso.com
Description:
An account was successfully logged on.

Subject:
 Security ID:  NULL SID
 Account Name:  -
 Account Domain:  -
 Logon ID:  0x0

Logon Type:   3

New Logon:
 Security ID:  ANONYMOUS LOGON
 Account Name:  ANONYMOUS LOGON
 Account Domain:  NT AUTHORITY
 Logon ID:  0x1161d3f3
 Logon GUID:  {00000000-0000-0000-0000-000000000000}

Process Information:
 Process ID:  0x0
 Process Name:  -

Network Information:
 Workstation Name: SOURCEMACHINE
 Source Network Address: 192.168.X.X
 Source Port:  4996

Detailed Authentication Information:
 Logon Process:  NtLmSsp
 Authentication Package: NTLM
 Transited Services: -
 Package Name (NTLM only): NTLM V1
 Key Length:  128

This event is generated when a logon session is created. It is generated on the computer that was accessed.

The subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe.

The logon type field indicates the kind of logon that occurred. The most common types are 2 (interactive) and 3 (network).

The New Logon fields indicate the account for whom the new logon was created, i.e. the account that was logged on.

The network fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases.

The authentication information fields provide detailed information about this specific logon request.
 - Logon GUID is a unique identifier that can be used to correlate this event with a KDC event.
 - Transited services indicate which intermediate services have participated in this logon request.
 - Package name indicates which sub-protocol was used among the NTLM protocols.
 - Key length indicates the length of the generated session key. This will be 0 if no session key was requested.
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Microsoft-Windows-Security-Auditing" Guid="{00000000-0000-0000-0000-000000000000}" />
    <EventID>4624</EventID>
    <Version>0</Version>
    <Level>0</Level>
    <Task>12544</Task>
    <Opcode>0</Opcode>
    <Keywords>0x8020000000000000</Keywords>
    <TimeCreated SystemTime="2008-05-28T13:51:11.177Z" />
    <EventRecordID>63818</EventRecordID>
    <Correlation />
    <Execution ProcessID="656" ThreadID="752" />
    <Channel>Security</Channel>
    <Computer>computer.contoso.com</Computer>
    <Security />
  </System>
  <EventData>
    <Data Name="SubjectUserSid">S-1-0-0</Data>
    <Data Name="SubjectUserName">-</Data>
    <Data Name="SubjectDomainName">-</Data>
    <Data Name="SubjectLogonId">0x0</Data>
    <Data Name="TargetUserSid">S-1-5-7</Data>
    <Data Name="TargetUserName">ANONYMOUS LOGON</Data>
    <Data Name="TargetDomainName">NT AUTHORITY</Data>
    <Data Name="TargetLogonId">0x1161d3f3</Data>
    <Data Name="LogonType">3</Data>
    <Data Name="LogonProcessName">NtLmSsp </Data>
    <Data Name="AuthenticationPackageName">NTLM</Data>
    <Data Name="WorkstationName">SOURCEMACHINE</Data>
    <Data Name="LogonGuid">{00000000-0000-0000-0000-000000000000}</Data>
    <Data Name="TransmittedServices">-</Data>
    <Data Name="LmPackageName">NTLM V1</Data>
    <Data Name="KeyLength">128</Data>
    <Data Name="ProcessId">0x0</Data>
    <Data Name="ProcessName">-</Data>
    <Data Name="IpAddress">192.168.X.X</Data>
    <Data Name="IpPort">4996</Data>
  </EventData>
</Event>


Performance Optimization Philosophy


Optimizing performance is not just about making things run faster, it is about making them run appropriately fast based on perception and cost.

Several questions may be posed at this junction:

  • Why does philosophy need to be discussed?
  • Isn't performance just about making things run faster?

There is a common misconception that performance optimization is about making things faster.  In actuality, optimizing is about finding the correct balance between any number of tradeoffs.  Often, and simplistically, this optimization balances either cost against hardware (think upgrading a processor) or cost against labor (think development/test time).

The other component is user perception.  This is often the most challenging part of the equation and is often what triggers reviews of existing infrastructure due to perceived poor performance.  It is not uncommon that the software is behaving normally and that the issue is not due to a hardware scalability problem, but the end user still feels that the software is not fast enough.  Unfortunately, in these scenarios there is often little that can be done in the short run other than waiting for hardware to catch up with the needs of the code, optimizing the code, or waiting for fundamental architectural changes (like moving from x86 to x64) to eliminate hardware bottlenecks.  In short, the return on investment (ROI) of throwing more hardware at the code to fix user perception may not be there.

Here are some analogies to illustrate the point:
My grandmother is looking for a new car.  She lives on a fixed income and, since she is getting older, doesn't travel much anymore, mostly to the grocery store or bingo, which are within 3 miles of her house.  Due to her needs, the Lamborghini Murciélago is probably a little excessive though it probably performs well.  I'm sure that pretty much any car that runs will be performant enough for her needs.  Furthermore, making the investment in the Murciélago is probably not going to fix her perception that it takes too long to go the 3 miles to the supermarket.

Stretching the analogy a little:
Regardless of what car she gets, it is not a plane, thus it will never fly.  But also, there is no guarantee that if she gets a plane it will be faster than a car (the land speed record is 766 mph and a Piper Cub goes about 130 mph).  Similarly, it is critical never to assume that moving from x86 to x64 will speed up an application.  A better analogy is moving from a car to a tractor trailer, where more stuff can be stored (increased addressable memory) so fewer trips (to disk/network) need to be made.

 

Identifying Stale User and Computer Accounts

Using AD to determine whether or not people are still working for the company and are allowed to log on to the systems is not ideal.  Account management should happen based on knowing which accounts should and should not be in use, not by figuring out which haven't been used.  Realistically, if a fired employee is still logging on to the system, we are not going to pick up the account as stale and disable/delete it as actually needs to be done.
That said, in the real world things aren't always quite that easy.  As such, regardless of whether the account is a user account or a computer account, we have several attributes stored with the account that help us determine if it has been used recently.  Unfortunately, they are all potentially inaccurate in one fashion or another.  These attributes are pwdLastSet, lastLogon, and lastLogonTimeStamp (as of Windows 2003 DFL).
Essentially you can determine if the account is stale by ensuring all of the attributes are over a designated threshold.  A starting threshold for users is 3 times the maximum user password age and for computers is also 3 times the maximum computer password age.  In short, if both pwdLastSet and lastLogonTimeStamp are greater than the threshold, it is pretty safe to delete the account, unless you are in academia and the faculty member may be on sabbatical.
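For example, assuming a 90-day maximum user password age and the default 30-day computer password age (adjust to your own policy), dsquery can pull candidate accounts; -stalepwd is based on pwdLastSet (in days) and -inactive on lastLogonTimeStamp (in weeks, and it requires 2003 DFL):

  dsquery user -stalepwd 270 -limit 0
  dsquery user -inactive 39 -limit 0
  dsquery computer -stalepwd 90 -limit 0
  dsquery computer -inactive 13 -limit 0

Accounts that show up in both the -stalepwd and the -inactive output for their class are the safest candidates.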
If you don’t have both of those, it gets a little more questionable as to whether or not the account is still in use, as each attribute can incorrectly report how recently the account was used in the following fashions:
pwdLastSet
  • This is systemically inaccurate if either the domain has no password policy specifying an age limit or the account has the userAccountControl PASSWD_CANT_CHANGE bit set.  Note, computer accounts can be configured to not change their password, but I have not observed many environments which change this setting.
  • This can also misrepresent the recentness of account usage if, for example, the user or computer has not authenticated to the network between when the password needed to be changed and whatever threshold you specify.  Think of a user on vacation or sabbatical (common in academic environments).
  • This is also inaccurate if a user has a laptop and travels for extended periods.  Since the system is not on the network to communicate with a Domain Controller at boot, it cannot reset the account password.  This can be addressed by several methods: restarting the Netlogon service after the VPN has been established, or using nltest or netdom to reset the password in a VPN startup script (a rough sketch follows this list).
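A rough sketch of that kind of post-VPN-connect script (CONTOSO is a placeholder for the domain name; use whichever approach fits your VPN client):

  net stop netlogon
  net start netlogon
  :: or force a machine account password change once the tunnel is up
  nltest /sc_change_pwd:CONTOSO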
lastLogon
  • The data in this attribute is not replicated, thus this is only accurate on the DC the user last logged into.  Unless all DCs for the domain are queried, the data may be inaccurate.
  • Since AD clients are site aware, this also means that if there was only one DC in a remote location (or as happens sometimes, only one DC listed in WINS or DNS if they aren’t configured properly) and that system is decommissioned or lost due to some sort of outage it is entirely possible any indication the account ever logged in no longer exists.
  • This only tracks interactive logons.  This essentially means that a user has to press Ctrl+Alt+Del in order for this to register.
    Terminal services logons are a different type of logon in the SECURITY_LOGON_TYPE enumeration, of type RemoteInteractive and may not update lastLogon.  At some point I will test this and update the blog (possibly, “best laid plans of mice and men” and all that).
  • This is updated only when a client logs on.  If a user does not log off their machine for 90 days and the machine does not reboot, this will report the user has last logged on 90 days ago, which is exactly the truth.  It does not update in order to report that the user has been accessing the system and the network for the last 90 days.
  • Updating only when logon occurs also affects computers if they are not rebooted.  If the computers have remained up and running, the lastLogon is when they booted up.  This is highly unlikely to impact client systems, but may impact servers if they are up for greater than a specified threshold.
    Extremely long uptimes are much less likely if security updates are being deployed regularly.
  • (Updated 11/20/2014) – this does not track users who log on via cached credentials while the computer is offline and then connect, for example in a VPN scenario.
lastLogonTimeStamp
  • This requires Windows 2003 domain functional level (DFL).
  • Prior to Windows 2003 SP1 this did not track all network logons.
    http://support.microsoft.com/kb/886705
  • This can be up to 14 days off, but this shouldn't be a problem if your threshold is sufficiently high.
  • As with pwdLastSet, this is also inaccurate if a user has a laptop and travels for extended periods.  The concerns and methods to address this are the same methods as pwdLastSet.
Also, when pulling this data you could run into null values, and these cause the following concerns:
  • pwdLastSet – the password gets set, updating this attribute, if you use any of the native Microsoft tools to create the account or when the computer is first joined to the domain.  If this is "0" (zero), some 3rd party code probably created the account and the computer never joined.  Except for a minor inconvenience to whomever pre-created the account, this account can be safely deleted unless one of the other timestamps is not null.
  • lastLogon – this could be null for any number of reasons.  The user never logged on interactively (think user who only uses web based e-mail), the user never logged on to the DC(s) queried, or the user last logged on to a DC that no longer exists.
    If this is null on all DCs and lastLogonTimeStamp is not available, do not assume the account is stale unless no decommissions of DCs have occurred within the threshold.
  • lastLogonTimeStamp – if this is null the account has never logged on since the domain was brought to DFL 2003.  This is only a concern if the DFL was raised within the threshold designated for the account to be stale.
Be careful with placeholder computer accounts for non-Windows OSs, as they may behave differently.  If you look at the operatingSystem attribute on the computer object you can determine if it needs more attention (a query sketch follows the list below).  Examples:
  • Microsoft Cluster Server Virtual Server computer accounts.
  • OS X
  • Unix Interop
  • SAMBA
  • NetApp
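As a starting point for that review, dsquery can dump each computer name alongside its operatingSystem attribute (domainroot can be swapped for a specific OU DN):

  dsquery * domainroot -filter "(objectCategory=computer)" -attr name operatingSystem -limit 0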

Update 11/20/2014:

There are several scenarios where non-logon processes update lastLogonTimeStamp even though the user has not logged on – specifically, anything that consumes Service-for-User (S4U).  This behavior was introduced in Windows 2003 SP1.
Another scenario that causes lastLogonTimeStamp to update is pre-populating passwords on RODCs.  This is specifically known to happen via repadmin /rodcpwrepl.
LastInteractiveLogonTimeStamp – this has all the limitations of lastLogon, except that it is replicated.

Cases have been found where users known to have left the company were still showing as active and logging in.  However, further investigation showed that infrastructure consuming S4U to audit permissions was updating lastLogonTimeStamp.  This returns us to the recommendation at the start of this article that HR data needs to be authoritative.  These could have just as easily been users continuing to log on and gain access to corporate resources they no longer should have.


Trials and Tribulations of Learning the Vista Automated Installation Functionality


I've pretty much come to the end of my initial learning curve on how to automate Vista installations using the AIK.  There is some great documentation out there on how to execute the specific tasks necessary to add drivers and packages to the image.  However, there are some gaps on how to tie it all together.  It's the subtleties that really hurt my learning curve (and installing the OS over and over and over... to test the effect of each change) that don't seem to be well documented anywhere.  I'm hoping to share at least the trickiest items that I encountered in order to save someone else many hours of learning.  As a note, I have not tried to use what used to be called "Business Desktop Deployment" (BDD) and is now the "Microsoft Deployment Toolkit" (MDT), and some of the challenges I describe below may be addressed by it.

I'm doing my deployments via Windows Deployment Server (WDS), the replacement for Remote Installation Server (RIS).  In RIS, I never really used the RIPREP functionality because I found the administrative burden of creating a new RIPREP image for each hardware platform and every time I needed to deploy new software excessive.  Though RIPREP could push the complete OS and applications much faster than going through the install process, I just found it easier to deal with one scripted install I could add drivers for all the hardware to, and deploy applications via SMS. 

First off, for anyone who has used RIS and the "unattend.txt" methods of installs in the past, there are a couple of features I really miss or have not yet figured out how to do in WDS:

  • What I miss most from RIS: If it did not find a computer object with the netbootGuid attribute populated with the machine's UUID, it would prompt for a computer name during the initial startup screens.  This meant I did not have to pre-stage the system before the user or SA installed it, but if it ever had to be re-installed it would keep the same name and OU location in the hierarchy (very useful in DR scenarios), since RIS would populate the netbootGUID with the UUID upon creation (WDS has an approval process that requires manual intervention, but doesn't have an "auto approval" mode).  Additionally, since the computer name is really the only unique piece of information needed for each and every system, I really liked the fact that I could deliver a 4 step install process to users and administrators and leave them with a fully provisioned system:
    1. Press F12 on boot
    2. Log in
    3. Enter computer name
    4. Go do something else for several hours
  • In RIS, if the security on the images was managed such that a user was only allowed to see one image, RIS automatically selected that image and installed it.  Regardless of whether or not only one image is available, WDS prompts the end user to select an image.
  • In the "unattend.txt" install automation method, the disk configuration options were tied to the image being deployed.  I really liked this feature since I could have both a server OS image and a client OS image on the deployment server and allow the server operator to create the partitions they wanted according to their needs while automating the partitioning of the client system disks.  Now I think I need two WDS servers to provide the same level of functionality.  I don't think this is WDS limitation, but more a limitation related to the 2 stage install process Vista uses.  I still miss this functionality, regardless of where it falls.
  • The ability to have one OS build/WIM and multiple configuration files if the only difference, for example, is that one department doesn't want their users to have certain windows features installed by default (think the default Windows games).
  • The RIS UI loaded very fast; this cut down the time an administrator was sitting idle during a system rebuild, cutting operational costs.
  • What I really like about the new tools:

    • Multi-cast - large deployments = nuff said.
    • The administrative tools are much better.
    • The tools and documentation for generating the scripted installs made life a lot easier than the initial learning curve I recall going through with unattend.txt.
    • Driver management.  Run a couple of command lines and the image is updated.  There is no longer a need to manually update a text file (typos... grrr) and build out a folder structure for every image/driver set managed.  Adding new drivers to the boot image is much easier than in RIS and uses the same methodology as the install images, which is a very nice win.  And no more drivers all using oemsetup.inf tripping over each other in the boot image and fighting with that.
    • Drive partitioning tools are much better.  Even if the UI can't provide the functionality needed, the ability to drop to a command line and use diskpart for the fine grained configuration desired is awesome.

    Idiosyncratic Windows Deployment Server Vista Setup Options


    I found getting both of the below topics to work the way I wanted in Windows Deployment Server (WDS) to be surprisingly tricky and fraught with unexpected results that took me quite a while to figure out.  After all, one has to run through some large portion of the OS install before finding out that it fails.  Multiple mistakes = multiple OS Installs, which drag out this learning curve rather significantly.  I hope to save people some of these reboots and time lost by sharing what I learned.

    Computer Naming and Domain Join (applies to Windows 2008 Server as well):

    I really like eliminating the majority of repetitive trivial tasks.  When managing desktops in an enterprise, something as simple as A) logon and change the computer name, B) reboot, C) join the computer to the domain, and D) reboot, which only takes 10 minutes can have a significant impact.  Even on a deployment as small as 5000 computers, this can add up to a significant cost.  5000 systems * 10 minutes per system / 60 minutes per hour = ~833 man hours, just renaming the computer and joining it to the domain.  SYSPREP and the mini setup do a lot to help reduce this impact, but that still means that some administrator has to revisit the computer after the OS is deployed to the box and before a user can work.  This seems an incredibly inefficient use of labor to me.

    As a result of this, I like to have the system join the domain during the install process.  Unfortunately, this was a little more challenging in the Vista automation than one would suspect, and there are several postings on this throughout various forums.  However, the bits of information are scattered about in a fashion that doesn't really help to put the full picture together.  The key items I learned are:

    • %MACHINENAME% will pick up the computer name from AD if the netbootGUID attribute is populated with the system UUID as expected (see the earlier posting regarding this).
    • %MACHINENAME% will function as "*" if the above case is not true.
    • "*" will give the computer a random name (as documented).  However, even if the "Microsoft-Windows-UnattendedJoin" element of the XML is populated correctly, the computer will not be joined to the domain when the system has to generate a name.

    Within the WDS/AIK space, this forces an administrator to pre-stage each and every machine.  Of course, joining the domain can be managed outside of the WDS/AIK space either by manual methods or by scripting (i.e. using netdom and batch files) to overcome this limitation, but I wanted to avoid using "Autologon" functionality and writing "code" in order to accomplish something that could be taken care of during the install process.  I may, at some point, end up working on this in order to address the scenarios above where the automated join fails, but I have my learning curve on the "Microsoft Deployment Toolkit" to go through first to see if it provides the level of functionality I desire.
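    For illustration, a minimal sketch of the scripted fallback mentioned above, using netdom (the domain, OU, and join account are placeholders; /passwordd:* makes netdom prompt for the password):

        netdom join %COMPUTERNAME% /domain:contoso.com /ou:"OU=Workstations,DC=contoso,DC=com" /userd:CONTOSO\joinaccount /passwordd:* /reboot:30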

    Note:  At some point we seem to have updated Windows so that the computer can be renamed and joined to a domain in one shot, though it seems that many people either don't know or don't use this.  It is a little tricky too; the computer name must be changed first, then the domain membership.  If this is done in the reverse order, it won’t rename the computer account in AD that was created when the domain was joined.

    Required Unattend.XML settings:

    • Specialize Pass
      • Microsoft-Windows-Shell-Setup\ComputerName = %MACHINENAME%
      • Microsoft-Windows-UnattendedJoin\Identification\JoinDomain = <Enter Domain Name>
      • Microsoft-Windows-UnattendedJoin\Identification\MachineObjectOU = <Enter OU>
      • Microsoft-Windows-UnattendedJoin\Identification\Credentials\Domain = <Enter Domain Name>
      • Microsoft-Windows-UnattendedJoin\Identification\Credentials\Password = <Enter a Password>
        • Note:  This password will not be encrypted unless the following setting is enabled:  Hide Sensitive Data in an Answer File
      • Microsoft-Windows-UnattendedJoin\Identification\Credentials\UserName = <Enter Account Name>

    Eliminating the Mid-Install Wizard (Out-Of-the-Box-Experience):

    Also, within my continued endeavors to make installs as low touch as possible, I feel having the computer pause for human intervention somewhere in the middle of installing the OS undermines much of the other automation.  Thus, a wizard that pops up mid-install to ask what language the computer should run in and to create a local account is something I would seek to eliminate.

    Though the wizard presents a reasonable question (locality settings) that the average user could handle, it wouldn't bother me so much if there wasn't another delay that prevents the system from being used immediately afterwards (the computer goes through the performance tests to determine the Windows Performance Index).  Having a user or administrator sit through that progress bar is also a productivity impact, which again is why I seek to eliminate that interim step.  Also, since the workstation is joined to a domain, I do not want to create additional and unused accounts for no good reason just to make the wizard disappear from the user experience (though I would have settled for doing so, and saw some suggestions on forums to this end).

    Normally the wizard is shown if regional settings and the creation of a local account are not both configured.  As I stated above, I wanted to avoid creating a user account, so as a workaround I found that setting Microsoft-Windows-Shell-Setup\OOBE\SkipMachineOOBE to "true" in the oobeSystem pass bypassed this mid-install wizard.  The nice part is that this still allowed me to configure the settings I desired (i.e. regional settings) and bypass the others (the local user account).

    As a warning, the help content states that SkipMachineOOBE is deprecated and shouldn't be used, which may cause additional issues if the install process changes in future versions of Windows, but it currently works for my needs.  Also, heed the warning in the article: setting SkipMachineOOBE to true may leave the machine in an unusable state (see the Administrator account warning below).

    Required Unattend.XML settings:

    • oobeSystem
      • Microsoft-Windows-Shell-Setup\OOBE\SkipMachineOOBE = True

     

    Suggested Unattend.XML settings:

    • oobeSystem
      • Microsoft-Windows-International-Core\InputLocale = <Enter SelectedLocale>
      • Microsoft-Windows-International-Core\SystemLocale = <Enter SelectedLocale>
      • Microsoft-Windows-International-Core\UILanguage = <Enter SelectedLocale>
      • Microsoft-Windows-International-Core\UserLocale = <Enter SelectedLocale>
      • Microsoft-Windows-Shell-Setup\OOBE\HideEULAPage = true
      • Microsoft-Windows-Shell-Setup\OOBE\ProtectYourPC = 3
      • Microsoft-Windows-Shell-Setup\OOBE\NetworkLocation = Work
      • Microsoft-Windows-Shell-Setup\OOBE\SkipMachineOOBE = True
      • Microsoft-Windows-Shell-Setup\UserAccounts\AdministratorPassword = <Enter a Password>

    Administrator account warning:

    By default, the Administrator account is disabled on Vista.  Thus, if no local account is created and the computer does not get properly joined to the domain, the machine will appear to be useless.  This is not as bad as it seems: by following the guidance in Windows Vista Security : Built-in Administrator Account Disabled, the computer can still be joined to the domain.  In short, boot into "Safe Mode with Networking" and join the computer to the domain.

    Note:  I found a statement in the documentation for the unattended install settings that the Administrator account can be enabled via Microsoft-Windows-Shell-Setup\AutoLogon\Username.  I didn't have any luck getting this to work (Vista SP1 was the only version I tried), but it doesn't matter since there is the workaround above.

    On Windows 2008 Server, if the domain join fails the administrator is prompted to set the Administrator account password, so this is not a concern.

    Little Shop of Drivers


    I take all my drivers and put them in %DRIVERS_ROOT_PATH% (see batch code below) and the install images I want to mess with in %FILES_ROOT_PATH%.  With one folder per driver, the script iterates through each of the folders and runs peimg /inf for each folder.  While testing, this made it much easier to start over from scratch as I was trying to get different stuff to work.

    Note:  I wrote this for the x64 Install.wim, which only has 4 images in it; the x86 has 7.  Originally the image count in the script had to be changed for an x86 WIM so that all the images within the WIM would be updated, but the update below handles any number of images.

    Note:  I have put some work into this since my initial posting, and determined that updating, rather than reposting, made the most sense.  This will now also automate adding packages to the image so long as the packages are in %PACKAGES_ROOT_PATH% (ensure the directory name is the same as the .CAB file from the package so that it knows which CAB to install).
    This has also been generalized to work with a WIM that has any number of images.  This script assumes that the WIM file is in the root of the folder structure that the drivers and packages are in, but that can easily be changed using the SET statements below.

    @echo off
    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    ::Check Inputs
    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    IF "%1"=="" (
    Echo Enter the directory root for the drivers and packages to add to the image.
    GOTO END
    )

    IF "%2"=="" (
    Echo Enter WIM file name.  This must be in the root of the folder structure named by the first parameter.
    GOTO END
    )

    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    ::SET Variables
    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    SET REFERENCENAME=%1
    SET MOUNTPOINT=D:\FOO\%REFERENCENAME%
    SET FILES_ROOT_PATH=D:\%REFERENCENAME%
    SET IMAGEFILE=%FILES_ROOT_PATH%\%2
    SET DRIVERS_ROOT_PATH=%FILES_ROOT_PATH%\Drivers
    SET PACKAGES_ROOT_PATH=%FILES_ROOT_PATH%\Packages
    SET LOGS_ROOT_PATH=%FILES_ROOT_PATH%\Logs
    SET WIN_AIK_INSTALL_PATH=C:\Program Files\Windows AIK\Tools

    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    ::Ensure needed directories exist and are ready to be used
    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    if not exist %MOUNTPOINT% (md %MOUNTPOINT%) ELSE ("c:\Program Files\Windows AIK\Tools\x86\imagex.exe" /unmount %MOUNTPOINT%)
    if not exist %LOGS_ROOT_PATH% (md %LOGS_ROOT_PATH%) ELSE (del /s /q %LOGS_ROOT_PATH%>NUL)

    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    ::Identify number of images in WIM and process each image
    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    for /f "tokens=1,2 delims=:" %%i in ('imagex /info %IMAGEFILE%') do if "%%i"=="Image Count" SET IMAGE_COUNT=%%j
    Echo This WIM contains%IMAGE_COUNT% image(s).
    For /l %%i in (1,1,%IMAGE_COUNT%) do call :update %IMAGEFILE% %%i
    GOTO END

    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    ::Process per image steps
    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    :update
    Echo Updating %1 - Image #%2
    "c:\Program Files\Windows AIK\Tools\x86\imagex.exe" /mountrw "%1" %2 %MOUNTPOINT%
    for /f %%i in ('dir /ad /b %DRIVERS_ROOT_PATH%') do Call :InstallDriver %%i
    for /f %%i in ('dir /ad /b %PACKAGES_ROOT_PATH%') do Call :InstallPackage %%i %2
    "%WIN_AIK_INSTALL_PATH%\x86\imagex.exe" /unmount /commit %MOUNTPOINT%
    goto :EOF

    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    ::Install a specified driver
    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    :InstallDriver
    Echo Installing Drivers from %DRIVERS_ROOT_PATH%\%1
    "C:\Program Files\Windows AIK\Tools\PETools\peimg.exe" /verbose /inf=%DRIVERS_ROOT_PATH%\%1\*.inf /image=%MOUNTPOINT%>NUL
    IF ERRORLEVEL 1 ECho       ERROR:  Couldn't Install Driver "%1"
    goto :EOF

    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    ::Install a specified package
    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    :InstallPackage
    Echo Installing Package %PACKAGES_ROOT_PATH%\%1
    "%WIN_AIK_INSTALL_PATH%\Servicing\pkgmgr" /n:"%PACKAGES_ROOT_PATH%\%1\%1.xml" /o:%MOUNTPOINT%;%MOUNTPOINT%\Windows /s:%TEMP% /l:%LOGS_ROOT_PATH%\%2-%1>NUL
    IF ERRORLEVEL 1 ECho       ERROR:  Couldn't Install Package "%1"
    :: /m:"%PACKAGES_ROOT_PATH%\%1\%1.cab"
    GOTO :EOF

    :END
    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    ::Clean up variables
    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    SET IMAGE_COUNT=
    SET REFERENCENAME=
    SET IMAGEFILE=
    SET MOUNTPOINT=
    SET FILES_ROOT_PATH=
    SET DRIVERS_ROOT_PATH=
    SET PACKAGES_ROOT_PATH=
    SET WIN_AIK_INSTALL_PATH=
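    For reference, a hypothetical invocation (assuming the script above is saved as AddDriversPackages.cmd, with the WIM at D:\Vista_x64\install.wim and the Drivers and Packages folders alongside it):

        AddDriversPackages.cmd Vista_x64 install.wim

    This mounts each image in the WIM under D:\FOO\Vista_x64 in turn, injects every driver and package folder, and commits the changes.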

    Managing netbootGuid


    The netbootGuid attribute on AD computer objects is very important because it allows some deployment options (RIS and WDS) to locate the computer object that the hardware belongs to.  Unfortunately, certain activities, such as replacing the system board, swapping in new hardware but using a pre-existing computer name, or creating a new VM using an existing VHD, can invalidate the accuracy of the netbootGuid attribute stored in AD.

    I wrote the script below (more precisely cobbled together since my VBScript is very rusty) to automatically check to make sure the netbootGuid is correct and update it if it isn't.  This script can be deployed as a startup script via Group Policies to ensure every time the computer boots, the netbootGuid will be updated if needed.  However, in order for this to work, the ACLs on the computer objects must be changed to allow SELF to write to the netbootGuid property.
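    For example, granting SELF write access to netbootGUID on all computer objects under an OU might look something like the following (my own sketch with dsacls; adjust the OU DN to your environment and test before rolling it out broadly):

        dsacls "OU=Workstations,DC=contoso,DC=com" /I:S /G "SELF:WP;netbootGUID;computer"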


    Note:  I'm not entirely fond of having to write out to a temp file to turn the GUID into a byte array.  But it works.  I'm open to feedback on alternate methods as to how I can rewrite the function baConvertGuidToByteArray without writing out to the temporary file.  Thanks in advance.

    Note: Updated code 1/6/2014 based on comment below.

    <VBScript>

    'http://support.microsoft.com/kb/302467
    'The sample uses WMI to return the UUID on the system.
    'If a UUID can not be found on the system it returns all F's.
    'What RIS does in this case is it uses a zero'd out version of the MAC
    'address of the NIC the machine is booting off of.
    'This sample will return the value required to set the
    'netbootGUID attribute
    Option Explicit

    Call UpdateNetbootGuid(guidGetUUID, szGetDn)

    Function guidGetUUID
        Dim SystemSet, SystemItem, NetworkAdapterSet, NetworkAdapter
        Set SystemSet = GetObject("winmgmts:").InstancesOf("Win32_ComputerSystemProduct")
        For Each SystemItem In SystemSet
            If SystemItem.UUID = "FFFFFFFF-FFFF-FFFF-FFFF-FFFFFFFFFFFF" Then
                Set NetworkAdapterSet = GetObject("winmgmts:").InstancesOf("Win32_NetworkAdapter")
                For Each NetworkAdapter In NetworkAdapterSet
                    If NetworkAdapter.AdapterType = "Ethernet 802.3" And NetworkAdapter.Description <> "Packet Scheduler Miniport" Then
                        guidGetUUID = "00000000-0000-0000-0000-" & Replace(NetworkAdapter.MACAddress, ":", "")
                    End If
                Next
            Else
                guidGetUUID = SystemItem.UUID
            End If
        Next
    End Function

    Function szGetDN
        Dim objSysInfo
        Set objSysInfo = CreateObject("ADSystemInfo")
        'Set DN to upper case
        szGetDN = UCase(objSysInfo.ComputerName)
    End Function

    Sub UpdateNetbootGuid(guidUUID, szComputerDn)
        Dim oComputer
        'Bind to the computer object and update netbootGuid only if it differs
        Set oComputer = GetObject("LDAP://" & szComputerDn)
        If ByteArrayToGuid(oComputer.netbootGuid) <> guidUUID Then
            oComputer.Put "netbootGuid", baConvertGuidToByteArray(guidUUID)
            oComputer.SetInfo
        End If
        'Clean up
        Set oComputer = Nothing
    End Sub

    Function ByteArrayToGuid(arrbytOctet)
        If Not IsEmpty(arrbytOctet) Then
            ByteArrayToGuid = _
                Right("0" & Hex(AscB(MidB(arrbytOctet, 4, 1))), 2) & _
                Right("0" & Hex(AscB(MidB(arrbytOctet, 3, 1))), 2) & _
                Right("0" & Hex(AscB(MidB(arrbytOctet, 2, 1))), 2) & _
                Right("0" & Hex(AscB(MidB(arrbytOctet, 1, 1))), 2) & "-" & _
                Right("0" & Hex(AscB(MidB(arrbytOctet, 6, 1))), 2) & _
                Right("0" & Hex(AscB(MidB(arrbytOctet, 5, 1))), 2) & "-" & _
                Right("0" & Hex(AscB(MidB(arrbytOctet, 8, 1))), 2) & _
                Right("0" & Hex(AscB(MidB(arrbytOctet, 7, 1))), 2) & "-" & _
                Right("0" & Hex(AscB(MidB(arrbytOctet, 9, 1))), 2) & _
                Right("0" & Hex(AscB(MidB(arrbytOctet, 10, 1))), 2) & "-" & _
                Right("0" & Hex(AscB(MidB(arrbytOctet, 11, 1))), 2) & _
                Right("0" & Hex(AscB(MidB(arrbytOctet, 12, 1))), 2) & _
                Right("0" & Hex(AscB(MidB(arrbytOctet, 13, 1))), 2) & _
                Right("0" & Hex(AscB(MidB(arrbytOctet, 14, 1))), 2) & _
                Right("0" & Hex(AscB(MidB(arrbytOctet, 15, 1))), 2) & _
                Right("0" & Hex(AscB(MidB(arrbytOctet, 16, 1))), 2)
        End If
    End Function

    Function baConvertGuidToByteArray(ByVal strHexString)
        Dim fso, stream, temp, ts, n, szScrubbedString
        Set fso = CreateObject("Scripting.FileSystemObject")
        Set stream = CreateObject("ADODB.Stream")
        Const TemporaryFolder = 2

        temp = fso.GetSpecialFolder(TemporaryFolder) & fso.GetTempName()
        Set ts = fso.CreateTextFile(temp)
        szScrubbedString = Replace(strHexString, "-", "")

        'Write the GUID bytes to the temp file in the byte order AD stores them
        ts.write Chr("&h" & Mid(szScrubbedString, 7, 2))
        ts.write Chr("&h" & Mid(szScrubbedString, 5, 2))
        ts.write Chr("&h" & Mid(szScrubbedString, 3, 2))
        ts.write Chr("&h" & Mid(szScrubbedString, 1, 2))
        ts.write Chr("&h" & Mid(szScrubbedString, 11, 2))
        ts.write Chr("&h" & Mid(szScrubbedString, 9, 2))
        ts.write Chr("&h" & Mid(szScrubbedString, 15, 2))
        ts.write Chr("&h" & Mid(szScrubbedString, 13, 2))
        ts.write Chr("&h" & Mid(szScrubbedString, 17, 2))
        ts.write Chr("&h" & Mid(szScrubbedString, 19, 2))
        ts.write Chr("&h" & Mid(szScrubbedString, 21, 2))
        ts.write Chr("&h" & Mid(szScrubbedString, 23, 2))
        ts.write Chr("&h" & Mid(szScrubbedString, 25, 2))
        ts.write Chr("&h" & Mid(szScrubbedString, 27, 2))
        ts.write Chr("&h" & Mid(szScrubbedString, 29, 2))
        ts.write Chr("&h" & Mid(szScrubbedString, 31, 2))
        ts.Close

        'Read the file back as a binary stream to get a byte array
        stream.Type = 1
        stream.Open
        stream.LoadFromFile temp
        baConvertGuidToByteArray = stream.Read
        stream.Close
        fso.DeleteFile temp

        Set stream = Nothing
        Set fso = Nothing
    End Function

    </VBScript>

    Getting the most out of the redundancy native to AD when making applications "AD Aware"

     

    Many customers ask how they can best configure applications so that the applications can take full advantage of the fault tolerance built into Active Directory (AD).  While there is no one right answer to this question, there are several common strategies that are frequently used.  None of these strategies is without shortcomings, however, and those shortcomings deserve some discussion.

     

    To set the context, with any of these strategies the application developer (yes, we are talking about the other guy/gal, and not the AD guy) must handle the following scenarios in some fashion or another within their code:

    • Server inaccessible - Whether the server isn't online at all, or it goes down at some point after the connection was established
    • Concurrency - Since AD is loosely convergent, it may take several seconds to several hours (depending on the replication interval) for the data to replicate from one DC to another.  If there is the need to read the data immediately after it is written, or ensure consistency between multiple applications for any reason, all sensitive operations should occur on one box.

     

    Pointing all LDAP enabled applications to a DNS Alias - i.e. "activedirectory.contoso.com"

    • Pros
      • Easy for the developers to grasp and use.  Also a very low cost from the infrastructure perspective
    • Cons
      • Breaks Kerberos - To use Kerberos to authenticate against LDAP, the Service Principal Name "LDAP/requestedserver.contoso.com" is queried.  In this case, "LDAP/activedirectory.contoso.com" would be searched for and would not be found.  Kerberos authentication thus fails and the application then tries NTLM.  While NTLM will work, it is well known that NTLM is less secure than Kerberos and should thus be avoided unless absolutely necessary.
      • Enabling Kerberos by registering the ServicePrincipalNames "LDAP/activedirectory.contoso.com" and "LDAP/activedirectory" on all DCs is not the best way to fix this.  The reasons not to are a Kerberos discussion, and are out of scope for this conversation.
      • Costly to set up and maintain from a labor perspective.  Every time a DC is added to or removed from the environment, this must be updated.  Also, if a DC is taken down for an extended period, this DNS record should be cleaned up.
      • Breaks concurrency since there is no guarantee that any two applications that require consistency of the data will communicate with the same box.
      • Not site aware.  Depending on the administrators' configuration of the alias, the LDAP searches may traverse a WAN link.
      • Does not distinguish between Global Catalog and non-Global Catalog Domain Controllers.
      • Unpredictable selection of DCs

     

    Using the FQDN of the domain (i.e. contoso.com):

    • Pros:
      • Easy for the developers to grasp and use.  Also a very low cost from the infrastructure perspective
      • DNS 'A' records are automatically maintained by the Domain Controllers and are registered by the NETLOGON service
    • Cons:
      • Not site aware.  All DCs register here unless otherwise tuned (reference http://support.microsoft.com/kb/258213).
      • Does not distinguish between Global Catalog and non-Global Catalog Domain Controllers

     

    Using the FQDN of the domain to locate Global Catalogs (i.e. gc._msdcs.contoso.com):

    All the same concerns relating to the FQDN of the domain are relevant except that this record distinguishes a list of GCs.

     

    Using site specific SRV records:

    _ldap._tcp.SITENAME._sites.dc._msdcs.contoso.com

    _ldap._tcp.SITENAME._sites.gc._msdcs.contoso.com

    • Pros:
      • Ensures a DC or GC is located near the calling application.
      • DNS 'SRV' records are automatically maintained by the Domain Controllers and are registered by the NETLOGON service
    • Cons:
      • Requires more code.  Since this returns SRV type records, name resolution must be done separately and each record returned must be attempted individually to accommodate a system that might not be online at any point in time.
      • Accuracy is dependent on the efficiency of the AD site design.  However, this will affect clients above and beyond the current application.
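    For reference, a quick way to see what the site-specific records above return (Default-First-Site-Name and contoso.com are placeholders):

        nslookup -type=SRV _ldap._tcp.Default-First-Site-Name._sites.dc._msdcs.contoso.com
        nslookup -type=SRV _ldap._tcp.Default-First-Site-Name._sites.gc._msdcs.contoso.com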

     

    Using non-site specific SRV records:

    _ldap._tcp.dc._msdcs.contoso.com

    _ldap._tcp.gc._msdcs.contoso.com

    • Pros:
      • DNS 'SRV' records are automatically maintained by the Domain Controllers and are registered by the NETLOGON service
    • Cons:
      • See "Using site specific SRV records"
      • Not site specific.

     

    Using DsGetDomainControllerInfo:

    • Pros:
      • Provides extensive detail about the Domain Controller
    • Cons
      • Requires more code.  Since this returns a list of servers, name resolution must be done separately and each record returned must be attempted individually to accommodate a system that might not be online at any point in time.
      • Site awareness must be implemented in code, even though the site each DC belongs to is returned.
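    To get a feel for the data involved, I believe nltest surfaces similar information from the command line (contoso.com is a placeholder):

        nltest /dclist:contoso.com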

     

    Hard coding to a specific DC:

    • Pros:
      • Predictable
    • Cons:
      • Requires specific knowledge of the AD environment.
      • Should be a configuration option of the application.  We all know how many problems we can run into if we are hard coding values inside of an application and have to change them later.
      • Need to figure out a strategy to keep the application on line when the server goes down.

    Return of the Little Shop of Drivers


    It's been a while since I've posted.  Just over a year actually...  I have this long list of half-started posts, but somehow can never seem to find the time to finish them up.  However, with the exciting new release of Win7, I have managed to update my scripts to automatically add stuff to the WIMs for deployment.  As such, here is the updated script, replacing IMAGEX and PEIMG with DISM.  Hope this helps with some of your automation needs.

    Editorial comments:  I do like DISM much better as the syntax is a little easier, plus it searches directory hierarchies for drivers.  This makes it a little easier when I toss stuff into %DRIVERS_ROOT_PATH%, so I don't have to waste time figuring out which directory contains the actual drivers (as you may have noticed, there is sometimes a whole bunch of other stuff in driver downloads).

     @echo off
    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    ::Check Inputs
    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    IF "%1"=="" (
    Echo Enter the directory root for the drivers and packages to add to the image.
    GOTO END
    )

    IF "%2"=="" (
    Echo Enter WIM file name.  This must be in the root of the folder structure named by the first parameter.
    GOTO END
    )

    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    ::SET Variables
    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    SET REFERENCENAME=%1
    SET MOUNTPOINT=D:\FOO\%REFERENCENAME%
    SET FILES_ROOT_PATH=D:\%REFERENCENAME%
    SET IMAGEFILE=%FILES_ROOT_PATH%\%2
    SET DRIVERS_ROOT_PATH=%FILES_ROOT_PATH%\Drivers
    SET PACKAGES_ROOT_PATH=%FILES_ROOT_PATH%\Packages
    SET LOGS_ROOT_PATH=%FILES_ROOT_PATH%\Logs
    ::SET WIN_AIK_INSTALL_PATH=C:\Program Files\Windows AIK\Tools
    ::echo %WIN_AIK_INSTALL_PATH%

    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    ::Ensure needed directories exist and are ready to be used
    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    if not exist %MOUNTPOINT% (md %MOUNTPOINT%) ELSE (DISM /Unmount-Wim /MountDir:%MOUNTPOINT% /discard)
    if not exist %LOGS_ROOT_PATH% (md %LOGS_ROOT_PATH%) ELSE (del /s /q %LOGS_ROOT_PATH%>NUL)

    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    ::Identify number of images in WIM and process each image
    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    for /f "tokens=1,2 delims=: " %%i in ('dism /get-wiminfo /wimfile:%IMAGEFILE%') do if "%%i"=="Index" SET IMAGE_COUNT=%%j
    Echo This WIM contains %IMAGE_COUNT% image(s).
    For /l %%i in (1,1,%IMAGE_COUNT%) do call :update %IMAGEFILE% %%i
    GOTO END

    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    ::Process per image steps
    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    :update
    Echo Updating %1 - Image #%2
    DISM /Mount-WIM /WimFile:%1 /Index:%2 /MountDir:%MOUNTPOINT%
    DISM /Image:%MOUNTPOINT% /Add-Driver /Driver:%DRIVERS_ROOT_PATH% /recurse /ForceUnsigned
    ::for /f %%i in ('dir /ad /b %PACKAGES_ROOT_PATH%') do Call :InstallPackage %%i %2
    DISM /Unmount-Wim /MountDir:%MOUNTPOINT% /Commit
    goto :EOF

    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    ::Install a specified package
    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    :InstallPackage
    Echo Installing Package %PACKAGES_ROOT_PATH%\%1
    ECHO DISM /Image:%MOUNTPOINT% /Apply-Unattend:%PACKAGES_ROOT_PATH%\%1\%1.xml /Log-Path:%LOGS_ROOT_PATH%\%2-%1>NUL
    IF ERRORLEVEL 1 ECho       ERROR:  Couldn't Install Package "%1"
    GOTO :EOF

    :END
    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    ::Clean up variables
    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    SET IMAGE_COUNT=
    SET REFERENCENAME=
    SET IMAGEFILE=
    SET MOUNTPOINT=
    SET FILES_ROOT_PATH=
    SET DRIVERS_ROOT_PATH=
    SET PACKAGES_ROOT_PATH=
    SET WIN_AIK_INSTALL_PATH=

    Cost Efficiency of LED (or CFL) Light Bulbs


         This week I worked on a project with one of my Microsoft peers and I helped him install some energy usage monitoring equipment in his house.  Without being specific about the solution, I must say it is pretty cool.  As a result of installing this and then running around his house flipping electrical devices on and off to see how they impacted the data being returned, we started talking about cost efficiency of various options.

     Notice I use the phrase, cost efficiency.  While I try to be green wherever I can, sometimes it just doesn’t make financial sense, and it definitely isn’t easy (check out the song Bein' Green).  Often the information needed to figure this out is really hard to find.  As a result, I broke out Excel and spent several hours working on some numbers.  Herein lies the story of the analysis as to whether or not LED lights make sense.  I share this so that others can use what I learned or replicate the methodology to make more cost efficient purchases.  This methodology will also work for other replacements, say a refrigerator or an AC unit.

     

    Hold on to your seat, this is about to get exciting!

    With any analysis, we make certain assumptions and require inputs and outputs to estimate the impact.  These are discussed below.

    ASSUMPTIONS:

    • While there is definitely an energy difference in the whole process of mining raw materials through manufacturing to putting the product on the shelves, the analysis assumes those energy costs are factored into the price of the bulb.  Additionally, it doesn’t cover incidentals like fuel costs to get the bulb from the store to the house.  Thus, in an effort to simplify the analysis, this only looks at the costs of the bulbs and electricity.
    • Today’s prices are fixed: the price of energy and light bulbs will not change over the duration of the analysis.  Though this does not represent reality, the results can easily be adjusted.  Increases in energy prices, decreases in LED bulb costs, and increases in incandescent bulb costs will increase savings and decrease the break even point.

    INPUTS:

    • The cost, estimated service time (usage dependent life time), and wattage of the energy efficient option (LED/CFL).
    • The cost, estimated service time, and wattage of the comparison option (incandescent bulbs).
    • The energy cost, charged per Kilowatt-Hour (KWH).
    • The hours of use per month.
    • The Cost Of Capital.

    OUTPUTS:

    • The time to break even on the investment.
    • The total savings over the life of the investment.

     

    Definition – Cost Of Capital:  In the context of this discussion, this means the interest rate that could be earned by using the money elsewhere.  This essentially affects how long it takes to break even and how much savings you end up with when the bulb finally wears out.  For two examples, let’s say you have 100 dollars today to spend and the LED bulb costs $80 more than the incandescent:

    • Option 1 – You can either buy the incandescent light bulb or invest in a savings account that earns 10% interest.  If you spend the $80 difference on the LED today, you are costing yourself the ability to earn 10% interest on the money.  This means you are losing $8 per year of income.  This sort of means you are “borrowing” the money at 10% until you break even, and then you start earning 10% on what you save from that point forward.
    • Option 2 – You have no intention of saving the money.  This essentially means that you have an interest rate of 0%.  At the end of the life of the bulb all we are measuring is how much more cash you had in your pocket to spend as a result of buying the LED today.

     

    Sample Data For Inputs (These numbers come from my favorite big box home supplies retailer website):

    • LED Bulbs cost $60 plus tax, about $65, use 12 watts, and last 50,000 hours.  (For the numbers, I looked at PAR-30 floods, as this is what we happened to be comparing.)
    • Incandescent bulbs cost $8 plus tax, about $8.50, use 75 watts and last 2,000 hours.
    • These bulbs are in the home office which gets a lot of use, to the tune of 12 hours or so a day, 22 or so days a month.  This is a total of 264 hours per month.
    • The cost of electricity is about $0.11 per KWH in this area.  This includes the distribution and usage charges.

     To illuminate the math behind how much we get charged for the energy to use the light, let me show how it is worked out.  The energy cost of an LED bulb used for 1 hour at 12 watts (converted to 0.012 KW) at $0.11 per KWH is $0.00132 (1 hr. * 0.012 KW * $0.11 per KWH = $0.00132).  If the bulb is used for 100 hours a month, that means that it costs $0.132 per month to use.  For the incandescent bulb, just replace the 0.012 KW with 0.075 KW.  This results in a $0.00693 savings per hour of use ($0.00825 per hour incandescent - $0.00132 per hour LED).

     

    Other considerations:

     The longer service time of the LED light means that if an incandescent bulb is used instead, the incandescent bulb will have to be replaced multiple times over the life of the LED.  The math:  50,000 hours of life out of the LED divided by 2,000 hours of life out of the incandescent means the incandescent bulb gets replaced 25 times (50,000 hrs / 2,000 hrs = 25).  This would cost $212 in incandescent bulbs over the 50,000 hours of use ($147 in total bulb-cost savings).  On bulb cost alone, the break-even point is the $65 LED divided by $8.50 per incandescent, or about 7.64 changes of the light bulb ($65 / $8.50 = 7.64 changes).
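    As a rough sanity check on the Excel model (my own approximation, ignoring cost of capital and spreading the incandescent bulb cost evenly over its 2,000 hour life):

        Break-even hours ≈ (LED price - incandescent price) / (incandescent bulb cost per hour + energy savings per hour)
                         = ($65 - $8.50) / ($8.50 / 2,000 hrs + $0.00693 per hr)
                         ≈ 5,050 hours

    At 264 hours of use per month that is roughly 19 months, which lines up with the 1.6 year, 0% cost of capital figure below.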

         Note:  Time has a significant impact on whether or not the investment pays off.

    • 10 hours of use per month:
      • That would take 1528 months or 127 years to break even on the bulb purchase alone (2000 hours * 7.64 bulbs / 10 hours per month).
      • That would take 5,000 months (417 years) to use up the LED and realize the total savings.
    • 730 hours of use per month (365 days / 12 months * 24 hours = 730 hours per month average) – on all the time.  Just replace the number of hours of use in the previous equations to do the math here.
      • That would take 21 months, or less than 2 years.
      • The total life of the LED would be about 69 months, or less than 6 years to realize the savings.

     

    Combining all of this (read as “Plugging the data into Excel”):

    We’ll assume that this is in the home office and the light is used 12 hours a day, 22 days a month (264 hours).

    Option 1 – Cost of Capital is 10%

    • Break-even point = 1.75 years
    • Total savings at end of life of the LED bulb = $1,005
    • Total lifetime of the LED bulb = 15.8 years

    Option 2 – Cost of Capital is 0%

    • Break-even point = 1.6 years
    • Total savings at end of life of the LED bulb = $484
    • Total lifetime of the LED bulb = 15.8 years

    Q.  Why does it take longer to break even with a higher capital cost, but you save more over the same period?

    A.  This is the power of compound interest.  This is another topic.

     

    So all this math is really fun, but what does this mean?

    Let’s start with the amount of usage of the bulb.  Due to the cost of energy, the greater the difference in the wattage AND the more you use the bulb, the greater the impact on savings and pay off.  Check out this picture.

     

    Using the same numbers as Option 1:

    image

    As can be seen from the above chart, there is a really steep curve on breaking even.  Furthermore, if the bulb is used less than about 43 hours a month, the purchase will never break even.  Also, from the picture, we can see that in order to break even in a reasonable time frame (let’s say 5 years), the bulb needs to be used at least about 100 hours per month (about 3.5 hours per day, every day).

     

    Using the same numbers as Option 2:

    image

     

    As we saw in the math, if you don’t save the money, you break even a lot faster (steeper curve), though the break even times in low usage are ridiculously long (who’s going to use a bulb for 400 years?).  This means that in order to break even in the same 5 year time frame, you need to use the light about 80 hours a month (2.75 hours per day, every day).

     

    In the longer term (How much does this actually save me?):

    As we can see, payback times are significantly impacted by the usage.  This also leads to an impact on the savings from using an LED bulb over the lifetime of the bulb.  We will have 4 scenarios we will look at here:

     

    • Scenario 1 – Usage: 280 hours per month (Home Office estimate); Cost of Capital: 0%
    • Scenario 2 – Usage: 280 hours per month; Cost of Capital: 10%
    • Scenario 3 – Usage: 730 hours per month (Light is on all the time); Cost of Capital: 0%
    • Scenario 4 – Usage: 730 hours per month; Cost of Capital: 10%

    Note:  Where the red (dashed) line crosses the blue is the estimated lifetime of the bulb.  The blue line predictions to the right of that intersection are what would happen if you keep up with the same savings strategy.

     

    Scenario 1 – LED lasts 15 years and saves $479 over the course of the 15 years:

    image

     

    Scenario 2 – LED lasts 15 years and saves $944 over the course of the 15 years:

    image

     

    Scenario 3 – LED lasts 5 years and saves $432 over the course of the 5 years:

    image

     

    Scenario 4 – LED lasts 5 years and saves $537 over the course of the 5 years:

    image


    Summary:

    • LEDs never pay off for infrequently used lights.
    • Regardless of your cost of capital, given current costs, it appears that the lights need to be used about 80 to 100 hours per month in order to break even in a reasonable time frame.  Reasonable to the author is 5 years.
    • The more the light remains on, the more money LEDs save over time.
    • Narrowing of the price difference between LED and incandescent bulbs and increases in energy prices will do three things:
      • Increase the savings over time.
      • Reduce the break-even time frames.
      • Make LED replacements viable for low-usage lights (less than 80 to 100 hours per month).

     

    The spreadsheet all this work was done in is attached to this post so that others can play with the numbers as well.

    More robust options for locating DCs

     

    While this is a blog on Technet and reasonably should be targeted towards infrastructure topics, I am including a little bit of development context in relation to Active Directory (AD).  I share this as we often observe a barrier in communication between the administrators of AD and the developers who write code against AD.  Specifically, this is for the benefit of AD administrators so that conversations between AD administrators and developers can be a little more constructive.

     

    Oftentimes my peers and I have observed applications take outages because of coding practices used when connecting to AD.  Unfortunately, in general, AD administrators do not have enough development background to provide usefully precise recommendations to developers on how to use code to leverage AD.  Thus, I hope this post and the previous post on strategies for locating DCs will allow AD administrators to provide some reference guidance to help developers identify options for being a little more dynamic in their ability to locate DCs.

     

    Binding to RootDSE:

    Moving forward, in talking about serverless binds, I'm surprised at how frequently the question "How do I locate a Domain Controller (DC) or Global Catalog?" is encountered.  This is usually in the context of diagnosing an application that is hard-coded to a DC, and the application fails because a DC goes offline.  In one of the previous blog posts I spoke about the various pros and cons of  using different methods to locate a DC.  That's doing it all the hard way, but useful if more control is needed.  ADSI (and System.DirectoryServices) and WLDAP32 (and System.DirectoryServices.Protocols) have some really nice functionality to take the work out of figuring out which DC to connect to.

     

    There is not much to add that this article does not cover, at least for ADSI:  Serverless Binding and RootDSE (Windows).  Some key points from the article:

    • If possible, do not hard-code a server name.
    • In this case, a default domain controller from the domain that the security context of the calling thread is in will be used.
    • If a domain controller cannot be accessed within the site, the first domain controller that can be found will be used.

     

    Since this information is a little hard to find and scattered across multiple pieces of content, for non-ADSI binds populate the method/function parameters as follows to bind to RootDSE:

     

    Using the above to communicate with a specific server for consistency purposes:

    As mentioned in the previous post, the only reason (at least that I've heard so far) to code against a specific DC is when there is a need for consistency.  An option for many scenarios where consistency is needed, rather than hard-coding a specific DC, is to use RootDSE to find a server.  Specifically, read the data in the "dnsHostName" attribute from the RootDSE object returned in the search.

     

    If the data in "dnsHostName" is used as the server name for all future connections, this will provide the consistency needed for most applications, while allowing changes to the environment to be much more dynamic.  Though from here it gets more complex, depending on how the application handles errors from the DC due to inaccessibility.  This has more to do with application design questions and moves beyond the scope of this article.  But at the very least restarting the application will allow the application to locate an available/online DC and resume functioning.
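    For a quick illustration from the scripting side, here is a minimal PowerShell/ADSI sketch of this pattern.  The attribute names are the real RootDSE attributes; how the application caches and reuses the DC name afterward is up to its design.

    # Minimal sketch: serverless bind to RootDSE, then reuse the DC it returned for consistency.
    $rootDse   = [ADSI]"LDAP://RootDSE"
    $dcName    = [string]$rootDse.dnsHostName            # the DC the locator picked for us
    $defaultNC = [string]$rootDse.defaultNamingContext

    # All subsequent binds target that same DC so reads/writes stay consistent.
    $domainRoot = [ADSI]("LDAP://$dcName/$defaultNC")
    $domainRoot.distinguishedName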

     

    Additionally, this will allow the application to port between domains more easily, especially given bullet number two above.   Just by changing the security context the application is running in, a DC in a separate domain will be located.

     

    This applies to many applications, but not all.  There are some exceptions, though those usually have to do with using AD metadata for synchronization purposes.

     

    Java:

    My familiarity with Java is limited, so I can't provide many specifics.  But according to the documentation, LDAP Naming Service Provider for the Java Naming and Directory Interface (JNDI), section 6.2 specifies "The LDAP service provider supports the use of DNS configuration for automatically discovering the LDAP service."  So a similar approach is possible.

    LED By Excitement


    Until yesterday, I hadn’t needed to replace a light bulb in my house since I wrote the post with my analysis of the cost efficiencies of LED vs. incandescent bulbs.  Historically, though, I’ve tended to purchase CFLs whenever possible, more because I’m the sort of guy who forgets to replace one light bulb in the room until both blow out.  Thus, due to the longer service times of CFLs, I am left in the dark much less frequently.

    This time though, I figured I would look at the LED options.  As I hunted around, the thing that particularly caught my eye was that some manufacturers are now making LEDs in candelabra shapes.  As I like to leave the lamp outside the house on during all night time hours, this is probably my most irresponsible use of electricity, being turned on 12 hours a day on average.  The lamp I currently have does have a dimmer that kicks it into high gear when motion is sensed, but I need to replace the light because that motion sensor/dimmer timer thing is broken.  It’s spotty at best, and regardless of what duration the lamp is set at, it only turns on bright for about 15 to 30 seconds.  Not to mention those bulbs are burning out all the time.

    Unfortunately, I forgot to bring the spreadsheet attached to my previous post to my favorite (read favorite as most conveniently located) big box retailer with the biggest selection of light bulbs.  Therefore, I went home (wondering how much the two trips are costing me in gas and chiding myself for not planning ahead) and started doing some math.  I estimate that at 365 hours per month, at $0.11 per KWH, the outside light costs me about $58 per year.  I know it is less than that because it dims, but I don’t know how much wattage it uses when dimmed, so I’m starting with the worst case scenario.
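    As a quick back-of-the-envelope check of that estimate (the 120 watts assumes the three 40-watt candelabra bulbs listed further down):

    # Rough annual cost of the outside light at full brightness (worst case, no dimming).
    $watts = 120           # assumption: three 40 W candelabra bulbs
    $hoursPerMonth = 365
    $costPerKwh = 0.11
    ($watts / 1000) * $hoursPerMonth * 12 * $costPerKwh    # ~ $58 per year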

    And then I saw this as I compared the LED candelabras to the incandescent I currently use:

     

    Light Output (lumens) by bulb:

    • 25w – 300 lm
    • 40w – 540 lm
    • 2.5w LED – 30 lm
    • 3w LED – 30 lm

     

                    Hrmm…  Cut my light output by a factor of 10 by cutting my wattage by a factor of 10.  Great for cost efficiency, but that pretty much defeats the purpose of why I have the lights out there.

                    Apparently it was “Time to change the paradigm”, as we like to say in the corporate world when we think marginally outside the box.  So I fundamentally re-evaluated how I was looking at this problem.  Since I was using 3 bulbs at about 540 lumens each, I decided my goal should be to see how to produce about 1600 lumens as cheaply as possible.  After a little research, I found that this was the amount of light output by a 100W bulb.  Note, a lot of 100W bulbs do less than this, so I had to hunt around to find one that did, but that came at the expense of a shorter service life.

     

                    Now that I had the information I needed to compare between bulbs, it is all about plugging in numbers to the spreadsheet.  Here are the important details:

    • Incandescent – $0.97, 100W, estimated service time 750 hours
    • Current Candelabra – $2.47 (3 bulbs), 120W, 150 hours
    • CFL (Dimmable) – $8.97, 23W, 10,000 hours
      Assumption:  Dimmed = ~12W, and as an exterior light it will be mostly dimmed.
    • CFL (Non-Dimmable) – $1.99, 23W, 10,000 hours
    • Assume cost of capital is 0% and cost per KWH = $0.11

     

    Plugging all the data into the spreadsheet, I came to the following conclusions:

    • Non-Dimmable CFL (payback at end of life of the first bulb)
      • Compared to 100W Incandescent – break even in 0.02 years (7.3 days) and saves $73 over the almost 2 years the bulb should last.
      • Compared to the candelabra – immediate (heck, the CFL bulb is cheaper) and saves $94 over the 2 years of the bulb.
    • Dimmable CFL (payback at end of life of the first bulb)
      • At 23 watts for 12 hours a day
        • Compared to 100W Incandescent – 0.19 years (69 days), $66 in savings
        • Compared to candelabra – 0.12 years (43 days), $87 in savings
      • At 12 watts for 12 hours a day
        • Compared to 100W Incandescent – 0.17 years, $75 in savings
        • Compared to candelabra – 0.11 years, $96 in savings

    The conclusion becomes: in order to maintain the same amount of light, I should replace the lamp with one that will take a CFL.  Over the long run, the dimmable CFL does pay off, but only to the tune of about $9 every 2 years.  Since I get more light out of the non-dimmable (1600 lumens) than the dimmable (1400 lumens), I will need to figure out whether that roughly $9 saved is worth the reduced amount of light, or find a lamp where the “2-Level Lighting” feature can be disabled so that I can maximize my flexibility in selection of bulbs.  In reality, the dollar value of the payback isn't all that significant even though the savings relative to the spend are awesome, but at least I don't feel so bad replacing my currently broken lamp as I know I'll make the money back in about 2 years.

    Thanks to the massive savings predicted by my analysis, this gives me a little freedom to invest in researching the various solutions available.  As I think back to my scientific method and designing experiments to test my hypothesis that I need 1600 lumens to adequately light the area, several historical examples come to mind.  As such, I have chosen to leverage one of the best historical examples of decision making by process of elimination.  In this regard, I will follow the wisdom of Goldilocks:  I will try both the dimmable CFL and the non-dimmable until I find one that is “just right” (see the reference at the end for the complete history of her experimental work).

    References:  Dramatic Reader for Lower Grades by Florence Holbrook - Project Gutenberg

    Making the World Greener One Monitor at a Time - Reset SCOM Monitors Enmasse


    Updated 10/19/2014 – Put the script on TechNet Gallery.  Link:  ResetAllMonitorsOfASpecificType.ps1

    Recently, I’ve had the pleasure of tuning a new SCOM implementation.  Like all new implementations of monitoring software, there were a lot of alerts coming from the systems in the environment.  As I went through and tuned the installed Management Packs, I realized that resetting the individual monitors (for those that don’t recalculate automatically) once the override is implemented is really, really, really tedious.  Especially when the systems number in the hundreds or thousands.

    I looked at “GreenMachine”, but realized that it was too much of a brute force approach and I didn’t want to reset the monitors that we had not yet troubleshot, so I developed the below PowerShell script to automatically iterate through all systems experiencing the issue and reset the monitor.

    The PowerShell script to do so is attached to this post, just download and rename it to ResetAllMonitorsOfASpecificType.ps1.  Note:  You will need PowerShell and the Operations Console and Shell installed on the machine you run this from.

    Note:  I updated this heavily the day after I released it.  The old approach would only work if alerts existed for the monitors, so if someone closed the alerts, the monitor never got reset.  Additionally, I ran into an odd quirk where, when run from the Operations Manager Shell, the queries to SCOM returned 2 items where there should only have been one (I had only tested under the regular PowerShell console and in the ISE).  This is fixed, though when run in the Operations Manager Shell it will "close" the alert multiple times.  Without spending a whole bunch of time dealing with this quirky behavior, there is nothing to be done other than letting it reset the monitor multiple times.  However, this is PowerShell, so if anyone wants to take that on before I get around to messing with it, I'll be glad to update this script with the info.

    Updated content as of 4/5/2011 following this.  I've also updated the text file in hopes that it works this time.
    Script Syntax (just copy and paste to a file with a .PS1 extension, I hope it doesn't mess up the formatting):

    param (
        [Parameter(Mandatory = $true)]
        [string]$rootMS,
       
        [Parameter(Mandatory = $true)]
        [string]$MonitorDisplayName
    )

    #adding SCOM PSSnapin 
    if ((Get-PSSnapin | where-Object { $_.Name -eq 'Microsoft.EnterpriseManagement.OperationsManager.Client' }) -eq $null) 
    {
        "Adding Operations Manager Snapin to session"  
        Add-PSSnapin Microsoft.EnterpriseManagement.OperationsManager.Client -ErrorAction SilentlyContinue -ErrorVariable Err
    }

    if ((Get-PSDrive | where-Object { $_.Name -eq 'Monitoring' }) -eq $null) 
    {
        "Mapping PSDrive for OpsManager" 
        New-PSDrive -Name:Monitoring -PSProvider:OperationsManagerMonitoring -Root:\ -ErrorAction SilentlyContinue -ErrorVariable Err | Out-Null 
    }

    #Connect to rootMS 
    Set-Location "OperationsManagerMonitoring::" 
    New-ManagementGroupConnection -ConnectionString:$rootMS| Out-Null
    Set-Location Monitoring:\$rootMS 

    #Based on the display name in object health explorer, get the monitor identity
    $FindMonitorFilter = 'DisplayName LIKE ''' + $MonitorDisplayName + ''''
    $Monitors = @(Get-Monitor -Criteria $FindMonitorFilter)
    If ($Monitors.Count -eq 0)
    {
        "Couldn't find the Monitor definition.  Exiting"
        Exit
    }

    ForEach ($Monitor in $Monitors)
    {
        #Get the class the monitor applies to
        $MonitorClass = @(Get-MonitoringClass -Id $Monitor.Target.Id)

        #Get the list of monitors with the display name that are in Error and Warning state
        $MonitoringObjectFilter = "(HealthState = 2 OR HealthState = 3) AND IsAvailable = 'True'"
        $ActiveMonitors = @(Get-MonitoringObject -MonitoringClass $MonitorClass[0] -Criteria $MonitoringObjectFilter)
        "Found '" + $ActiveMonitors.Count + "' active monitors."

        If ($ActiveMonitors.Count -gt 0)
        {
            #loop through the list of degraded agents and perform actions described within the loop
            Foreach ($ActiveMonitor in $ActiveMonitors)
            {
                #Output current entity working on and monitor being worked on.
                "Resetting Health State on '" + $ActiveMonitor.FullName + "'"

                #Reset the monitor (assume that the monitor can't be recalculated since that is easier to code)
                $ActiveMonitor.ResetMonitoringState($Monitor.Id) | Out-Null
            }
        }
    }
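
    For reference, a typical invocation looks like the following; the management server and monitor display name are placeholders to swap for your own values.

    # Example only - substitute your management server and the monitor's display name.
    .\ResetAllMonitorsOfASpecificType.ps1 -rootMS "scom01.contoso.com" -MonitorDisplayName "Logical Disk Free Space"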

    My Latencies Are Too High!


    Yes it has been a VERY long while since I posted.  The list of what I want to post keeps getting longer and longer, as does the queue of pending requests waiting for access to me.  As a result of the long queue wait times the average access times for anything I want to do are unacceptably high.  I need to figure out how to better parallelize what I need to do.

    What I choose to post here is information where I had a lot of difficulty getting the answers to.  I.e., it isn't floating around out there at all, or what is out there doesn't answer some of the specific questions I needed answered.  The challenge is that the format in which the answer works for me is not necessarily something I can just cut and paste here.  So it has to wait until I have time to polish it, as I don't have Ralph Macchio hanging around to help (wax on, wax off).

    Moving on to the topic of the post, if you didn't catch the double entendre in the title and the first paragraph, this post is about storage performance and design options.  This is from a scenario where fault tolerance was on each individual LUN and, for capacity management, the storage was being allocated in multiple small LUNs and concatenated at the server.  This came about as a result of production outages due to poorly performing storage.  In working with the storage and SQL teams, this was a test that was run in response to the argument that “we don’t see performance improvements in striping over spanning”.  Which was absolutely true, as in the test environment they were measuring only response times as the measurement of “performance”, not the total throughput needed.  All us storage aficionados know that it is throughput (IOPS) demanded vs. the ability of the storage to deliver it which drives response times.

    As an analogy, think of trying to get the high school football team to a game.  Let’s say it takes an hour to drive to the game.  Whether 1 mom/dad/coach takes a car with 3 of the players or one bus is taken with all the players, it still takes an hour to drive to the game.  This means multiple trips must be made or multiple drivers have to drive.  Saying that the “trip” isn’t any faster doesn’t negate selection of the bus as the best option.

    In short the conclusions below are really a reiteration of what we already know, more spindles exercised equals more throughput.  Spanning was essentially throttling full performance throughput of the storage to just the LUN which the active data was on.

     

    For the critics out there, I know this isn’t real world and read/write ratios and higher costs of writes as well as aggregation of writes at the array controller impact total throughput.  The goal here is to explore the relationship between load driven, response times, where throughput maximizes while minimizing complexity of the test harness.  The relationship is what is important and will be consistent even if the storage configurations change.  (Note:  This is in the comments at the end, I put it here for all those who won’t read all the way to the end before posting feedback).


    Striping vs. Spanning

     

    Winner

    Striping!!!

    Return Of The Analysis

    Overview

    In a spanned set, data is only read from or written to the subset of disks which hold the data needed. If all data is consumed all the time, this will eventually balance Input/Output (IO) as the storage fills. In the meantime, and for scenarios where only a subset of data is accessed (think the most recent month of 5 years of historical data in a database), only the spindles containing that data will be used.

    For reference, minimal load to a fully loaded, but not overloaded disk, should respond to the operating system in 4 to 6 milliseconds (ms) on average, depending on the disk speed. Disk speeds will not go below 4 to 6 milliseconds due to physical limitations of the mechanical device. Therefore, as the IO requests from the Operating System and Application arrive at a rate greater than the storage can service the requests, said IO requests begin to wait in the queue. Thus the more requests that can not be serviced immediately, the greater the wait times. Degraded is considered to be in the 15 ms range, Critical in the 20 ms range.
    NOTE: Cache will lower disk times, but caches WILL become saturated under sustained load in excess of what the storage can support, and as such should not be included in planning for the overall supported load. Instead they should be looked at as an accelerator under normal load conditions and a buffer for transient load conditions. These tests were done WITH a cache on the SAN, so even for those who believe caches magically fix all evils, it can be observed here that there are still limits even with a SAN and cache.

    Note: ALL data below was configured on the same server on the same 3 LUNs, only the partition type was changed.

    Legend

    • Red Line is the disk latency discussed previously. (Avg sec/Read, Avg sec/Transfer, Avg sec/Write from PhysicalDisk or LogicalDisk performance counters)
    • Blue Line is the number of operations per second performed by the storage (Reads/sec, Transfers/sec, Writes/sec from PhysicalDisk or LogicalDisk performance counters)
    • Green Line is the number of operations outstanding. This is the independent variable and is controlled by the test harness (iometer). ("Current Disk Queue Length" from PhysicalDisk or LogicalDisk performance counters)

    Spanning

    Below (Figure 1) is the overall performance picture of the spanned system. As it is quite small, a zoomed version will appear near the relevant text as specific areas are called out.

    Figure 1

    Observation #1

    • Red line (latency) and green line (load generated) increase at the same rate.
    • This supports the previous statement that disk wait times are correlated with the amount of IO being asked of the underlying storage.
    • Real world application and idiosyncrasies:
      • This is why "Current Disk Queue Length" is suggested as a counter to gauge whether or not storage is performing well. "Current Disk Queue Length" fails in that it is a point-in-time counter and can not accurately represent the median load over a given time period. Thus, when processing hundreds of IO per second (IOPS), one sample every second or greater doesn't give a very good picture of the aggregate trend.
        NOTE: This problem scenario can be observed in the drops in the green line. Even under structured loads this data is skewed.
      • One idiosyncrasy is that certain scenarios can cause the IO to be delayed "in flight" (somewhere between it exiting the queue and returning from the underlying storage). Low "Current Disk Queue Lengths" and high latencies can hint at this scenario. Due to the inaccuracies in "Current Disk Queue Length", the correct tools to confirm this scenario are native ETW tracing and tools, such as XPerf, that consume said data.

    Observation #2

    • Blue Line (IOPS serviced) peaks and stays flat regardless of how much more load the test harness attempts to push.
    • Real world application:
      • Once you're done, you're done.
      • Can't squeeze blood from a stone.

    Observation #3

    • The Red Line (latency) in picture to the right (Figure 2) is scaled differently (1000x) so the actual values can be seen more clearly.
    • As the latencies approach 20 ms, the maximum throughput approaches the absolute maximum.
    • Reference the previous picture where the Blue Line (throughput) maxes out pretty close to the left hand side of the chart at about 800 IOPS.

    Observation #4

    • From the pictures (Figure 1 and Figure 3) below, performance maximizes at an average of about 800 IOPS

    Figure 3

     

    Striping

    This is the same 3 LUNS reconfigured as a stripe.

    Figure 4

    Observation #1

    • Red Line (latency) increases at a rate roughly equivalent to one-third the rate of increase of the Green Line (load).
    • Again, this supports the previous statement that disk wait times are correlated with the amount of IO being asked of the underlying storage. The fact that the correlation isn't one to one is due to the fact that the OS sees 3 "physical disks" (each LUN is presented as a physical disk from the perspective of the OS) under this logical disk and the load is distributed across said disks. Thus each "physical disk" only sees one-third of the load, in turn only suffering one-third of the degradation
    • Real world application and idiosyncrasies:
      • In addition to previously mentioned…
      • The performance at the logical disk level can be very different than the OS "physical disk" level. By spreading load across multiple "physical disks" the logical disk gains the advantages of the best and minimizes the consequences of the worst.

    Observation #2

    • Blue Line (IOPS serviced) climbs more slowly, but still eventually plateaus regardless of how much more load the test harness attempts to push.
    • Real world application:
      • Still can't get blood from a stone.

    Observation #3

    • The storage has to be pushed much harder to saturate it. In the spanned scenario, saturation was reached at about 16 pending IOs outstanding. In the striped scenario, this maxed out at about 48 pending IOs outstanding.
      Notice this is a factor of 3 greater than the spanned scenario. This should not be a surprise.
    • There appear to be 2 levels of saturation. One from 10 ms to 20 ms latencies and one from 20 ms and up. However, the higher level is much more volatile and "fails" down to the lower level of saturation quite often. This artifact should not be factored into scaling decisions.
    • Same as above, changed the scaling on this picture (Figure 5) so the latency value is easier to read.

    Observation #4

    • From the picture below (Figure 6) the throughput maxes at about 2500+ IOPS.
    • This is a little more than 3x the spanned scenario and should not be a surprise.

    Figure 6

    Testing strategy

    Tools

    IOMeter – www.iometer.org

    Perfmon – included in Windows OS

    Disk Manager – included in Windows OS

    Configuration

    • All Microsoft best practices were followed for storage configuration.
      • Partitions were aligned
      • File system used 64K clusters (as per SQL storage best practices)
    • 3 LUNs were configured in A) Span and B) Stripe
    • IOMeter
      • Access configuration – 64 KB IO sizes, 100% Random Read IO
      • Test Setup
        • Run Time – 60 seconds
        • Cycling Options – "Cycle # Outstanding I/Os – run step outstanding I/Os on all disks at a time"
        • # of Outstanding I/Os – Start – 8, End – 256, Step 8, Linear Stepping
      • Iobw.tst (test file) was intentionally created to mostly fill one LUN (~32 GB) worth of space. This was selected to allow focus of analysis to be on the scalability of using one "physical disk" vs. using multiple "physical disks"
    • Perfmon – The above IOMeter will create a test run approximately 35 minutes in length. Thus a performance counter log was created to automatically stop after a similar period (so the test could run unattended).
      • Collect all PhysicalDisk and LogicalDisk counters.
      • Sample in 10-second intervals - thus there are multiple data points for each IOMeter step and the steps can be observed (a counter-collection sketch follows this list)
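
    For anyone who wants to reproduce the counter collection without building a data collector set by hand, here is a minimal PowerShell sketch. The counter paths are the standard PhysicalDisk counters referenced in the legend; the output path, interval, and sample count are just example values.

    # Collect the three counters used in the charts above at 10-second intervals.
    # 210 samples at 10 seconds each covers roughly the 35-minute IOMeter run.
    $counters = '\PhysicalDisk(*)\Avg. Disk sec/Transfer',
                '\PhysicalDisk(*)\Disk Transfers/sec',
                '\PhysicalDisk(*)\Current Disk Queue Length'

    Get-Counter -Counter $counters -SampleInterval 10 -MaxSamples 210 |
        Export-Counter -Path C:\PerfLogs\SpanVsStripe.blg -FileFormat BLG   # path is a placeholder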

    Comments

    This was done to demonstrate the change in performance between striping and spanning, as well as illustrate the impact on changing load. As such, the IO profiles were simplified to exclusively be random read IO in order to present the worst case scenario (as random read IO has a very low cache hit rate) and minimize variability in results due to a cache optimizing writes to/from the storage. Therefore, the maximum throughput demonstrated in this test does not reflect the impact of write IOs. As a result, total throughput numbers will not be accurate for a real world production scenario, however the relationship between striping and spanning will remain similar. In short, the behavior pattern is able to be generalized, while the raw throughput numbers are not.

    Additionally, scoping the test file size to reside on only one "Physical Disk" is not applicable to all scenarios. However, there are many scenarios where, due to data locality, this can easily be highly representative of real-world access.

    Troubleshooting SCCM Software Update Deployment Package distribution due to missing directories


    I have SCCM running in my lab and ran into an issue on several occasions where the Deployment Package I created for the Software Updates started to error out when updating the Distribution Points.  In reviewing the package distribution manager log I would see the following message:

    Severity Type Site code Date / Time System Component Message ID Description
    Error Milestone 002 01/09/2013 7:40:04 PM CM01.contoso.com SMS_DISTRIBUTION_MANAGER 2306 The source directory "\\contoso.com\SoftwareUpdates\c34e2458-681f-4a8b-8941-a460c2de314a" for package "0020000D" does not exist. The operating system reported error 2: The system cannot find the file specified.     Solution: Make sure you have specified a valid package source directory on the Data Source tab in the Package Properties dialog box in the Configuration Manager Console. If you specify a local path, it must be a local path on the site server. If you specify a path on a network drive, verify that the path exists and that the path is typed correctly.

    I read the above message and said to myself, "Now, where'd it go?  Wait a sec... What is "it"?"

    The challenge is that SCCM was tracking that the update had been downloaded, but for some reason it wasn't in the location it was supposed to be.  When going into the console, setting the search criteria for "All Software Updates" to "Required >= 1" and "Installed >= 1" showed everything was downloaded according to SCCM.  Thus it took a little bit more digging for me to get this sorted out.  Since answers were sparse in regards to how to troubleshoot this outside of "review the logs", I figured I would share my solution.

    While I still don't know WHY SCCM thought the update had been downloaded and yet the files were missing from the package source, I could at least figure out how to find the updates in the console to re-download them.

    In the SQL query below, just paste the source directory GUID and the package ID from the above error message into the two variables at the top.

    DECLARE @MissingSourceDirectory NVARCHAR(512)
    DECLARE @PackageId NVARCHAR(8)
    SET @MissingSourceDirectory = 'c34e2458-681f-4a8b-8941-a460c2de314a'
    SET @PackageId = '0020000D'

    SELECT CASE
            WHEN ci.BulletinID LIKE '' OR ci.BulletinID IS NULL THEN 'Non Security Update'
            ELSE ci.BulletinID
            END As BulletinID
        , ci.ArticleID
        , loc.DisplayName
        , loc.Description
        , ci.IsExpired
        , ci.DatePosted
        , ci.DateRevised
        , ci.Severity
        , ci.RevisionNumber
        , ci.CI_ID
    FROM dbo.v_UpdateCIs AS ci
    LEFT OUTER JOIN dbo.v_LocalizedCIProperties_SiteLoc AS loc ON loc.CI_ID = ci.CI_ID
    WHERE ci.CI_ID IN
    (
        SELECT [FromCI_ID]
        FROM [dbo].[CI_ConfigurationItemRelations] cir
        INNER JOIN [dbo].[CI_RelationTypes] rt ON cir.RelationType = rt.RelationType
        WHERE cir.ToCI_ID IN
        (
            SELECT CI_ID
            FROM [dbo].[CI_ContentPackages] cp
            INNER JOIN [dbo].[CI_ConfigurationItemContents] cic ON cp.Content_ID = cic.Content_ID
            WHERE cp.ContentSubFolder = @MissingSourceDirectory AND cp.PkgID = @PackageId
        )
    )

    Qualification (Added 1/13/2013): It is important to be aware that the table CI_ConfigurationItemRelations can have multiple levels of relationships and there are different types of relationships. The above query worked well enough for me, so I didn't investigate further. I would suggest this reference for more details the CI_ConfigurationItemRelations table: Steve Rachui - ConfigMgr 2012–Application Model–Internals–Part I

    Note: As a tip, I had several items missing from my source. I noticed in this case they were all from this month (January 2013), so after the third item, I just went and changed the query in the console to show all "Downloaded" updates with "Date Released Or Revised" within the last month and downloaded them all as a batch.

    Update 1/13/2013:

    I realized I had a completely unnecessary piece of information in the original query. I had reused another query I had for which I needed the original date the update was released, not the date the most current revision was released. Below is the old query in case someone still wants to know the date the update was originally released.

    DECLARE @MissingSourceDirectory NVARCHAR(512)
    DECLARE @PackageId NVARCHAR(8)
    SET @MissingSourceDirectory = 'c34e2458-681f-4a8b-8941-a460c2de314a'
    SET @PackageId = '0020000D'

    SELECT CASE
            WHEN ci.BulletinID LIKE '' OR ci.BulletinID IS NULL THEN 'Non Security Update'
            ELSE ci.BulletinID
            END As BulletinID
        , ci.ArticleID
        , loc.DisplayName
        , loc.Description
        , ci.IsExpired
        , orig_postdate.DatePosted as origDatePosted
        , ci.DatePosted
        , ci.DateRevised
        , ci.Severity
        , ci.RevisionNumber
        , ci.CI_ID
    FROM dbo.v_UpdateCIs AS ci
    LEFT OUTER JOIN dbo.v_LocalizedCIProperties_SiteLoc AS loc ON loc.CI_ID = ci.CI_ID
    LEFT OUTER JOIN
    (
        SELECT BulletinId, MIN(articles.DatePosted) As DatePosted
        FROM
        (
            SELECT ArticleId, MIN(DatePosted) As DatePosted
            FROM [dbo].[v_UpdateCIs]
            GROUP BY ArticleId
        ) as articles
        INNER JOIN [dbo].[v_UpdateCIs] ci ON articles.ArticleID = ci.ArticleId
        GROUP BY BulletinId
    ) As orig_postdate
    ON ci.BulletinId = orig_postdate.BulletinId

    WHERE ci.CI_ID IN
    (
        SELECT [FromCI_ID]
        FROM [dbo].[CI_ConfigurationItemRelations] cir
        INNER JOIN [dbo].[CI_RelationTypes] rt ON cir.RelationType = rt.RelationType
        WHERE cir.ToCI_ID IN
        (
            SELECT CI_ID
            FROM [dbo].[CI_ContentPackages] cp
            INNER JOIN [dbo].[CI_ConfigurationItemContents] cic ON cp.Content_ID = cic.Content_ID
            WHERE cp.ContentSubFolder = @MissingSourceDirectory AND cp.PkgID = @PackageId
        )
    )


    MOMCertImport – Is it all it’s cracked up to be?


    Context

    Core Reference:  Authentication and Data Encryption for Windows Computers in Operations Manager 2007
    Link to the script:  http://gallery.technet.microsoft.com/MOMCertImport-c3e7093b

    Note:  While the below does work, it isn’t the Windows Server and System Center Product Groups’ official solution, so if you can’t get it to work or have problems, please use the steps in the above reference.  That said, at the end of the day all MOMCertImport seems to do is set a couple of registry values.

    Even though I tend to focus on platforms/OS related stuff, I’ve long since come to the realization that I can’t exclude tools such as System Center suite to help maintain and run a datacenter with high availability.  As such, I am constantly looking for how to leverage System Center to accomplish my platforms needs.  With things tending to be a little less hectic around the holidays I was able to spend some time solving some of those problems.

    Not surprisingly, the lab machines are frequently rebuilt.  In addition, I have become utterly dependent on System Center for making sure the lab is relatively stable on a day-to-day basis and configured according to best practices.  This includes the use of both System Center Configuration Manager and System Center Operations Manager to deploy patches, the monitoring agents, and some other basic utilities, as well as provide basic monitoring.  The challenge in this lab is that there are 2 forests that do not have a trust.  While deploying the System Center Configuration Manager client across the two un-trusted forests is a relatively easy exercise, getting the SCOM agents deployed to systems in the “remote” forest and talking to the Management Server has been a very painful exercise.

    As many are aware, MOMCertImport is the officially provided tool to do this and there is plenty of content out there on how to use it.  The beef was that this process seems to have some rough edges and was very cumbersome every time a lab machine was flattened and redeployed.  The following are the sources of frustration every time there was a rebuild:

    1. Both of the forests have Certificate Authorities and use auto-enrollment for all systems in the forest.  Why can’t it just use the certificate that the machine automatically picked up on domain join?
    2. Why is it necessary to create a custom template that had the exact same OIDs as the “Computer Template” that auto-enrollment used?
    3. Why is it necessary to create a certificate with the Private Key marked as exportable and ship it around to get it installed?  Not only a hassle, but also has potential security and manageability implications.
    4. Why is certificate enrollment such a cumbersome process?
    5. Why is it necessary to have two certificates on each machine, thus confusing every other application (and me when troubleshooting) on the systems that didn’t know how to handle two certificates with identical OIDs, subjects, etc.?
    6. In all fairness, a couple of the above challenges can be bypassed using the “/SubjectName” switch, however on a machine with multiple certificates with the same subject name MOMCertImport doesn’t always select the certificate that supports all the requirements for the SCOM agent.
      Note:  This wasn’t tested rigorously, but it seems to select the certificate with the SubjectName that was imported into the local machine store first.

    Discovery

    With those questions in mind it was time to start digging to understand what was going on when MOMCertImport is run.  This is what I found:

    First, in just trying to dig up the instructions (for the umpteenth time) as well as troubleshooting the automation for importing the code below, the following were useful references as I proceeded:

    Being cautious with information, I looked to validate what I found.  I started investigating where I always do when it comes to trying to figure out what is happening on a system, good ol’ Process Monitor.  After setting the filter on Process Monitor as follows, here is what MOMCertImport.exe did:

    image

    The results were as follows:

    image

    Note:  I looked to see if any files were changed (filter on Operation “CreateFile” and “WriteFile”) and there were not any changes outside of MOMCertImport re-importing the certificate into the store.

    What remained was that MOMCertImport created/changed two registry values:

    • HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Machine Settings
      • ChannelCertificateHash (REG_SZ)
      • ChannelCertificateSerialNumber (REG_BINARY)

    The above blog entries focused on the ChannelCertificateSerialNumber registry value.  However, since there was a minor divergence between what was found in those references and what Process Monitor uncovered, and I don’t know why MOMCertImport sets 2 values but will work with only one, I decided to just copy the behavior of MOMCertImport.

    Analysis

    Looking more closely at the registry values unveiled the following:

    • ChannelCertificateHash – This is the “Thumbprint” of the certificate.
    • ChannelCertificateSerialNumber – This is the “Serial Number” of the certificate.  As the references above illuminated, this is a REG_BINARY value and has the byte values stored in the reverse order they are stored in the certificate.
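
    As an illustration of just how little is involved, here is a minimal PowerShell sketch that sets those two values for a certificate already in the local machine store.  This mimics the observed behavior rather than the official tool, the thumbprint is a placeholder, and the validation logic from the full script on ScriptCenter is intentionally omitted.

    # Sketch only: pick a machine certificate by thumbprint and write the two values
    # MOMCertImport was observed to set.  Use the full MOMCertImport.ps1 for validation.
    $thumbprint = 'PASTE_VALIDATED_CERT_THUMBPRINT_HERE'   # placeholder
    $cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Thumbprint -eq $thumbprint }

    # The serial number bytes go into the registry in reverse order from the certificate.
    $serialBytes = for ($i = $cert.SerialNumber.Length - 2; $i -ge 0; $i -= 2) {
        [Convert]::ToByte($cert.SerialNumber.Substring($i, 2), 16)
    }

    $regPath = 'HKLM:\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Machine Settings'
    Set-ItemProperty -Path $regPath -Name ChannelCertificateHash -Value $cert.Thumbprint -Type String
    Set-ItemProperty -Path $regPath -Name ChannelCertificateSerialNumber -Value ([byte[]]$serialBytes) -Type Binary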

    This also solved all the problems I outlined above:

    1. The certificates the computer gained through auto-enrollment can be used for the SCOM Agent.
    2. No custom template needs to be created.
    3. There is no need to have the Private Key be exportable or manually copy it around the environment.
    4. This can be simplified by using pre-existing certificate enrollment automation.  Reference “Configure Certificate Autoenrollment” to enable it.
    5. This can all be done with one certificate.
    6. The Serial Number is what is stored in the registry, allowing for very precise selection of the certificate.

    Furthermore, I found that MOMCertImport does no validation to determine whether the certificate being imported meets the requirements of the SCOM agent.  The automation below heavily leverages the content in the Troubleshooting OpsMgr 2007 and OpsMgr 2012 certificate issues with PowerShell article to validate the certificate before attempting to import it.

    Note:  There are a plethora of articles out there on how to create/validate the certificates.  A large variance in the values for KeyUsage was found.  While I was able to get this to work with a KeyUsage value of 0xA0, as per the reference PowerShell troubleshooting script, the PG’s content on TechNet states 0xF0:  How to Obtain a Certificate Using Windows Server 2008 Enterprise CA in Operations Manager 2007.  I don’t know if this is a supported scenario, thus if using the PG reference, the “Computer” certificate template will need to be updated.  Also, I could use a smaller “KeyLength” than specified in the PG article, so it seems the agent doesn’t care about the KeyLength.

    Automation

    There were several things that were necessary to automate:

    • Validating existing configurations via SCCM:  is there a certificate configured, installed, and valid for the SCOM Agent?  This IS SCCM’s detection method for the Application that configures the certificate.
    • Select a valid certificate from the local store and configure the SCOM Agent to use it.
    • Validating one or more certificates.

    The first step towards doing this was to ensure that the above 3 tasks could be repeated on a per machine basis consistently.  To that end a PowerShell script seemed to be the best approach.  As can be seen from the comments on some of the previous blog posts, this blog platform doesn’t handle code and code formatting very well, plus the script is over 750 lines.  As such, the resulting script is uploaded on Technet’s ScriptCenter.  Go here to download it:  MOMCertImport.

    The second step in the automation is to consistently get all the appropriate lab machines to run the scripted tasks.  Enter SCCM.

    Creating the Application Deployment Type

    Configuring Programs Properties

    • Installation Program - powershell -ExecutionPolicy Unrestricted -File .\MOMCertImport.ps1 –InstallBestFitFromLocalStore
    • Uninstall Program - powershell -ExecutionPolicy Unrestricted -File .\MOMCertImport.ps1 –Remove

    Programs

    Configuring Detection Method

    • Set the Script Type to PowerShell
    • Paste in the text of script above

    Detection Method

    Configuring Dependencies

    • Obviously, configuring the certificate for the SCOM agent does no good if the agent isn’t installed.  Configure the deployment of the SCOM agent as a dependency of this Deployment Type.

    Dependencies

    That’s it!  Now just deploy the Application.

    Note:  Initially I tried signing the script.  However it appears that when importing the script into the Detection Method SCCM does some reformatting.  As can be seen from the screenshot the script length is 658 lines whereas the script uploaded to TechNet ScriptCenter is 757 lines.  This reformatting breaks the signature.

    client policy 

    Updates:

    • 1/18/2014 – Fixed a bug in the Enhanced Key Usage Extension check.  It was passing everything if $EnableDiagnostics switch was used.
    • 1/18/2014 – Added the ability to request a certificate from Microsoft Enterprise and Standalone CAs (untested against non-Microsoft CAs, but give it a try and let me know how it goes).
    • 1/18/2014 - Now the file is so large it won’t cut and paste into the SCCM script validation.  However, if the “Open” option is selected and the script is loaded from a file it works fine.  This can also be solved by deleting the help content, the constants at the start, and everything in the #region Certificate/Request Install, as well as the labels in the “Main” switch statement for CreateCertificateRequest and GetCertificateResponse.  Yes, there is then some broken code in Main that references the certificate request functions, but it works.
    • 1/18/2014 – This script WILL work against the SCOM server, but since the SCOM agent shouldn’t/can’t be installed on the SCOM server, the dependencies will prevent the script from running and will set the deployment state to “Not Applicable”.

    Simple way to temporarily bypass PowerShell execution policy


    One of the PowerShell challenges I am constantly confronted with is that running scripts on systems is blocked due to the security policy.  This is particularly cumbersome while writing or debugging new scripts.  Normally it is prudent to avoid lowering the overall security of the system by using the Set-ExecutionPolicy cmdlet, and I often forget to return the system to the default state when done.  There is a simple way to solve this problem.

    From the run dialog (or command prompt) just execute “powershell –ExecutionPolicy Bypass” and it will start a PowerShell session that allows for running scripts and keeps the lowered permissions isolated to just the current running process.
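
    For example, the same switch also works when launching a single script from the command prompt (MyScript.ps1 is a placeholder), so nothing persistent changes on the machine:

    powershell -ExecutionPolicy Bypass
    powershell -ExecutionPolicy Bypass -File .\MyScript.ps1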

    WSUS Installation Script


    For a number of reasons I have to stand up WSUS servers relatively frequently, including that I keep breaking my Web Server.  Since I couldn’t find anything handy that did a good job of automating the install and configuration of WSUS, I created a script and posted it over on the Script Gallery for anyone who wants something to work off of.

    WSUS Install and Configuration Script

    Managing Windows Server Role Settings via System Center Configuration Manager


    Context

    Even though I exclusively work on the infrastructure side, I operate under the theory that if I do something more than once, I need to automate it.  As can be seen from the blog, I am spending more and more time exploring how to use System Center to minimize the amount of repetitive work I have to do.  Or at least do the same task in new ways to relieve the tedium.

    Spending a lot of time in a lab, one of the challenges I have that scales to real world scenarios is that I constantly have to redeploy servers.  Since I have a wide variety of server configurations that I have to redeploy, I am continually looking for ways to reduce the amount of time I spend re-building.  For scenarios where I couldn’t find scripts that someone else was nice enough to share, my previous posts include links to some scripts I’ve built to help with these tasks.

    Realizing that managing the library of scripts, trying to remember what I named the scripts in the past, and manually kicking off the scripts wasn’t the most efficient use of my time, I needed to expand my knowledge of the tools available for managing infrastructure.

    My post “MOMCertImport – Is it all it’s cracked up to be?”, which included steps for configuring the servers via Application Management, wasn’t quite as clean as I desired.  Specifically, I had to manage the script in multiple locations, which meant updating multiple locations each and every time I updated the script.

    Discovery

    Goals:

    • Validation that the system was properly configured prior to beginning any work efforts.
    • Minimize the amount of time invested so that I could perform other tasks while the system was rebuilt.
    • Increase the modularity of the script in order to minimize time and effort of future updates.
      Note:  I’ve seen some massive post build configuration scripts out there that are a nightmare to try and figure out what they do.  (my MOMCertImport script definitely falls into this category)
    • Total build time is not a priority!  Total build involvement IS!

    Research:

    • PowerShell now has some really cool functionality for configuration management.  Unfortunately, this doesn’t come with a “platform” to deploy AND report on the configuration to the build servers.
    • SCOM – I could have created a rule/monitor and response combination.  When it comes to managing groups of systems to scope rules, it is a little harder than SCCM, primarily because it is all bundled in MPs and the ability to delegate group membership management has limited flexibility.
    • SCCM Application Management – been there done that, see above.
    • SCCM Compliance Management – simple, flexible interface that allows granular modularity of both compliance settings AND scope management.

    Automation

    There are 4 steps to doing this:

    1. Create the script to detect and remediate the missing components.
    2. Create the Configuration Item
    3. Add the Configuration Item to a Configuration Baseline
    4. Deploy the Configuration Baseline

    1. The script

    The script used is posted over on the TechNet Gallery:  Configure Exchange 2013 Server Role and Feature Dependencies.

    As a basic explanation, there are 3 main sections (a minimal sketch of the pattern follows the list):

    • The list of Roles/Features required – This is the only portion that needs to be edited.  Just edit the list of features required here.
      Note:  If using the PowerShell Desired State Configuration use this reference:  Windows PowerShell Desired State Configuration Role Resource
    • A comparison of the required list and how the machine is configured – shouldn’t need to edit
    • Export of missing Roles/Features – shouldn’t need to edit
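
    The following is a minimal sketch of that detect/remediate pattern, not the full Gallery script.  The feature names are illustrative only, and the remediate parameter mirrors the settings called out in the Configuration Item steps below.

    # Sketch of the detect/remediate pattern (assumes Windows Server with the ServerManager module).
    param ([bool]$Remediate = $false)

    # Section 1 - the only part that should need editing: the required Roles/Features.
    $requiredFeatures = @('Web-Server', 'NET-Framework-45-Features', 'RSAT-ADDS')

    Import-Module ServerManager

    # Section 2 - compare the required list against what is installed on this machine.
    $missing = Get-WindowsFeature -Name $requiredFeatures | Where-Object { -not $_.Installed }

    if ($Remediate -and $missing) {
        Install-WindowsFeature -Name ($missing | Select-Object -ExpandProperty Name) | Out-Null
        $missing = Get-WindowsFeature -Name $requiredFeatures | Where-Object { -not $_.Installed }
    }

    # Section 3 - report what is still missing; blank output means compliant, which matches
    # the "Equals (blank)" compliance rule configured below.
    ($missing | Select-Object -ExpandProperty Name) -join ','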

    2.  Create the Configuration Item

    • Setting type = “Script”
    • Data type = “String”

    image

    Discovery Script

    Make sure the remediate parameter is set to “False”

    image

    Remediation Script

    Make sure the remediate parameter is set to “True”

    image

    Set the compliance rules

    • Set “Rule type” to “value”.  Existential rules don’t allow for remediation.
    • Set the operator to “Equals” and the value to (blank).  The current script design returns a null value if everything is installed.  This is my design, feel free to change it so the script outputs OK, 0, or something like that when the server is compliant.
    • To automatically remediate, check the box “Run the specified remediation script when this setting is noncompliant.”  If it isn’t apparent, checking this box runs the script configured immediately preceding this.

    image

    3. Add it to the Configuration Baseline

    Note the revision column.  It’s kind of handy for managing the Configuration Items.  There can be one “Production” baseline that deploys a SPECIFIC version of the script and a “Development” baseline that deploys the latest/testing version.  That way trying to manage multiple, identically purposed Configuration Items can be avoided.
    Why is this handy?  Have you ever gone into your script directory and tried to figure out which version of script.ps1, script.ps1.old, script-quicktest.ps1, etc. was the most recent tested version of what you were working with?  Now you have version control, AND just by fiddling with the Revision you have the ability to roll back.

    image

    4.  Deploy the Configuration Baseline

    • Observe the setting “Remediate noncompliant rules when supported”.  To actually remediate, remediation needs to be enabled under the “Compliance Settings” AND the deployment configuration.  Meaning, only one Configuration Item needs to be created and actual remediation can be controlled.
    • Remediation respects maintenance windows.
      Note:  Since all my servers “in production” should already be configured properly, I want this configuration baseline to remediate the servers as soon as possible after (re)installation.
    • Schedule – there is a global client setting that controls how often Compliance Management is run, that can be overridden on a per deployment basis.

    image

    Other flexibility:

    In a broader sense, if there is a scenario where the detection/remediation has to occur after an application is installed, there is an option to define that the application must be installed BEFORE the Configuration Item will start complaining.

    In terms of Exchange, think of scenarios where you might want to have a member of the DAG rejoin the DAG after a reinstall.

    image

    One-liner PowerShell to set IP Address, DNS Servers, and Default Gateway


    A fun part about configuring servers is that many servers still have static IP addresses.  The challenge is losing connectivity to the server when the configuration change is made.  Even with remote console access, this is an irksome task as it requires clicking through so many screens it’s just grrr…  (never mind that many of the remote console experiences are somewhat, shall we say, not on par with an RDP session).

    Enter the PowerShell one liner:

    &{$adapter = Get-NetAdapter -Name Ethernet;New-NetIPAddress -InterfaceAlias $adapter.Name -AddressFamily IPv4 -IPAddress 192.168.1.55 -PrefixLength 24 -DefaultGateway 192.168.1.1; Set-DnsClientServerAddress -InterfaceAlias $adapter.Name -ServerAddresses ("192.168.1.2","192.168.1.3")}

    Using this at least gets the server configured and its dynamic DNS registration updated (if you use it), so all you have to do is run ipconfig /flushdns on your client and your remote PowerShell session should reconnect.

    Key things to know:

    • Requires PowerShell 3.0 – there are other examples out there for how to use PowerShell to invoke WMI to manage this for systems not yet on PowerShell 3.0.
    • “;” indicates separate commands.  This is not piping data from one command to the other here, it is running 3 separate commands using the variable defined in the first.
    • This isn't a script so there aren't any ExecutionPolicy considerations.
    • Specify the adapter name, in this case “Ethernet” as appropriate for your system.
    • Other commands to manage the TCP/IP settings are here:  Net TCP/IP Cmdlets in Windows PowerShell
