If you wish to use your own list of random words, just change the JSON location to a local directory containing a JSON file, or to a different web URL that returns the JSON you want to randomize.
When you are tasked with administering Windows machines and servers, more often than not you need the Remote Server Administration Tools for the version of the operating system you are supporting. What do you do when you can't get those tools installed, for administrative or other reasons? The best option is to look for a way to do the same work in PowerShell. This article describes how to find user and group information via the DLLs that are available on Windows.
All users, groups, and objects in Active Directory have a unique security identifier (SID). To locate and translate SIDs we can use the .NET classes System.Security.Principal.WindowsIdentity and, later in this post, System.DirectoryServices.AccountManagement. The currently logged-in user's SID can be retrieved using:
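The call itself isn't shown above; retrieving the current user's SID can be done with the WindowsIdentity class (Windows only):

```powershell
# The currently logged-in user's SID (Windows only):
[System.Security.Principal.WindowsIdentity]::GetCurrent().User.Value
```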
While the SID information has been redacted, it is intact in terms of what would be displayed when calling the function. Since it's groups we are looking for, it turns out this class also has a Groups property.
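The Groups property the paragraph refers to can be read straight off the same object:

```powershell
# Returns the SIDs of every group in the current user's token (Windows only):
[System.Security.Principal.WindowsIdentity]::GetCurrent().Groups
```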
The return value I received was much larger than this on a corporate network, especially since the current user is in a number of groups.
Now that we have the group SIDs, on to the process of converting them into a human-readable form. For the accounts discovered previously, if we choose the first item ([0]) we can see there is a .Translate() method on it.
In order to do the translation we'll need to specify the type that the .NET class expects: System.Security.Principal.NTAccount. This is the only class in the documentation with the expected type.
([System.Security.Principal.WindowsIdentity]::getcurrent().groups[0]).translate([system.security.principal.ntaccount])
Value
-----
Everyone
The groups are known; now to put this all together in a foreach loop to find all the groups the currently logged-in user is a member of:
([System.Security.Principal.WindowsIdentity]::getcurrent().groups) | Foreach{( `
[System.Security.Principal.SecurityIdentifier]$_.value).Translate([system.security.principal.ntaccount])}
Value
-----
Everyone
.....(more groups Redacted)
With a few more updates this script can be modified to look up a given user in a domain scenario, or for local users:
Add-Type -AssemblyName System.DirectoryServices.AccountManagement
$userprincipal = ([System.DirectoryServices.AccountManagement.UserPrincipal]) -as [type]
$up = $userprincipal::FindByIdentity([System.DirectoryServices.AccountManagement.ContextType]::Machine,[System.DirectoryServices.AccountManagement.IdentityType]::SamAccountName,"somemachine\defaultAccount")
$up
GivenName :
MiddleName :
Surname :
EmailAddress :
VoiceTelephoneNumber :
EmployeeId :
AdvancedSearchFilter : System.DirectoryServices.AccountManagement.AdvancedFilters
Enabled : False
AccountLockoutTime :
.......
.......
ContextType : Machine
Description : A user account managed by the system.
DisplayName :
SamAccountName : DefaultAccount
UserPrincipalName :
Sid : S-1-5-xx-xxxxxxxx-xxxxxxxxxxx-xxxxxxxxxxxxxxxx-503
Guid :
....
Name : DefaultAccount
$up.GetGroups()
IsSecurityGroup : True
GroupScope : Local
Members : {DefaultAccount}
Context : System.DirectoryServices.AccountManagement.PrincipalContext
ContextType : Machine
Description : Members of this group are managed by the system.
DisplayName :
SamAccountName : System Managed Accounts Group
UserPrincipalName :
Sid : S-1-5-xx-xxx
Guid :
DistinguishedName :
StructuralObjectClass :
Name : System Managed Accounts Group
$up.GetGroups().SamAccountName
System Managed Accounts Group
A question was presented on StackOverflow about how to pull an image from the clipboard. This article describes how this was done with two separate functions.
The first function, Export-ClipItem, detects what type of item is on the clipboard. The built-in cmdlets in PowerShell 5.1 handle this nicely; this script, however, is written assuming that an older version of PowerShell may be required.
The first thing that needs to be done is to get at the Windows Forms clipboard object:
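A sketch of getting at the clipboard without the 5.1 cmdlets, using the static System.Windows.Forms.Clipboard class (Windows only; the $clipboard variable name follows the later snippet in this post):

```powershell
Add-Type -AssemblyName System.Windows.Forms

# Clipboard is a static class, so keep a reference to the type itself;
# its members are then reached with the :: operator.
$clipboard = [System.Windows.Forms.Clipboard]
$clipboard::ContainsText()   # for example
```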
Based upon inspection, there are several item types that can be tested for with the Contains*() methods, and the items can then be retrieved from the clipboard with the corresponding Get*() methods.
Starting with Text it can be tested with ContainsText(). Retrieval of the Text can then be done with GetText()
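A minimal sketch of that test-and-retrieve pattern (the output file name is an assumption for this sketch; Windows only):

```powershell
Add-Type -AssemblyName System.Windows.Forms

if ([System.Windows.Forms.Clipboard]::ContainsText())
{
    # Retrieve the clipboard text and write it to a file.
    [System.Windows.Forms.Clipboard]::GetText() | Out-File "$home\clip.txt"
}
```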
Since the image retrieved from the clipboard is already a System.Drawing.Image type, that library provides a Save() method. It requires the path to save the image to and a format to save the image as.
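A sketch of saving a clipboard image; the PNG target path and format are assumptions (Windows only):

```powershell
Add-Type -AssemblyName System.Windows.Forms
Add-Type -AssemblyName System.Drawing

if ([System.Windows.Forms.Clipboard]::ContainsImage())
{
    # GetImage() already returns a System.Drawing.Image, so Save() can
    # write it straight to disk in the requested format.
    $image = [System.Windows.Forms.Clipboard]::GetImage()
    $image.Save("$home\clip.png", [System.Drawing.Imaging.ImageFormat]::Png)
}
```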
There are two other types of data that can be retrieved from the clipboard.
ContainsFileDropList()
A file drop list is a collection of strings containing path information for files. The return from GetFileDropList is a StringCollection. For this blog post it was chosen to just save the contents of the return as a txt file.
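A sketch of that choice (the .txt path is an assumption; Windows only):

```powershell
Add-Type -AssemblyName System.Windows.Forms

if ([System.Windows.Forms.Clipboard]::ContainsFileDropList())
{
    # GetFileDropList() returns a StringCollection of file paths;
    # enumerate it and save the paths as plain text.
    [System.Windows.Forms.Clipboard]::GetFileDropList() | Out-File "$home\clipFileList.txt"
}
```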
The last type that can be retrieved from the clipboard is audio. Performing the export of the audio will be presented in the next blog post on this topic.
if($clipboard::ContainsAudio())
{
    $clipboard::GetAudioStream()
    #perform stream-to-file function here
}
Now that we have the different types of output (text, images, the file drop list, and at a later date audio), the file-name function can be explored. For this script it was decided to write out a single file for each clipboard operation. The next section demonstrates how this was done.
After having been in the industry for a while, I must admit that I had never tried to create a folder and a file with the same name in the same directory. It turns out there is a rule that the same name can't exist more than once in a folder or directory; this applies to both Windows and Linux systems.
I found this out because in my code I didn't specify the ItemType on the New-Item PowerShell cmdlet.
new-item $home/.config/j
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 1/4/2019 1:07 PM 0 j
new-item $home/.config/j -itemtype container
new-item : An item with the specified name C:\Users\tschumacher\.config\j already exists.
At line:1 char:1
+ new-item $home/.config/j -itemtype container
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ResourceExists: (C:\Users\tschumacher\.config\j:String) [New-Item], IOException
+ FullyQualifiedErrorId : DirectoryExist,Microsoft.PowerShell.Commands.NewItemCommand
To make sure I don't commit this "cardinal sin" again, I wrote a small if statement to remedy the issue. Since I want the item I'm passing to "j" to be a folder, I check whether it is a file; if it is, I delete it (forcibly) and then recreate it as a folder.
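The if statement itself isn't shown in the post; a minimal sketch of that check, using the same $home/.config/j path, might look like:

```powershell
$target = "$home/.config/j"

# If the path exists but is a file rather than a folder, remove it forcibly.
if ((Test-Path $target) -and -not (Test-Path $target -PathType Container))
{
    Remove-Item $target -Force
}

# (Re)create the path as a directory if it doesn't exist.
if (-not (Test-Path $target))
{
    New-Item $target -ItemType Directory | Out-Null
}
```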
If you've ever dealt with SCCM, you'll know that to get a client to forcibly download patches or software from SCCM you need to call WMI to trigger a schedule.
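The raw WMI call isn't shown in the post; a sketch, using the commonly documented schedule ID for the machine policy assignments request (verify the GUID against your own SCCM documentation before relying on it):

```powershell
# Trigger the "Request Machine Policy Assignments" schedule on a client.
# $computerName is assumed to hold the target client's name.
Invoke-WmiMethod -ComputerName $computerName -Namespace 'root\ccm' `
    -Class SMS_Client -Name TriggerSchedule `
    -ArgumentList '{00000000-0000-0000-0000-000000000021}'
```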
Once the Request Machine Policy Assignments schedule is triggered, another function is called. This is where a search through the client logs determines the success of the trigger invocation.
The value that determines success is "Evaluation not required. No changes detected.", which can be found in PolicyEvaluator.log.
In order to find this value we first need to determine whether the computer name that was passed refers to the local machine or a remote machine. This is done with the function Test-CCMLocalMachine, which is used in both the invoke and the test to determine whether execution is on the local machine or a remote one. To make sure the log search starts from a known point, a $TimeReference is used: if it is passed as a parameter, that value is used to search through the log; if not, the current time from the remote or local machine is used.
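Test-CCMLocalMachine itself isn't shown in the post; a minimal sketch might simply compare the passed name against the local machine's names:

```powershell
# Hypothetical sketch of Test-CCMLocalMachine: returns $true when the
# supplied name refers to the machine the script is running on.
function Test-CCMLocalMachine
{
    param([Parameter(Mandatory=$true)][string]$ComputerName)

    # Compare against the common ways the local machine can be named.
    $ComputerName -in @($env:COMPUTERNAME, 'localhost', '.',
                        "$env:COMPUTERNAME.$env:USERDNSDOMAIN")
}
```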
function Test-CMRequestMachinePolicyAssignments
{
param([Parameter(Mandatory=$true)]$computername,
      [string]$Path = 'c:\windows\ccm\logs',
      [datetime]$TimeReference,
      [pscredential]$credential)
if ($TimeReference -eq $null)
{
if(Test-CCMLocalMachine $computername)
{
$TimeReference =(get-date)
}
else
{
[datetime]$TimeReference = Invoke-Command -ComputerName $computername -ScriptBlock {Get-Date}
}
}
$RequestMachinePolicyAssignments = $false
# can see when this is requested from the Policy agentlog:
Push-Location
Set-Location c:
if(Test-CCMLocalMachine $computername)
{
$p = Get-CMLog -path "$path\policyevaluator.log"
$runResults = $P |Where-Object{$_.Localtime -gt $TimeReference} `
| where-object {$_.message -like `
"*Evaluation not required. No changes detected.*"}
}
else
{
$p = Get-CCMLog -ComputerName $computerName -path $Path -log policyevaluator -credential $credential
$runResults = $P.policyevaluatorLog |Where-Object{$_.Localtime -gt $TimeReference} | where-object {$_.message -like "*Evaluation not required. No changes detected.*"}
}
Pop-Location
# if a matching log entry was found, the trigger succeeded
if($runResults)
{
$RequestMachinePolicyAssignments = $true
}
$RequestMachinePolicyAssignments
}
Finding this value in the PolicyEvaluator.log can take up to 45 minutes or more depending on the setup of your SCCM environment.
To allow for finding the value described above, plus two other triggers, the following script demonstrates the usage:
If you follow Twitter like I do, especially when it comes to PowerShell, you will already have noticed there is a new book out in the community: the PowerShell Conference Book. Some very sharp PowerShell MVPs, experts, and enthusiasts have each contributed a chapter about what they've spoken about, or will speak about, at conferences.
I am very honored that I was also chosen to write a chapter in this book. My chapter is about the work I've done with SCCM and dynamic parameters. I explore how to parse an SCCM log file and then use it together with a dynamic parameter. Using this dynamic parameter, you can locate a log entry with just a few keystrokes.
A huge thank-you for the opportunity, and to the organizers of this endeavor (Mike F Robbins, Jeff Hicks, Michael Lombardi). Not only did they author their own chapters, they also proofread the others and made sure every author was providing good content.
I would encourage everyone to go get a copy. There is a wealth of material in this book; while it is intended for experienced users, I believe everyone can benefit from its content.
The proceeds from this book are going to a worthy cause:
Your contribution will help someone learn a new skill and move forward in their career. I'm honored to have contributed a chapter to this awesome book.
In the first blog post, Parsing CCM\Logs, I showed how I took a script from the community and made a few tweaks to let it parse logs for CCM. In this post I'll show the next step: using that parsing logic in a script that incorporates a dynamic parameter.
To begin with, I didn't want the user to have to go to the machine, find every log in CCM\logs, and type in the log name. For instance, my c:\windows\ccm\logs directory contained 178 files with the .log extension, and creating a ValidateSet for that many logs by hand is problematic. So I chose the dynamic-parameter approach for this function. Now on to the script.
The first portion of the script is standard parameter values.
The next portion of this script is where the "magic" happens: the -Log parameter is created dynamically from the first two parameters ($computerName and $path).
I'll do my best to explain how this code works. The first step is the DynamicParam statement, which tells PowerShell that we are going to create a dynamic parameter. Simply stated, a dynamic parameter is a parameter that is added at runtime, only when needed.
The first thing done in this dynamic parameter is to create a RuntimeDefinedParameterDictionary. To add a runtime parameter, we have to define the parameter and its attributes, then add it to the collection returned to the runtime so it gets added properly.
For the purposes of this script we are only going to make the parameter mandatory, set its position in the pipeline, and create a help message. Other items can be defined if required; we can see them by getting the members of $logAttribute:
Since we are going to need this set of attributes in our parameter we need to add this to the Attribute collection ($logCollection) that will in turn be added to the runtime Parameter $logParam.
Next we'll create our ValidateSet item from the list of logs on the remote machine, gathered with the FilePath variable, and add it to a new object that will contain our ValidateSet attributes; then we add that to our $logCollection.
Finally, we'll add our parameter name and $logCollection to a RuntimeDefinedParameter, put that in our RuntimeDefinedParameterDictionary, and hand it all back to PowerShell.
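Putting the pieces from the last few paragraphs together, a condensed sketch of the DynamicParam block might look like this (the -Log parameter name and the admin-share path translation are assumptions based on the article, not the author's exact code):

```powershell
function Get-CcmLog
{
    param(
        [Parameter(Mandatory=$true,Position=0)][string]$ComputerName,
        [Parameter(Mandatory=$true,Position=1)][string]$Path
    )
    DynamicParam
    {
        # Dictionary that will hold every runtime-defined parameter we return.
        $paramDictionary = New-Object System.Management.Automation.RuntimeDefinedParameterDictionary

        # Standard attributes for the -Log parameter.
        $logAttribute = New-Object System.Management.Automation.ParameterAttribute
        $logAttribute.Mandatory = $true
        $logAttribute.Position = 2
        $logAttribute.HelpMessage = 'Pick a log to parse'

        # Collection that gathers all attributes for the parameter.
        $logCollection = New-Object System.Collections.ObjectModel.Collection[System.Attribute]
        $logCollection.Add($logAttribute)

        # Build the ValidateSet from the .log files on the target machine
        # (assumes the path is reachable via the admin share).
        $filePath = "\\$ComputerName\$($Path -replace ':','$')"
        $logNames = (Get-ChildItem -Path $filePath -Filter '*.log').BaseName
        $logCollection.Add((New-Object System.Management.Automation.ValidateSetAttribute($logNames)))

        # Tie the name, type, and attribute collection together and hand it back.
        $logParam = New-Object System.Management.Automation.RuntimeDefinedParameter('Log',[string],$logCollection)
        $paramDictionary.Add('Log',$logParam)
        return $paramDictionary
    }
    process
    {
        # Parse the chosen log here (see the log parser from the earlier post).
        $PSBoundParameters['Log']
    }
}
```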
Now that we have the full explanation of the dynamic parameter, we can stitch our previous log parser together with this function to give us back any one of the logs on the remote machine. We'll put this in the process block of our function:
Now when we call Get-CcmLog we’ll get a return with a parsed log that has Log appended in the object name.
Full code is posted in a gist here:
If you've ever worked with Configuration Manager, you'll understand that there are quite a few logs on the client side, and opening and searching through them for actions that have taken place can be quite a task. I needed to find when an item was logged during the initial startup/build of a VM, so I sought out tools to parse these logs and find the status of the Configuration Manager client. This post is about the tools and scripts I found, and what I added to them to make it easier to discover and parse all the log files.
I started with the need to be able to just parse the log files. I discovered that Rich Prescott in the community had done the work of parsing these log files with this script:
With that script in hand, I made two changes. The first change allows all the files in the directory to be added to the return object.
The second change allows the user to specify a tail amount, so that just the end of the log is retrieved instead of the entire file. That script can be found in one of my gists at the tail end of this article.
If you've ever worked with Oracle, you are familiar with Oracle's TNSNAMES file, which describes how to get to a database. ODP.NET doesn't provide a means to parse the TNSNAMES.ora file and then use it. From everything I've read, you must copy from the Description() and put Data Source = Description(); then you can use that to connect to your Oracle database server. With that in mind, I set out to write some scripting to help with this problem.
The first thing I did was follow a great article by the Scripting Guys about how to use ODP.NET. After reading it, I found a great module on the Gallery (SimplySQL) that implements much of what is spoken about there, and I'll be using that module in this posting.
Now that I know where my TNSNAMES.ora file is located, I'll bring it into my session with:
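A sketch of that read; the file location is an assumption, so point -Path at your own tnsnames.ora:

```powershell
# Read the whole file as one string rather than an array of lines.
$tnsnames = Get-Content -Path "$home\tnsnames.ora" -Raw
```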
I brought the file in with -Raw so that I knew I would have a single string. Now, with some regex, I can get this file into the shape I want. First, look for the common string in my TNSNAMES.ora file: somename = (Description = .
Now that I have the connections split into an array, I can select the one I want using Where-Object -like "myconnectionName". Then, with the handy Open-OracleConnection cmdlet from this module (SimplySQL), all I have to do next is pass in my username and password, and that should open my Oracle connection.
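A sketch of the split-and-select step, using an inline sample in place of a real tnsnames.ora. The alias names and the regex are assumptions about a typical file layout, and the SimplySQL connection call itself is left out:

```powershell
# Stand-in for the raw tnsnames.ora content read with Get-Content -Raw.
$tnsnames = @'
ORCL1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = host1)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl1))
  )
ORCL2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = host2)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl2))
  )
'@

# Split on alias headers of the form "NAME =" at the start of a line.
$entries = $tnsnames -split '(?m)^(?=\w+\s*=)' | Where-Object { $_ -match '\S' }

# Pick the connection we want by its alias.
$myEntry = $entries | Where-Object { $_ -like 'ORCL2*' }

# Everything after the first "=" is the Description() block ODP.NET wants
# as its Data Source value.
$dataSource = ($myEntry -split '=',2)[1].Trim()
```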
I wanted to find out which device each MAC address on my router belonged to, so I decided to see what information was available for a given address. What I found is an API you can query to get information about the company that owns a MAC address.
Now to see how we query and get that information from the API:
We only need to query the API, pass it a MAC address, and optionally append the desired format (json or xml) to the URL:
invoke-restmethod -uri http://macvendors.co/api/58:EF:68:00:00:00/json | select result
result
------
@{company=Belkin International Inc.; mac_prefix=58:EF:68; address=12045 East Waterfront Drive,Playa Vista 90094,U...
Without a json Tag
(invoke-restmethod -uri http://macvendors.co/api/7C:01:91:00:00:00).result
company : Apple, Inc.
mac_prefix : 7C:01:91
address : 1 Infinite Loop,Cupertino CA 95014,US
start_hex : 7C0191000000
end_hex : 7C0191FFFFFF
country : US
type : MA-L
Telling the API to return XML
(invoke-restmethod -uri http://macvendors.co/api/58:EF:68:00:00:00/Xml).result
company : Belkin International Inc.
mac_prefix : 58:EF:68
address : 12045 East Waterfront Drive,Playa Vista 90094,US
start_hex : 58EF68000000
end_hex : 58EF68FFFFFF
country : US
type : MA-L
As you can see, the results come back as JSON or already in the form of an object, so getting this with PowerShell is pretty straightforward.