DacPac to Folders with PowerShell

A question was posed on Stack Overflow: how do you create a folder structure from a DacPac or BacPac that has already been created? This article describes how I went about doing this with PowerShell.

My first step was to try to find someone who had attempted this before. The best article I could find was Deploy DACPACs with PowerShell. That script does an excellent job of showing how you can use the SQL 2014 DLLs to create a script that can then be deployed to your database, but it did not answer the question.

Thankfully one of the participants in the question was kind enough to show how to do this very thing in C#, so I took their pseudo code and turned it into PowerShell code.


using (TSqlModel modelFromDacpac = new TSqlModel(dacpacPath))
{
    IEnumerable<TSqlObject> allObjects = modelFromDacpac.GetObjects(QueryScopes);
    foreach (TSqlObject tsqlObject in allObjects)
    {
        string script;
        if (tsqlObject.TryGetScript(out script))
        {
            // Some objects such as the DatabaseOptions can't be scripted out.

            // Write to disk by object type
            string objectTypeName = tsqlObject.ObjectType.Name;
            // pseudo-code as I didn't bother writing.
            // basically just create the folder and write a file
            this.MkdirIfNotExists(objectTypeName);
            this.WriteToFile(objectTypeName, tsqlObject.Name + ".sql", script);
        }
    }
}

 

Starting at the top of the script, I needed to translate the using statement into object creation in PowerShell. To do that I had to find which .NET assembly the TSqlModel class lives in. Based on that research I found that I needed to add the type to my session (Microsoft.SqlServer.Dac.Extensions.dll). Once the type was added I was able to get the model from my DacPac.


add-type -path 'C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\Microsoft.SqlServer.Dac.Extensions.dll'

$model =[Microsoft.SqlServer.Dac.Model.TSqlModel]::new(((get-item ".\$dacpac").fullname))

 

Now that I have the model of my DacPac, I need to figure out how to translate this line of C# into PowerShell: IEnumerable<TSqlObject> allObjects = model.GetObjects(QueryScopes);

I know my return type is an IEnumerable of TSqlObject. Now the question is how do I query my model and get that return object. Based on the C# code I need to call GetObjects. GetObjects expects you to pass a query scope (the QueryScopes variable) and, optionally, an object identifier or object type. DacQueryScopes is an enumeration with the following values: All, Builtin, Default, None, SameDatabase, System, UserDefined. I chose All so I could see everything this method would return.
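If you want to see these scope values for yourself, a quick sketch (assuming the Dac.Extensions assembly added with the Add-Type call above is already loaded in your session) is to ask the enum type directly:

[code language="powershell"]
# List the members of the DacQueryScopes enumeration.
[enum]::GetNames([Microsoft.SqlServer.Dac.Model.DacQueryScopes])
[/code]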

 

$returnObjects = $model.GetObjects([Microsoft.SqlServer.Dac.Model.DacQueryScopes]::All)

The next step is to loop through the returned results and test each item to see if it can be scripted. TryGetScript uses an output parameter, so we must declare a variable before we call the method.


$s = ''
foreach($r in $returnObjects)
{
    if ($r.TryGetScript([ref]$s))
    {
        $objectTypeName = $r.ObjectType.Name
        $d = "c:\temp\db\$objectTypeName"
        if(!(test-path $d))
        {
            new-item $d -ItemType Directory
        }
        $filename = "$d\$($r.Name.Parts).sql"

        if(!(test-path $filename))
        {
            new-item $filename -ItemType File
        }
        $s | out-file $filename -Force
        write-output $filename
    }
}

 

I found when I ran this that it would error when it tried to write some of the files.

dactest.ps1 (28, 10): ERROR: At Line: 28 char: 10
ERROR: + $s | out-file $filename -Force
ERROR: + ~~~~~~~~~~~~~~~~~~~~~~~~~~
ERROR: + CategoryInfo : OpenError: (:) [Out-File], NotSupportedException
ERROR: + FullyQualifiedErrorId : FileOpenFailure,Microsoft.PowerShell.Commands.OutFileCommand
ERROR:

In order to find the cause of this exception I decided to put a try/catch around the creation of the file:

Try
 {
 new-item $filename -ItemType File
 }
 Catch
 {
 "Filename error $filename"
 }

 

 

What I found after putting the try/catch in place was that the exception occurred because some of the object names were URLs:

Filename error c:\temp\db\Service\http://schemas.microsoft.com/SQL/Notifications/EventNotificationService.sql

To fix this I implemented a test to see if the item name was a well-formed URL:

[system.uri]::IsWellFormedUriString('http://schemas.microsoft.com/SQL/Notifications/EventNotificationService.sql', [system.urikind]::Absolute)

Now that I know the name is a URI, I can parse it and use the last segment of the URI as the filename:


$url = "$($r.Name.Parts)"
 if ([system.uri]::IsWellFormedUriString($url, [system.urikind]::Absolute))
 {
 $u = ([uri]"$url").Segments[-1]
 $filename = "$d\$u.sql"
 new-item $filename -ItemType File -ErrorAction Stop -Force
 }

Example output

Directory: C:\temp\db

Mode   LastWriteTime        Length Name
----   -------------        ------ ----
d----- 3/5/2018 7:10 PM            Assembly
d----- 3/6/2018 9:04 AM            Contract
d----- 3/5/2018 7:18 PM            DataType
d----- 3/5/2018 7:19 PM            Endpoint
d----- 3/5/2018 7:19 PM            Filegroup
d----- 3/6/2018 9:04 AM            MessageType
d----- 3/5/2018 7:20 PM            Queue
d----- 3/5/2018 7:20 PM            Role
d----- 3/5/2018 7:20 PM            Schema
d----- 3/6/2018 9:04 AM            Service
d----- 3/5/2018 7:20 PM            Table
d----- 3/5/2018 7:20 PM            User
d----- 3/5/2018 7:20 PM            UserDefinedType
-a---- 3/5/2018 7:06 PM          0 [Microsoft.SqlServer.Types].sql

Directory: C:\temp\db\Schema

Mode   LastWriteTime        Length Name
----   -------------        ------ ----
-a---- 3/6/2018 9:11 AM         54 dbo.sql
-a---- 3/6/2018 9:11 AM         76 db_accessadmin.sql
-a---- 3/6/2018 9:11 AM         82 db_backupoperator.sql
-a---- 3/6/2018 9:11 AM         74 db_datareader.sql
-a---- 3/6/2018 9:11 AM         74 db_datawriter.sql
-a---- 3/6/2018 9:11 AM         70 db_ddladmin.sql
-a---- 3/6/2018 9:11 AM         82 db_denydatareader.sql
-a---- 3/6/2018 9:11 AM         82 db_denydatawriter.sql
-a---- 3/6/2018 9:11 AM         64 db_owner.sql
-a---- 3/6/2018 9:11 AM         80 db_securityadmin.sql
-a---- 3/6/2018 9:11 AM         58 guest.sql
-a---- 3/6/2018 9:11 AM         84 INFORMATION_SCHEMA.sql
-a---- 3/6/2018 9:11 AM         54 sys.sql

 

 

The entire script is posted in a gist:


param($dacpacPath = 'c:\dacpacPath', $dacpac = 'your.dacpac')
add-type -path 'C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\Microsoft.SqlServer.Dac.Extensions.dll'
cd $dacpacPath
$model = [Microsoft.SqlServer.Dac.Model.TSqlModel]::new(((get-item ".\$dacpac").fullname))
$queryScopes = [Microsoft.SqlServer.Dac.Model.DacQueryScopes]::All
$returnObjects = $model.GetObjects([Microsoft.SqlServer.Dac.Model.DacQueryScopes]::All)
$s = ''
foreach($r in $returnObjects)
{
    if ($r.TryGetScript([ref]$s))
    {
        $objectTypeName = $r.ObjectType.Name
        $d = "c:\temp\db\$objectTypeName"
        if(!(test-path $d))
        {
            new-item $d -ItemType Directory
        }
        $filename = "$d\$($r.Name.Parts).sql"
        if(!(test-path $filename))
        {
            Try
            {
                new-item $filename -ItemType File -ErrorAction Stop -Force
            }
            Catch
            {
                $url = "$($r.Name.Parts)"
                if ([system.uri]::IsWellFormedUriString($url, [system.urikind]::Absolute))
                {
                    $u = ([uri]"$url").Segments[-1]
                    $filename = "$d\$u.sql"
                    new-item $filename -ItemType File -ErrorAction Stop -Force
                }
            }
        }
        $s | out-file $filename -Force
        write-output $filename
    }
}



The Power of the Round Table (AZ PowerShell)

Last night we had some technical difficulties getting our user group's broadcast and speaker set up and going, so we had to come up with something good to talk about on the spot.

So what we ended up doing was what I like to call a round table. In this discussion everyone relayed some of their successes with PowerShell. This article is just a glimpse of some of the tidbits I was able to capture during the meeting.

If you are anything like me, you like things that help you remember a command you last typed, or you like to bring back a command from history, modify it slightly, and try again. One of the users last night showed us a wonderful trick in PowerShell: #(tab).

The best way to show this feature is to start with the Get-History cmdlet in PowerShell, which shows the last few items you've typed.


PSGit:\> Get-History

Id CommandLine
-- -----------
1 $env:COMPUTERNAME
2 h
3 $env:PSModulePath
4 $env:PSModulePath -split ';'
5 h
6 ($env:PSModulePath -split ';')[0]
7 ($env:PSModulePath -split ';')[1]
8 ($env:PSModulePath -split ';')[2]
9 h
10 get-process
11 h
12 get-service

I only show the history so there is context around what this little gem of a tip does. If I type #PS and then hit tab, I'll get each item from history that contains PS. Demonstrated below:

poundtip

Now for another tip that I found useful as well. Have you ever wanted to create a variable and see its value on the screen at the same time? This can be done with Tee-Object, but there is a much shorter method that one of the users in the AZ PowerShell User Group demonstrated:


PSGit:\> ($var = ($env:PSModulePath -split ';')[0])
C:\Users\crshn\Documents\WindowsPowerShell\Modules

PSGit:\> $var
C:\Users\crshn\Documents\WindowsPowerShell\Modules

Simply enclose your assignment in parentheses and the output goes both into your variable and to your screen.

We had a great discussion about this post: Merging hashtables. This spurred a discussion about a very cool means of copying an object intact to another object. The participant in the user group informed me that he'd share his code with me; when I receive it I'll add it to this post.

Lastly, another user demonstrated how they use data from the perfmon reliability counters that every Windows machine has. You can view those reliability counters through a simple command at your prompt:

perfmon /rel

It turns out these counters are exposed through WMI, and you can drill into them to help diagnose problems in your infrastructure through PowerShell and WMI.
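As a quick, hedged sketch (assuming the Win32_ReliabilityRecords class is available on your build of Windows), you can pull the most recent reliability records like this:

[code language="powershell"]
# Query the reliability records CIM class and show the newest entries.
Get-CimInstance -ClassName Win32_ReliabilityRecords |
    Sort-Object TimeGenerated -Descending |
    Select-Object -First 10 TimeGenerated, SourceName, EventIdentifier, ProductName
[/code]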

Here is a post from Richard Siddaway on how he used some of the items in the class that this provides:

https://richardspowershellblog.wordpress.com/2015/09/29/win32_reliabilityrecords-class

It was a very fun user group. If you are online or in the Phoenix area, drop by and we'll entertain you with a speaker or any of the great folks that attend.

 

Until then

Keep Scripting

 

Thom

UCS Director – Creating a Custom Workflow Task for PowerShell

If you are familiar with UCS Director then you know you can create custom workflow tasks for anything that is JavaScript based. I hadn't seen a means to do this for a PowerShell script, so this is how I figured it out.

Note: Without the help of my co-worker  Don Reilly the task would have never worked.  He was able to find the correct methods to call for Director. 

The first thing I did was to clone the built-in PowerShell task.

2018-01-05 09_11_59-Clipboard

That brings up a dialog that allows you to choose which existing task to clone from.

2018-01-05 09_15_01

Now that I have it cloned, I can look at the contents of the JavaScript to find out how to call my script. The script I chose to run is one from another community member; it gets the last error from the PowerShell agent. I took his script, saved it to my PSA server in the d:\director\powershell\director folder, and am using this custom workflow task to call it when there is an error.

On to the rest of the setup. Once I had cloned the task and put in the code that I needed, with the necessary inputs from the Director custom task, I found that the task wouldn't run. Researching further, I found that when you clone the task, the PowerShell task itself runs differently than any other custom task. You can see this behaviour in the screen shot below:

2018-01-05 09_23_22

So I then had to go to my resident expert Don Reilly, who helped me discover the right controller to add to my custom workflow task. I'll step through in pictures what my workflow in UCS Director 6.5 looks like:

2018-01-05 09_26_58

2018-01-05 09_28_50

The input for the task has one property whose values I chose to enter directly: in the LOV values for OutputFormat I entered the values XML and JSON.

2018-01-05 09_28_50-2

2018-01-05 09_28_50-3

2018-01-05 09_28_50-4

2018-01-05 09_28_50-5

The method added for the controller is before marshal, with the following calls to the cloupia libs:


importPackage(com.cloupia.feature.powershellAgentController.wftasks);

var agentPairs = PSAgentTabularLOV.getAllPowerShellAgentsLOV();
page.setEmbeddedLOVs(id + ".psAgent", agentPairs);

Here is what my script code looks like in the custom task (the PowerShell is highlighted in blue in the JavaScript).

For clarity, here is the script: $s = "D:\Director\powershell\director\Get-LastUCSDError.ps1"; if(test-path $s){. $s;}else{"Cannot find $s check path on PowerShell Agent Server"}


// Auto generated to code invoke following task
// Task Label:  Execute Native PowerShell Command
// Task Name:  Execute Native PowerShell Command
importPackage(java.lang);
importPackage(java.util);
importPackage(com.cloupia.model.cIM);
importPackage(com.cloupia.service.cIM.inframgr);

function Execute_Native_PowerShell_Command()
{
    var task = ctxt.createInnerTaskContext("Execute Native PowerShell Command");

    // Input 'Label', mandatory=true, mappableTo=
    task.setInput("Label", input.label);

    // Input 'PowerShell Agent', mandatory=true, mappableTo=gen_text_input
    task.setInput("PowerShell Agent", input.psAgent);

    // Input 'Hide Input in PSA, inframgr logs', mandatory=false, mappableTo=gen_text_input
    task.setInput("Hide Input in PSA, inframgr logs", input.isHideInput);

    // Input 'Hide Output in PSA, inframgr logs', mandatory=false, mappableTo=gen_text_input
    task.setInput("Hide Output in PSA, inframgr logs", input.isHideOutput);

    // Input 'Commands/Script', mandatory=true, mappableTo=gen_text_input
    //Changed the command to be hard coded to a script on the PowerShell Agent server
	var command = '$s = "D:\\Director\\powershell\\director\\Get-LastUCSDError.ps1"; if(test-path $s){. $s;}else{"Cannot find $s check path on PowerShell Agent Server"}';
    task.setInput("Commands/Script", command);

    // Input 'Commands/Rollback Script', mandatory=false, mappableTo=gen_text_input
    task.setInput("Commands/Rollback Script", '');

    // Input 'Output Format', mandatory=false, mappableTo=
    task.setInput("Output Format", input.outputFormat);

    // Input 'Depth', mandatory=true, mappableTo=
    task.setInput("Depth", input.depth);

    // Input 'Maximum Wait Time', mandatory=true, mappableTo=
    task.setInput("Maximum Wait Time", input.maxWaitTimeMinutes);

    // Now execute the task. If the task fails, then it will throw an exception
    task.execute();

    // Now copy the outputs of the inner task for use by subsequent tasks
    // Type of following output: gen_text_input
    output.POWERSHELL_NATIVE_COMMAND_RESULT = task.getOutput("POWERSHELL_NATIVE_COMMAND_RESULT");
}

// Invoke the task
Execute_Native_PowerShell_Command();

Monitor your Connection to Internet – PowerShell

Recently I’ve been having issues with my internet connection.  So I decided to Write a script to monitor my connection and then record how long my connection drops to my Internet Service Provider.

To start this process I had to make sure that I could ping the gateway of the adapter that I'm using to connect to the internet. So the first step was to find my IP address and gateway, which I was able to do using Get-NetIPConfiguration.


Get-NetIPConfiguration -InterfaceAlias 'vEthernet (ExternalSwitch)'

InterfaceAlias : vEthernet (ExternalSwitch)
InterfaceIndex : 11
InterfaceDescription : Hyper-V Virtual Ethernet Adapter
NetProfile.Name : Conn
IPv4Address : 192.168.1.12
IPv6DefaultGateway :
IPv4DefaultGateway : 192.168.1.1
DNSServer : 192.168.1.1

This told me my address and my gateway, so I put them in variables:


$IP = (Get-NetIPConfiguration -InterfaceAlias 'vEthernet (ExternalSwitch)').ipv4address.ipaddress

$gateway = (Get-NetIPConfiguration).ipv4defaultGateway.nexthop

Now that I have them in variables, I can begin the process of pinging both addresses. I chose to add this to a function:


function Start-ConnectionMonitoring
{
param($isp, $gateway, $Logfile,[int]$Delay = 10,[Ipaddress] $adapter, [switch]$ispPopup, [switch]$gateWayPopup)
$spacer = '--------------------------'
while($true)
{
if(!(Test-Connection $gateway -source $adapter -count 1 -ea Ignore))
{
get-date | Add-Content -path $Logfile
"$gateWay Connection Failure" |add-content -Path $Logfile
$outagetime = Start-ContinousPing -address $gateway -adapter $adapter -Delay $Delay
"Total Outage time in Seconds: $outageTime" | Add-Content -path $Logfile
if($gateWayPopup)
{
New-PopupMessage -location $gateway -outagetime $outagetime
}
$spacer |add-content -Path $Logfile
}
if((!(Test-Connection $isp -Source $adapter -count 1 -ea Ignore)) -and (Test-Connection $gateway -count 1 -ea Ignore))
{
get-date | Add-Content -path $Logfile
"$isp Connection Failure" | Add-Content -Path $Logfile
$outagetime = Start-ContinousPing -address $isp -adapter $adapter -Delay $Delay
"Total Outage time in Seconds: $outageTime" | Add-Content -path $Logfile
if($ispPopup)
{
New-PopupMessage -location $isp -outagetime $outagetime
}
$spacer|add-content -Path $Logfile
}
Start-Sleep -Seconds $Delay
}
}

This function calls two helper functions. I'll explain the first one, Start-ContinousPing. If the connection/ping to either the local router ($gateway) or the provider ($isp) fails, we call this function. It puts the ping/connection check in a loop until connectivity comes back, and when connectivity returns it passes back the number of seconds that we couldn't reach that resource.

The second function (New-PopupMessage) lets the user choose whether or not they get a popup when there is a period of no connectivity. If the -ispPopup switch is set, then when we have no connectivity to the ISP resource we'll get a popup indicating no connection and how long the connection was out.

Finally we’ll look at the contents of the log:

12/27/2017 4:59:29 PM
192.168.1.1 Connection Failure
Total Outage time in Seconds: 0.0380652
--------------------------
12/27/2017 4:59:33 PM
http://www.cox.com Connection Failure
Total Outage time in Seconds: 0.0353273
--------------------------

As you can see, the connection to my gateway was out for 0.038 seconds, and the connection to my provider, cox.com, was out for 0.035 seconds.

The entire script is located in this gist:


function Start-ConnectionMonitoring
{
    param($isp, $gateway, $Logfile, [int]$Delay = 10, [Ipaddress]$adapter, [switch]$ispPopup, [switch]$gateWayPopup)
    $spacer = '--------------------------'
    while($true)
    {
        if(!(Test-Connection $gateway -Source $adapter -Count 1 -ea Ignore))
        {
            get-date | Add-Content -Path $Logfile
            "$gateWay Connection Failure" | Add-Content -Path $Logfile
            $outagetime = Start-ContinousPing -address $gateway -adapter $adapter -Delay $Delay
            "Total Outage time in Seconds: $outageTime" | Add-Content -Path $Logfile
            if($gateWayPopup)
            {
                New-PopupMessage -location $gateway -outagetime $outagetime
            }
            $spacer | Add-Content -Path $Logfile
        }
        if((!(Test-Connection $isp -Source $adapter -Count 1 -ea Ignore)) -and (Test-Connection $gateway -Count 1 -ea Ignore))
        {
            get-date | Add-Content -Path $Logfile
            "$isp Connection Failure" | Add-Content -Path $Logfile
            $outagetime = Start-ContinousPing -address $isp -adapter $adapter -Delay $Delay
            "Total Outage time in Seconds: $outageTime" | Add-Content -Path $Logfile
            if($ispPopup)
            {
                New-PopupMessage -location $isp -outagetime $outagetime
            }
            $spacer | Add-Content -Path $Logfile
        }
        Start-Sleep -Seconds $Delay
    }
}
function Start-ContinousPing
{
    param($address, [ipaddress]$adapter, [int]$Delay = 10)
    $currentTime = get-date
    While(!(Test-Connection $address -Source $adapter -Count 1 -ea Ignore))
    {
        Sleep -Seconds $Delay
    }
    $outageTime = ((get-date) - $currentTime).TotalSeconds
    $outageTime
}
function New-PopupMessage
{
    param($location, $outagetime)
    $Popup = New-Object -ComObject Wscript.Shell
    $popup.popup("$location Failure - seconds: $outagetime ", 0, "$location", 0x1)
}
$Logfile = "c:\temp\connection.log"
$isp = 'http://www.cox.com'
if(!(test-path $Logfile))
{
    new-item -Path $Logfile
}
$IP = (Get-NetIPConfiguration -InterfaceAlias 'YourAdapterName').ipv4address.ipaddress
$gateway = (Get-NetIPConfiguration).ipv4defaultGateway.nexthop
Start-ConnectionMonitoring -isp $isp -gateway $gateway -Logfile $Logfile -adapter $IP -ispPopup -gateWayPopup

I hope this helps someone.

 

Until then

Keep Scripting

 

Thom

Merging hashtables

Hashtables in PowerShell are very useful and can be used for a bunch of things. Recently I had to use some code I found on Stack Overflow to merge hashtables. This post is about my experience and the really cool piece of code that iRon posted there.

First I need to log in to Azure and find my application:


add-azurermaccount

get-azurermresource

I see my resource in the Get-AzureRmResource output, so now I know that I can query for its app settings using this command:


$myapp = Get-AzureRmWebAppSlot -resourcegroupname myresourcegroup -name myresourcename -slot production

This produces an object that contains all of my web application settings in Azure for the app in question. The item I want to work on is .SiteConfig.AppSettings.

 

This portion of the object has the properties of the app settings shown in the Azure blade below:

Applicationsettings


PS C:\Users\me> $myApp.siteconfig.AppSettings

Name                         Value
----                         -----
WEBSITE_NODE_DEFAULT_VERSION 6.9.1

Now that I have the current version of what is in my application, I need to figure out how to put new settings in place without wiping out any existing settings. The cmdlet to do the addition is Set-AzureRmWebAppSlot. Looking through the help, I can see there is a parameter I can pass for the settings I want, called -AppSettings. Like most of the other settings, it requires a hashtable: [[-AppSettings] <Hashtable>]

However, $myApp.SiteConfig.AppSettings is a list:


$appSettings = $Myapp.siteconfig.appsettings
$appsettings -is [pscustomobject]
False
$appsettings.gettype()

IsPublic IsSerial Name   BaseType
-------- -------- ----   --------
True     True     List`1 System.Object

$appsettings -is [hashtable]
False

This means I need to convert my object from a List`1 to a hashtable, so I'll iterate through it and create a hashtable:


$appSettingsHash = @{}
foreach($k in $appSettings) { $appSettingsHash[$k.name] = $k.value }
$appsettingshash

Name                         Value
----                         -----
WEBSITE_NODE_DEFAULT_VERSION 6.9.1

$appsettingshash -is [hashtable]

True

OK, now that I have my current settings in a hashtable, I need to build another hashtable of the entries I want to add and then post the merged result.


$appSettings ='{"AppSettings:testkey1": "45test","AppSettings:TestId": "This is a Test Key 28"}'

$newAppSettings = $appSettings | convertfrom-json 

$newAppSettingsHash = @{}
 $newAppSettings.psobject.properties | ForEach-Object { $newAppSettingsHash[$_.Name] = $_.Value }

$newappsettingsHash -is [hashtable]
True

This is where the magic of iRon's script comes into play. Since I need to use this in a deployment from TFS, I created the new settings in JSON format first and then converted the JSON to a [hashtable]. Then I call iRon's script with $newAppSettingsHash and $appSettingsHash. Now I have a merged hashtable that I can use to update my application.


Function Merge-Hashtables([ScriptBlock]$Operator) {
$Output = @{}
ForEach ($Hashtable in $Input) {
If ($Hashtable -is [Hashtable]) {
ForEach ($Key in $Hashtable.Keys) {$Output.$Key = If ($Output.ContainsKey($Key)) {@($Output.$Key) + $Hashtable.$Key} Else {$Hashtable.$Key}}
}
}
If ($Operator) {ForEach ($Key in @($Output.Keys)) {$_ = @($Output.$Key); $Output.$Key = Invoke-Command $Operator}}
$Output
}

$hashtable = $newAppSettingsHash, $appSettingsHash | Merge-Hashtables {$_[0]} 
$results = Set-AzureRmWebAppSlot -AppSettings $hashtable -name $website -ResourceGroupName $resourceGroup -slot $slot
$r = $results.SiteConfig.AppSettings
Write-Output $r

The really cool thing about the Merge-Hashtables function is that you can merge more than two hashtables. See this comment from iRon about how it works:

 

hashtablemerge
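As an illustrative sketch (made-up keys and values, and it assumes the Merge-Hashtables function above is already loaded), merging three hashtables where the left-most value wins on duplicate keys looks like this:

[code language="powershell"]
$first  = @{ color = 'red';   size   = 'small' }
$second = @{ color = 'blue';  shape  = 'round' }
$third  = @{ color = 'green'; weight = 10 }

# The {$_[0]} operator picks the first value collected for each duplicate key,
# so 'color' resolves to 'red' from $first.
$first, $second, $third | Merge-Hashtables { $_[0] }
[/code]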

Full code for this merge-hashtables function run against an Azure application is below:


param($websitename = 'TEst' ,$resourceGroup = 'SchuTest',$slot = 'production', $appSettings ='{"AppSettings:testkey1": "45test","AppSettings:TestId": "This is a Test Key 28"}')
#https://stackoverflow.com/questions/8800375/merging-hashtables-in-powershell-how
Function Merge-Hashtables([ScriptBlock]$Operator) {
$Output = @{}
ForEach ($Hashtable in $Input) {
If ($Hashtable -is [Hashtable]) {
ForEach ($Key in $Hashtable.Keys) {$Output.$Key = If ($Output.ContainsKey($Key)) {@($Output.$Key) + $Hashtable.$Key} Else {$Hashtable.$Key}}
}
}
If ($Operator) {ForEach ($Key in @($Output.Keys)) {$_ = @($Output.$Key); $Output.$Key = Invoke-Command $Operator}}
$Output
}
try {
foreach($website in $websiteName)
{
ConvertFrom-Json $appSettings -ErrorAction Stop
#it is expected that the app settings is a string representation of a hashtable written in JSON, so that it can be converted to a PowerShell hashtable at runtime
$newAppSettings = $appSettings | convertfrom-json
$newAppSettingsHash = @{}
$newAppSettings.psobject.properties | ForEach-Object { $newAppSettingsHash[$_.Name] = $_.Value }
$Application = get-azurermwebappslot -Name $website -ResourceGroupName $resourceGroup -Slot $slot
$ExistingSettings = $Application.siteconfig.AppSettings
$appSettingsHash = @{}
foreach($k in $ExistingSettings)
{
$appSettingsHash[$k.name] = $k.value
}
#https://stackoverflow.com/questions/8800375/merging-hashtables-in-powershell-how
$hashtable = $newAppSettingsHash, $appSettingsHash | Merge-Hashtables {$_[0]}
$results = Set-AzureRmWebAppSlot -AppSettings $hashtable -name $website -ResourceGroupName $resourceGroup -slot $slot
$r = $results.SiteConfig.AppSettings
Write-Output $r
}
}
catch
{
Write-Error "$appsettings must be in JSON format"
}

I hope that when you need to merge hashtables this article makes it a  bit easier for you.

 

Until then keep scripting

 

thom

Uploading files to Azure Applications (kudu)

I needed to copy some content to my Azure application that the build and deploy process I constructed for it wouldn't need to copy on every deploy. So my quest began: how do I upload files to an Azure application? The most common and recognized way of uploading files to Azure applications is through Web Deploy. I didn't think I needed to package things up and use Web Deploy, so I sought out a way to do this with PowerShell. This post is about that pursuit.

Thanks to this article, most of the work was already done: Copy files to Azure Web App with PowerShell and Kudu API. All I needed to do was put a loop around my file upload and use Octavie van Haaften's scripts.

So I started with get-childitem -recurse "$downloadFolder\content". Now that I had my content in a variable called $files, I could put it in a foreach loop and use Octavie van Haaften's Upload-FileToWebApp.

During the upload of the files I need to determine if the file from my local disk is a File or Directory.  I used the following classes to determine this:

[System.IO.DirectoryInfo] &  [System.IO.FileInfo]

If the item was a directory then I had to make the upload location match the location on disk. I did this with a little bit of replacement logic, using $kudufolder as the variable I pass to Octavie's upload function.


$kudufolder = ((($file.FullName).Replace($uploadfrom,'Content'))`
.replace('\','/')).trimstart('/')
$kudufolder = "$kudufolder/"
Upload-FileToWebApp -resourceGroupName myresourcegroup`
-webAppName mywebapp -kuduPath $kudufolder

The same holds true for the upload of a file. The only difference between a file and a directory is the trailing /: when you are uploading/creating an item, a trailing / tells Kudu it is a directory.


$kudufile = ((($file.FullName).Replace($uploadfrom,'Content'))`
.replace('\','/')).trimstart('/')
Upload-FileToWebApp -resourceGroupName myresourcegroup`
-webAppName mywebapp -localPath $file.FullName -kuduPath $kudufile

Here is the full script in the foreach loop with each check for a directory or file.


$downloadfolder = 'c:\temp\myAzureStorage'

$uploadfrom = "$downloadfolder\Content"

$files = get-childitem -Recurse "$downloadfolder\Content"

foreach($file in $files)
{
if($file -is [System.IO.DirectoryInfo])
{
$kudufolder = ((($file.FullName).Replace($uploadfrom,'Content')).replace('\','/')).trimstart('/')
$kudufolder = "$kudufolder/"
Upload-FileToWebApp -resourceGroupName myresourcegroup -webAppName mywebapp -kuduPath $kudufolder
}
elseif($file -is [System.IO.FileInfo])
{
$kudufile = ((($file.FullName).Replace($uploadfrom,'Content')).replace('\','/')).trimstart('/')
Upload-FileToWebApp -resourceGroupName myresourcegroup -webAppName mywebapp -localPath $file.FullName -kuduPath $kudufile
}
}


I hope this helps someone
Until then keep Scripting
Thom


 

Azure File Storage Download

If you have an Azure account, you may want to download files out of Azure storage, either individually or as a whole folder. This post is about how I was able to do that with PowerShell.

First we need to log in to Azure and get a storage context. The storage context will require a key.


Add-AzureRmAccount -credential (get-credential) -tenantid yourid

$key = (get-azurermstorageAccountkey -resourcegroupname myresourcegroup -name mystorageaccountName | where-object{$Psitem.keyname -eq 'key1'}).value

$storageContext = New-AzureStorageContext -StorageAccountName "mystorage" -StorageAccountKey $key

Now that we have the storage context and key, we need to find the files that are in our Azure File storage.


$content = get-azurestoragefile -storageaccountname "mystorage" -storageAccountkey $key

If we look at the contents of our $content variable we should see something similar to this:

files

Now that we have the content in a variable, we can begin figuring out how to download each file. To download a single file we need to use Get-AzureStorageFileContent:


$content = get-azurestoragefile -storageaccountname "mystorage" -storageAccountkey $key

get-azurestoragefilecontent -sharename "myshare" -path ($content[0].uri.localpath -replace "$($content[0].share.name)/",'') -destination "c:\temp\" -context $storageContext

After much trial and error I found that in the object you get back from Azure there are two different Object types that you must check for:

Microsoft.WindowsAzure.Storage.File.FileDirectoryProperties

and the other type is:

Microsoft.WindowsAzure.Storage.File.CloudFile

By the class names you can see that one is a file and the other is a directory. With that in mind, I can put this in a function that recursively calls itself to get all the contents.


function Get-AzureFiles
{
param([string]$shareName = 'mystorage', [object]$storageContext, [string]$downloadFolder, [string]$path)
$content = get-azurestoragefile -sharename $sharename -Context $storagecontext -path $path| Get-AzureStorageFile

foreach($c in $content)
{
$Parentfolder = $c.uri.segments[($c.uri.segments.count -2)] -replace '/','\'

if(!(test-path $destination))
{mkdir $destination}
$p = $c.uri.LocalPath -replace "$($c.share.name)/" ,''
if(Get-AzureStorageFile -ShareName $c.share.name -path $p -Context $storageContext )
{
if($c.properties -is [Microsoft.WindowsAzure.Storage.File.FileDirectoryProperties])
{
$d = [Microsoft.WindowsAzure.Storage.File.CloudFileDirectory]::new($c.uri.AbsoluteUri)
#Get-AzureStorageFileContent -directory $d -ShareName $c.share.name -Destination "$destination$($c.name)" -Context $storageContext #-Path $p
$path = $d.Uri.LocalPath -replace "/$sharename/" , ""
$dest = $path -replace '/','\'
"$($c.name) is a directory -- getting $downloadfolder\$dest files"
if(!("$downloadfolder\$dest"))
{mkdir "$downloadfolder\$dest"}
Get-AzureFiles -shareName $shareName -storageContext $storageContext -path $path -downloadFolder "$downloadFolder\$dest"
}
elseif($c -is [Microsoft.WindowsAzure.Storage.File.CloudFile])
{
Write-Output "downloading --- $destination$($c.name)"
$destination = (($c.Uri.LocalPath -replace "/$sharename/" , "") -replace '/','\')
$dest = "$downloadFolder\$destination"
$dest
$de = $dest -replace $c.name, ""
if(!(test-path $de))
{
mkdir $de
}
if(!(test-path $dest))
{Get-AzureStorageFileContent -ShareName $c.share.name -Path $p -Destination $dest -Context $storageContext }# -WhatIf}
else
{
Write-Output "already downloaded --- $dest"
}
}
}
}
}

Now if we call the function, we'll get a downloaded copy of all the folders from the location you specify in Azure to your local host:

get-AzureFiles -sharename "myShare" -storageContext $storageContext -downloadfolder "C:\temp\azurefiles"

There you go: now you have your files from Azure in a local directory. Stay tuned for my next article, where I'll show you how to upload these same files to an Azure application (Kudu).


I hope this helps someone
Until then keep Scripting
Thom


 

Finding 500,401 Errors in IIS logs

If you’ve  ever had to troubleshoot issues with IIS you’ll known that you are often drawn to looking at IIS Logs.    This post is about a script that I modified to search IIS logs for specific errors.

I found this article on Stack Overflow that got me started down this path.

To begin with, we need to import the WebAdministration module so we can get the site whose log file we want to read:


Import-Module WebAdministration

$site = Get-Item IIS:\Sites\$website
$id = $site.id
$logdir = "$($site.logfile.directory)\w3svc$id"

Now that we have the $logdir (log directory), we can put the rest of the file name together by getting the date:


$File = "$logdir\u_ex$(((get-date).adddays(-$days)).ToString("yyMMdd")).log"

Assuming you are using one log file per day, the name of the log file is:

u_ex(yyMMdd)  which should be something similar to this: u_ex170824.log

Now that we have our log file, we need to strip off the unnecessary lines: specifically the first few lines that start with #S, #D, or #V, which are the Software, Version, and Date items at the top of the log.

Then we'll need to build the columns based on the #Fields value in the log. That way each field in the results can be arranged into a column, so we can sort our data based on the values in each field.

The additional column we’ll add is the name of the log file.
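For reference, here is a hedged sketch of what that transformation does to a typical W3C #Fields header line (a shortened, made-up sample; your server's field list will vary):

[code language="powershell"]
# A sample #Fields line just to illustrate how the column names are derived.
$sample = '#Fields: date time s-ip cs-method cs-uri-stem sc-status time-taken'
(($sample.TrimEnd()) -replace "#Fields: ", "" -replace "-", "" -replace "\(", "" -replace "\)", "").Split(" ")
# Produces: date, time, sip, csmethod, csuristem, scstatus, timetaken
[/code]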


$Log = Get-Content $File | where {$_ -notLike "#[D,S-V]*" }

$Columns = (($Log[0].TrimEnd()) -replace "#Fields: ", "" -replace "-","" -replace "\(","" -replace "\)","").Split(" ")

$Columns += 'LogFile'

$Count = $Columns.Length

Now that we have the column titles, it's time to filter the log and only get the lines that have the value we want. In my case I was searching for the error code 500.


$Rows = $Log | where {$_ -like "*$errorType 0 0*"}

Now that we have all the data we want in the $Rows variable, we can construct a table of columns and rows.


$IISLog = New-Object System.Data.DataTable "IISLog"

foreach ($Column in $Columns) {
$NewColumn = New-Object System.Data.DataColumn $Column, ([string])
$IISLog.Columns.Add($NewColumn)
}
# Loop Through each Row and add the Rows.
foreach ($Row in $Rows) {
$Row = $Row.Split(" ")
$AddRow = $IISLog.newrow()
for($i=0;$i -lt $Count; $i++) {
$ColumnName = $Columns[$i]
if($ColumnName -eq 'LogFile')
{$AddRow.$ColumnName = $file }
else {$AddRow.$ColumnName = $Row[$i]}
}
$IISLog.Rows.Add($AddRow)
}

$IISLog

Full function is below:


  function get-ErrorLogs
  {
  param($website = 'myWebSite', $errorType = '500',[int] $days =0)

    Import-Module WebAdministration

    $site = Get-Item IIS:\Sites\$website
    $id = $site.id
    $logdir = "$($site.logfile.directory)\w3svc$id"

    $File = "$logdir\u_ex$(((get-date).adddays(-$days)).ToString("yyMMdd")).log"
    $Log = Get-Content $File | where {$_ -notLike "#[D,S-V]*" }
    $Columns = (($Log[0].TrimEnd()) -replace "#Fields: ", "" -replace "-","" -replace "\(","" -replace "\)","").Split(" ")
    $Columns += 'LogFile'
    $Count = $Columns.Length
    $Rows = $Log | where {$_ -like "*$errorType 0 0*"}
    $IISLog = New-Object System.Data.DataTable "IISLog"
    foreach ($Column in $Columns) {
      $NewColumn = New-Object System.Data.DataColumn $Column, ([string])
      $IISLog.Columns.Add($NewColumn)
    }
    foreach ($Row in $Rows) {
      $Row = $Row.Split(" ")
      $AddRow = $IISLog.newrow()
      for($i=0;$i -lt $Count; $i++) {
        $ColumnName = $Columns[$i]
        if($ColumnName -eq 'LogFile')
        {$AddRow.$ColumnName = $file }
        else {$AddRow.$ColumnName = $Row[$i]}
      }
      $IISLog.Rows.Add($AddRow)
      }

    $IISLog
  }

Now that I have this in a full function I can just call it like this:

get-ErrorLogs -website "Website2" -errorType 500 -days 4

And get results similar to this:

500


I hope this helps someone
Until then keep Scripting
Thom


Just Hash It

I have been looking high and low for a good means to compare one variable to another and do it quickly. In my search I found this article on Stack Overflow, which led me to create a function that compares one variable to another and gives a simple $true or $false answer depending on whether they are the same. This article explains that concept.

To start with, I need to create a function block and pass two parameters: the item I'm using as the reference, $reference, and the item/variable I'll use as the difference, $difference.


function Compare-Variables
{
param([string]$Reference, [string]$difference)

}

Using the example from the post on Stack Overflow, I need objects for the text encoding (System.Text.UTF8Encoding) and the hashing (System.Security.Cryptography.MD5CryptoServiceProvider), plus System.BitConverter to render the hash.

Working from the object through to the comparison, here is what takes place.

Step 1: Take the contents of each variable and turn them into JSON using ConvertTo-Json:

$ref = $reference.CacheValue | ConvertTo-Json -Depth 100
$diff = $difference.CacheValue | ConvertTo-Json -Depth 100 

Now that I have each one as JSON (so long as the object isn't nested more than 100 levels deep), I have the entire variable and its child objects captured in the $ref and $diff variables.

Step 2: Since I have those in variables, I can get the bytes for each by calling the UTF8 GetBytes method. Using a value of 'test' you'll see that I get back a byte for each character:


$utf8 = [System.Text.UTF8Encoding]::new()

$utf8.GetBytes('test')
116
101
115
116 

Step 3: Now that my variable is in bytes, I can compute a hash for those bytes with System.Security.Cryptography.MD5CryptoServiceProvider:


$md5 = [System.Security.Cryptography.MD5CryptoServiceProvider]::new()
$md5.ComputeHash($utf8.GetBytes('test'))
9
143
107
205
70
33
211
115
202
222
78
131
38
39
180
246 

Step 4: Now that I have my computed hash, I can convert it into a readable MD5 sum with System.BitConverter.

[System.BitConverter]::ToString($md5.ComputeHash($utf8.GetBytes('test')))
09-8F-6B-CD-46-21-D3-73-CA-DE-4E-83-26-27-B4-F6
 

Step 5: Now that I have an MD5 string for both of my variables, I can simply ask if the two hashes are equal ($hashref -eq $hashdif) and get an answer of $true or $false.

Completed Script is below:

function Compare-Variables
{
 param([object]$Reference, [object]$difference, [int]$objectDepth = 2)
 $utf8 = [System.Text.UTF8Encoding]::new()
 $match = $false
 $md5 = [System.Security.Cryptography.MD5CryptoServiceProvider]::new()
 $ref = $reference | ConvertTo-Json -Depth $objectDepth
 $diff = $difference | ConvertTo-Json -Depth $objectDepth
 $hashref = [System.BitConverter]::ToString($md5.ComputeHash($utf8.GetBytes($ref)))
 $hashdif = [System.BitConverter]::ToString($md5.ComputeHash($utf8.GetBytes($diff)))
 $match = $hashref -eq $hashdif
 $hashref = $diff = $ref = $utf8 = $md5 = $null
 $match
}
 

Testing this Function:

Simple test with just text: now I can call this function and get $true if the variables match and $false if they don't.

$a = 'test'
$b = 'test'
Compare-Variables -Reference $a -difference $b
True
$a = 'test2'
$b = 'test'
Compare-Variables -Reference $a -difference $b
False

Test with an object from a REST API: now let's try something that we know will have a fair amount of data in it, GitHub's REST API:

$a = Invoke-RestMethod -uri http://api.github.com
$b = Invoke-RestMethod -Uri http://api.github.com
Compare-Variables -Reference $a -difference $b
True
$a = Invoke-RestMethod -Uri http://api.github.com/emojis
$b = Invoke-RestMethod -uri http://api.github.com
 Compare-Variables -Reference $a -difference $b
False

I hope this helps someone

Until then keep Scripting

Thom

 

 

 

What the Null??

Recently I’ve been working on some code for Querying schedules for SSRS.  I discovered the way that PowerShell passes a null to another function isn’t what the SSRS method expected.

So this started me wondering: what the heck is PowerShell's $null really set to?

Based on this blog article we can see I’m not the only one that has this question.   Cody Konior uncovered other ways to declare null:

If you test each of these against PowerShell’s null you get a false:

whattheNull

If we use some of the other comparisons maybe we’ll get to what $null is really set to:

PS ps:\> $b = $null
PS ps:\> [string]::IsNullOrWhiteSpace($b)
True
PS ps:\>

These evaluate to the value you'd expect: $true. I certainly don't know the language as well as Kirk Munro (@Poshoholic), and he pointed me to a class that I could compare against PowerShell's null, and it came up true:

[System.Management.Automation.Internal.AutomationNull]::Value

In a blog post about an issue around null, it's explained this way by Jason Shirk (@lzybkr):

ShirkNull

Now I can test for PowerShell's null, and this explains why $null is not equal to the C# equivalent.

$Null -eq [System.Management.Automation.Internal.AutomationNull]::Value

True

There is yet more detail on why $null is different in PowerShell (a more detailed example), and a quick sketch of the practical difference is below.
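Here is a minimal, hedged sketch (PowerShell 3.0 and later) of how the "no output" value behaves differently from a plain $null when sent down the pipeline:

[code language="powershell"]
# A pipeline that produces no output stores AutomationNull in the variable.
$fromPipeline = @() | Where-Object { $false }
$plainNull    = $null

# Both compare equal to $null...
$null -eq $fromPipeline   # True
$null -eq $plainNull      # True

# ...but only the plain $null is actually sent down a pipeline as an item.
$plainNull    | ForEach-Object { 'plain $null runs this once' }
$fromPipeline | ForEach-Object { 'AutomationNull sends nothing; this never runs' }
[/code]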

Moral of the story: if you are calling a method that expects a null, make certain you pass the right null for the method you are calling.

 


I hope this helps someone

Until then keep Scripting

Thom